org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 9, ukfhpdbivp12.uk.experian.local): org.apache.spark.SparkException: Task failed while writing rows.

Stack Overflow | user2606255 | 1 year ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1. Spark Exception : Task failed while writing rows

     Stack Overflow | 1 year ago | user2606255
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1.0 (TID 9, ukfhpdbivp12.uk.experian.local): org.apache.spark.SparkException: Task failed while writing rows.
  2. [SPARK-3365] Failure to save Lists to Parquet - ASF JIRA

     apache.org | 2 years ago
     java.lang.ArithmeticException: / by zero

    Root Cause Analysis

    java.lang.ArithmeticException: / by zero
      at parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:101)
      at parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:94)
      at parquet.hadoop.ParquetRecordWriter.<init>(ParquetRecordWriter.java:64)
      at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:282)
      at parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:252)
      at org.apache.spark.sql.parquet.ParquetOutputWriter.<init>(newParquet.scala:83)
      at org.apache.spark.sql.parquet.ParquetRelation2$$anon$4.newInstance(newParquet.scala:229)
      at org.apache.spark.sql.sources.DefaultWriterContainer.initWriters(commands.scala:470)
      at org.apache.spark.sql.sources.BaseWriterContainer.executorSideSetup(commands.scala:360)
      at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.org$apache$spark$sql$sources$InsertIntoHadoopFsRelation$$writeRows$1(commands.scala:172)
      at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation$$anonfun$insert$1.apply(commands.scala:160)
      at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation$$anonfun$insert$1.apply(commands.scala:160)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
      at org.apache.spark.scheduler.Task.run(Task.scala:70)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)