org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.UnsupportedOperationException: Can't read accumulator value in task

Stack Overflow | Vidya Shree | 7 months ago
Finding the Max value in JavaPairDStream

Root Cause Analysis

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.UnsupportedOperationException: Can't read accumulator value in task
	at org.apache.spark.Accumulable.value(Accumulators.scala:98)
	at sample.sample.SampleJdd$2.call(SampleJdd.java:82)
	at sample.sample.SampleJdd$2.call(SampleJdd.java:74)
	at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:996)
	at org.apache.spark.util.collection.ExternalSorter$$anonfun$5.apply(ExternalSorter.scala:200)
	at org.apache.spark.util.collection.ExternalSorter$$anonfun$5.apply(ExternalSorter.scala:199)
	at org.apache.spark.util.collection.AppendOnlyMap.changeValue(AppendOnlyMap.scala:138)
	at org.apache.spark.util.collection.SizeTrackingAppendOnlyMap.changeValue(SizeTrackingAppendOnlyMap.scala:32)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:205)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:56)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:64)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
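The trace pinpoints the problem: `Accumulable.value()` is called at SampleJdd.java:82, inside a function that runs as part of a shuffle task on an executor. Spark accumulators are write-only from tasks; only the driver may read `value()`, which is exactly what the `UnsupportedOperationException: Can't read accumulator value in task` message says. Below is a minimal sketch of the wrong and the right pattern, assuming the Spark 1.x Java API (which matches the `Accumulator` class and line numbers in the trace); the class and variable names (`AccumulatorSketch`, `counter`, `pairs`) are illustrative, not from the original code, and the snippet requires Spark on the classpath to compile.

```java
import org.apache.spark.Accumulator;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class AccumulatorSketch {
    public static void run(JavaSparkContext sc, JavaPairRDD<String, Integer> pairs) {
        Accumulator<Integer> counter = sc.accumulator(0);

        // WRONG (the pattern the trace suggests at SampleJdd.java:82):
        // reading the accumulator inside a task function throws
        // UnsupportedOperationException, because tasks may only add().
        // pairs.reduceByKey((a, b) -> Math.max(a, counter.value()));  // throws

        // RIGHT: tasks only call add(); the driver reads value() afterwards.
        pairs.foreach(t -> counter.add(1));
        System.out.println("pairs seen: " + counter.value());  // driver side only

        // For the original goal (a maximum over the data), an accumulator is
        // not needed at all; a reduce over the pairs does it directly:
        Tuple2<String, Integer> max =
            pairs.reduce((a, b) -> a._2() >= b._2() ? a : b);
        System.out.println("max: " + max);
    }
}
```

For a `JavaPairDStream`, the same reduce can be applied per batch via `foreachRDD`, keeping all `value()` reads in driver-side code.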