org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 5, averie001-edt-loc): java.lang.IllegalStateException: unread block data
    at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1382)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:160)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
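The executor dies inside JavaSerializerInstance.deserialize while unpacking the task binary, so the bytes sent by the driver do not line up with the classes on the worker's classpath. In practice that usually means a version or classpath mismatch between the driver and the cluster: different Spark builds, different Hadoop client jars, or an assembly jar that bundles its own copy of Spark. A minimal sketch of the usual first fix, assuming an sbt build; the Scala and Spark versions below are placeholders, not taken from this report:

    // build.sbt sketch (hypothetical project): compile against the exact
    // Spark version the cluster runs, and mark it "provided" so the
    // assembly jar does not ship a second, conflicting copy of Spark.
    scalaVersion := "2.10.4"  // placeholder; match the cluster's Scala build
    libraryDependencies +=
      "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"  // placeholder version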

There are no Samebug tips for this exception yet. Do you know how to solve it? A short tip would help other users who recently ran into this issue.
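One hedged tip, since the usual suspects here are environmental rather than in user code: verify that the driver and every worker run the same Spark and Java versions, and make sure the application jar actually reaches the executors. A sketch using standard SparkConf settings; the host name, jar path, and CDH classpath below are hypothetical placeholders, not taken from this report:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: ship the application jar to the executors and point them at
    // the same Hadoop client jars the driver uses, so task deserialization
    // sees matching classes on both sides.
    val conf = new SparkConf()
      .setAppName("unread-block-data-check")
      .setMaster("spark://master-host:7077")                     // placeholder master URL
      .setJars(Seq("target/scala-2.10/myapp-assembly-0.1.jar"))  // ship the app jar
      .set("spark.executor.extraClassPath",
           "/opt/cloudera/parcels/CDH/lib/hadoop/*")             // hypothetical CDH path
    val sc = new SparkContext(conf)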

  • Re: Zeppelin with Stratio DeepSpark
    by Kevin Kim (Sangwoo)
  • Having problems with model training in v0.8.0
    by Satish Ayyaswami
  • Spark cluster computing framework
    by an unknown author
  • Re: Executor Lost Failure
    by an unknown author
  • Problem running ADAM on CDH5
    by Jaeki Hong
  • an error on Mesos
    via GitHub by jsongcse

Users with the same issue

  • johnxfly (1 time)
  • tyson925 (3 times)
  • Nikolay Rybak (1 time)
  • meneal (1 time)
  • Unknown visitor (1 time)
  • 19 more bugmates