org.apache.spark.SparkException: Job aborted due to stage failure: Task 21.0:0 failed 4 times, most recent failure: Exception failure in TID 34 on host krbda1anode01.kr.test.com: scala.MatchError: 2.0 (of class java.lang.Double)
	org.apache.spark.mllib.tree.DecisionTree$.classificationBinSeqOp$1(DecisionTree.scala:568)
	org.apache.spark.mllib.tree.DecisionTree$.org$apache$spark$mllib$tree$DecisionTree$$binSeqOp$1(DecisionTree.scala:623)
	org.apache.spark.mllib.tree.DecisionTree$$anonfun$4.apply(DecisionTree.scala:657)
	org.apache.spark.mllib.tree.DecisionTree$$anonfun$4.apply(DecisionTree.scala:657)
	scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
	scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
	scala.collection.Iterator$class.foreach(Iterator.scala:727)
	scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
	scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
	scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
	scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
	org.apache.spark.rdd.RDD$$anonfun$21.apply(RDD.scala:838)
	org.apache.spark.rdd.RDD$$anonfun$21.apply(RDD.scala:838)
	org.apache.spark.SparkContext$$anonfun$23.apply(SparkContext.scala:1116)
	org.apache.spark.SparkContext$$anonfun$23.apply(SparkContext.scala:1116)
	org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
	org.apache.spark.scheduler.Task.run(Task.scala:51)
	org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
	java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	java.lang.Thread.run(Thread.java:745)
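The failing frame is `classificationBinSeqOp` in MLlib's `DecisionTree`, which pattern-matches on each example's label. For binary classification it only expects labels `0.0` and `1.0`, so a label of `2.0` falls through the match and throws `scala.MatchError`. The sketch below is not MLlib's actual code; it is an illustrative stand-in (the `binIndex` helper and the sample labels are hypothetical) showing the failure mode and the usual fix of remapping labels to the 0-based encoding MLlib expects:

```scala
// Illustrative stand-in (NOT MLlib's real implementation) for the
// binary-label match inside DecisionTree.classificationBinSeqOp.
object LabelEncoding {
  def binIndex(label: Double): Int = label match {
    case 0.0 => 0
    case 1.0 => 1
    // No case for any other value: passing 2.0 raises scala.MatchError,
    // exactly as in the stack trace above.
  }

  def main(args: Array[String]): Unit = {
    val rawLabels = Seq(1.0, 2.0, 1.0, 1.0, 2.0) // data labelled 1/2
    // Fix: remap labels to 0.0 .. numClasses-1 before building LabeledPoints.
    val remapped = rawLabels.map(_ - 1.0)
    println(remapped.mkString(",")) // 0.0,1.0,0.0,0.0,1.0
    println(remapped.map(binIndex).sum) // 2 (number of positive examples)
  }
}
```

If the data genuinely has more than two classes, the alternative is to train with a `numClasses` large enough to cover every label value (e.g. via `DecisionTree.trainClassifier` in later MLlib releases), rather than remapping down to two classes.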

Solutions on the web

via spark-user by jake Lim, 1 year ago
Job aborted due to stage failure: Task 21.0:0 failed 4 times, most recent failure: Exception failure in TID 34 on host krbda1anode01.kr.test.com: scala.MatchError: 2.0 (of class java.lang.Double) org.apache.spark.mllib.tree.DecisionTree
via nabble.com by Unknown author, 2 years ago
Job aborted due to stage failure: Task 1 in stage 7.0 failed 1 times, most recent failure: Lost task 1.0 in stage 7.0 (TID 13, localhost): java.lang.ArrayIndexOutOfBoundsException: 6301 org.apache.spark.mllib.tree.DecisionTree
via scalaclass.com by Unknown author, 2 years ago
Job aborted due to stage failure: Task 0.0:0 failed 1 times, most recent failure: Exception failure in TID 0 on host localhost: A worker violation occurred: Bad number detected. SparkExceptions$.func(SparkExceptions.scala:26
via Stack Overflow by Jason, 2 years ago
Job aborted due to stage failure: Task 0.0:0 failed 1 times, most recent failure: Exception failure in TID 0 on host localhost: A worker violation occurred: Bad number detected. SparkExceptions$.func(SparkExceptions.scala:26
via Stack Overflow by Unknown author, 2 years ago
Job aborted due to stage failure: Task 0.0:9 failed 4 times, most recent failure: Exception failure in TID 17 on host vllxbd621node07.scif.com: java.lang.ClassCastException: org.apache.avro.generic.GenericData$Record cannot be cast to
via Stack Overflow by Unknown author, 2 years ago
Job aborted due to stage failure: Task 3 in stage 0.0 failed 1 times, most recent failure: Lost task 3.0 in stage 0.0 (TID 3, localhost): java.lang.ClassCastException: java.util.LinkedHashMap cannot be cast to $iwC$$iwC$Person $iwC$$iwC$$iwC
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

