Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by Pari Margu, 1 year ago
Job aborted due to stage failure: Task 0 in stage 229.0 failed 1 times, most recent failure: Lost task 0.0 in stage 229.0 (TID 1625, localhost): scala.MatchError: -39.099998474121094 (of class java.lang.Double)
via GitHub by zlosim, 1 year ago
Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, dwh-mapr-dev-01): scala.MatchError: Buffer(_default) (of class scala.collection.convert.Wrappers$JListWrapper)
via Stack Overflow by user2895589, 7 months ago
Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, lrdne22tpapu1.corp.bankofamerica.com): scala.MatchError: [Ljava.lang.Object;@355780d6 (of class [Ljava.lang.Object;)
via Stack Overflow by Luckylukee, 11 months ago
Job aborted due to stage failure: Task 117 in stage 26.0 failed 4 times, most recent failure: Lost task 117.3 in stage 26.0 (TID 3249, AUPER01-01-20-08-0.prod.vroc.com.au): scala.MatchError: 1435766400001 (of class java.lang.Long)
via GitHub by Rushi-Kumar, 11 months ago
Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): scala.MatchError: 42.3626 (of class java.lang.Float)
via GitHub by Rushi-Kumar, 11 months ago
Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): scala.MatchError: 12.9833 (of class java.lang.Float)
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 229.0 failed 1 times, most recent failure: Lost task 0.0 in stage 229.0 (TID 1625, localhost): scala.MatchError: -39.099998474121094 (of class java.lang.Double)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:294)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
	at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
	at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
	at scala.collection.AbstractIterator.to(Iterator.scala:1157)
	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
	at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1863)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1863)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(Thread.java:785)
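All of the results above share the same root cause: the DataFrame schema declares a field as one Catalyst type (here StringType), while the underlying Row actually carries a different JVM type (java.lang.Double, Float, Long, and so on). Catalyst's StringConverter pattern-matches on the value, finds no matching case, and the task dies with scala.MatchError. A minimal sketch of how this typically arises and how to fix it, assuming a Spark 1.x-style SQLContext; the object name and the "latitude" column are hypothetical:

	// Sketch only: reproduces the MatchError from a schema/data type mismatch.
	import org.apache.spark.{SparkConf, SparkContext}
	import org.apache.spark.sql.{Row, SQLContext}
	import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}

	object MatchErrorSketch {
	  def main(args: Array[String]): Unit = {
	    val sc = new SparkContext(new SparkConf().setAppName("match-error-sketch").setMaster("local[*]"))
	    val sqlContext = new SQLContext(sc)

	    // The schema declares the column as a String...
	    val schema = StructType(Seq(StructField("latitude", StringType, nullable = true)))
	    // ...but the rows actually carry java.lang.Double values.
	    val rows = sc.parallelize(Seq(Row(-39.099998474121094)))

	    // createDataFrame succeeds because it is lazy; the first action forces the
	    // row-to-Catalyst conversion, where the Double hits StringConverter and fails.
	    val broken = sqlContext.createDataFrame(rows, schema)
	    // broken.show()  // scala.MatchError: -39.099998474121094 (of class java.lang.Double)

	    // Fix 1: declare the type the data actually has.
	    val fixedSchema = StructType(Seq(StructField("latitude", DoubleType, nullable = true)))
	    sqlContext.createDataFrame(rows, fixedSchema).show()

	    // Fix 2: convert the data to the type the schema declares.
	    val asStrings = rows.map(r => Row(r.get(0).toString))
	    sqlContext.createDataFrame(asStrings, schema).show()

	    sc.stop()
	  }
	}

The same MatchError appears for Float, Long, or array values whenever the declared schema type and the actual row value disagree; aligning the schema with the data, or casting the data to match the schema, resolves it.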