org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 229.0 failed 1 times, most recent failure: Lost task 0.0 in stage 229.0 (TID 1625, localhost): scala.MatchError: -39.099998474121094 (of class java.lang.Double)

Stack Overflow | Pari Margu | 7 months ago
  1. scala.MatchError in SparkR (DataFrame with Spark SQL)

     Stack Overflow | 7 months ago | Pari Margu
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 229.0 failed 1 times, most recent failure: Lost task 0.0 in stage 229.0 (TID 1625, localhost): scala.MatchError: -39.099998474121094 (of class java.lang.Double)
  2. SparkSql scala.MatchError when inserting single-value object in nested type

     GitHub | 7 months ago | juyttenh
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 7.0 failed 1 times, most recent failure: Lost task 1.0 in stage 7.0 (TID 17, localhost): scala.MatchError: [joe,15] (of class org.elasticsearch.spark.sql.ScalaEsRow)
  3. Dataframe in apache spark Error java.lang.ArrayIndexOutOfBoundsException

     Stack Overflow | 4 months ago | Mark
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 92.0 failed 4 times, most recent failure: Lost task 0.3 in stage 92.0 (TID 137, ip-10-90-200-51.eu-west-1.compute.internal): java.lang.ArrayIndexOutOfBoundsException: 1
  4. spark sql bug

     GitHub | 1 year ago | zlosim
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, dwh-mapr-dev-01): scala.MatchError: Buffer(_default) (of class scala.collection.convert.Wrappers$JListWrapper)
  5. GitHub comment 644#169340991

     GitHub | 1 year ago | costin
     org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost): scala.MatchError: Buffer() (of class scala.collection.convert.Wrappers$JListWrapper)

Root Cause Analysis

  1. org.apache.spark.SparkException

    Job aborted due to stage failure: Task 0 in stage 229.0 failed 1 times, most recent failure: Lost task 0.0 in stage 229.0 (TID 1625, localhost): scala.MatchError: -39.099998474121094 (of class java.lang.Double)

    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl()
  2. Spark Project Catalyst
    CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply
    1. org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
    2. org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:294)
    3. org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    4. org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
    5. org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
    6. org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    7. org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
    8. org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
    9. org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    10. org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
  3. Spark Project SQL
    RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply
    1. org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:59)
    2. org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:56)
  4. Scala
    AbstractIterator.toArray
    1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    2. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    3. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    4. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    5. scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
    6. scala.collection.Iterator$class.foreach(Iterator.scala:727)
    7. scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    8. scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    9. scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    10. scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    11. scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    12. scala.collection.AbstractIterator.to(Iterator.scala:1157)
    13. scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    14. scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    15. scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    16. scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
  5. Spark Project SQL
    SparkPlan$$anonfun$5.apply
    1. org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    2. org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
  6. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1863)
    2. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1863)
    3. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    4. org.apache.spark.scheduler.Task.run(Task.scala:89)
    5. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
  7. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    3. java.lang.Thread.run(Thread.java:785)
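The frame at the top of the trace (`CatalystTypeConverters$StringConverter$.toCatalystImpl`) pattern-matches on the value it expects for a `StringType` column. Receiving a `java.lang.Double` there usually means the DataFrame's declared schema says a column is a string while the underlying row actually holds a double. The sketch below is a simplified, hypothetical stand-in (not Spark's actual source) showing why an unmatched type raises `scala.MatchError`, and how a defensive case would avoid it:

```scala
// Hedged sketch only: MatchErrorDemo, toCatalystString and
// toCatalystStringSafe are illustrative names, not Spark APIs.
object MatchErrorDemo {
  // Partial match: there is no case for java.lang.Double, so passing a
  // Double falls through every case and raises scala.MatchError at
  // runtime, exactly as in the trace above.
  def toCatalystString(value: Any): String = value match {
    case s: String => s
  }

  // Defensive variant: handle the mismatched type explicitly instead of
  // letting the match fall through.
  def toCatalystStringSafe(value: Any): String = value match {
    case s: String           => s
    case d: java.lang.Double => d.toString // coerce rather than crash
    case other               => other.toString
  }

  def main(args: Array[String]): Unit = {
    println(toCatalystStringSafe(-39.099998474121094)) // same value as the trace
    try toCatalystString(-39.099998474121094)
    catch { case e: MatchError => println(s"caught: $e") }
  }
}
```

In practice the fix is to make the declared schema agree with the data: declare the column as a double, or cast the values to strings before building the DataFrame, so the converter that Catalyst picks matches the runtime type of each cell.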