org.apache.spark.SparkException: Job aborted.

GitHub | manisha803 | 4 months ago
  1. spark 2.0.0 csv write fails for empty input string

    GitHub | 4 months ago | manisha803
    org.apache.spark.SparkException: Task failed while writing rows
  2. SparkContext textFile with local file system throws IllegalArgumentException

    Stack Overflow | 2 years ago | Vijay
    java.lang.IllegalArgumentException: For input string: "0"
  3. Beta of new Notebook Application for Spark & SQL | Hue - Hadoop User Experience - The Apache Hadoop UI

    gethue.com | 2 months ago
    java.lang.IllegalArgumentException: For input string: "$SPARK_HOME/logs"
  4. Reading boolean 0/1 values from ES into Spark does not work (although false/true is ok)

    GitHub | 5 months ago | mathieu-rossignol
    org.elasticsearch.hadoop.rest.EsHadoopParsingException: Cannot parse value [1] for field [boolField]

Root Cause Analysis

  1. java.lang.IllegalArgumentException

    For input string: ""

    at scala.collection.immutable.StringLike$class.parseBoolean()
  2. Scala
    StringOps.toBoolean
    1. scala.collection.immutable.StringLike$class.parseBoolean(StringLike.scala:238)
    2. scala.collection.immutable.StringLike$class.toBoolean(StringLike.scala:226)
    3. scala.collection.immutable.StringOps.toBoolean(StringOps.scala:31)
    3 frames
  3. org.apache.spark
    CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$1.apply
    1. org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$.castTo(CSVInferSchema.scala:272)
    2. org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:115)
    3. org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:84)
    4. org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$1.apply(CSVFileFormat.scala:125)
    5. org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$1.apply(CSVFileFormat.scala:124)
    5 frames
  4. Scala
    Iterator$$anon$11.hasNext
    1. scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    2. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    2 frames
  5. org.apache.spark
    FileScanRDD$$anon$1.hasNext
    1. org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
    1 frame
  6. Scala
    Iterator$$anon$11.hasNext
    1. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    1 frame
  7. org.apache.spark
    DefaultWriterContainer$$anonfun$writeRows$1.apply
    1. org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
    2. org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
    3. org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
    3 frames
  8. Spark
    Utils$.tryWithSafeFinallyAndFailureCallbacks
    1. org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1325)
    1 frame
  9. org.apache.spark
    InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply
    1. org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
    2. org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
    3. org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
    3 frames
  10. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    2. org.apache.spark.scheduler.Task.run(Task.scala:85)
    3. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    3 frames
  11. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
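The frames above trace the failure from `CSVTypeCast$.castTo` down into Scala's `StringOps.toBoolean`: an empty CSV field reaches a Boolean-typed column, and `toBoolean` accepts only `"true"` or `"false"`, so `""` raises `IllegalArgumentException: For input string: ""`, which surfaces as the task failure. A minimal Java sketch of that behavior (the helper names `toBoolean`/`toBooleanOrNull` are illustrative, not Spark's or Scala's actual API) shows both the failure mode and a defensive variant that maps an empty field to null instead:

```java
public class ParseBooleanDemo {
    // Illustrative re-implementation of the strict parse semantics behind
    // Scala's StringLike.parseBoolean (assumption: only "true"/"false",
    // case-insensitive, are accepted; anything else throws).
    static boolean toBoolean(String s) {
        if ("true".equalsIgnoreCase(s)) return true;
        if ("false".equalsIgnoreCase(s)) return false;
        throw new IllegalArgumentException("For input string: \"" + s + "\"");
    }

    // Defensive variant: treat an empty CSV field as null instead of failing,
    // mirroring what a null-tolerant cast would do for an empty input string.
    static Boolean toBooleanOrNull(String s) {
        if (s == null || s.isEmpty()) return null;
        return toBoolean(s);
    }

    public static void main(String[] args) {
        System.out.println(toBooleanOrNull("true")); // true
        System.out.println(toBooleanOrNull(""));     // null
        try {
            toBoolean(""); // reproduces the exception from the trace
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());      // For input string: ""
        }
    }
}
```

As a workaround along the same lines, reading the offending column as a string type and converting it explicitly (with empties mapped to null) avoids sending `""` through the strict Boolean cast.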