java.lang.NullPointerException

Stack Overflow | user6638138 | 5 months ago
Related reports:

  1. How to debug IOException?

     GitHub | 4 years ago | danmelnick
     java.io.IOException: java.lang.NullPointerException

  2. type "int" in propertyfile task throws NullPointerException

     Apache Bugzilla | 1 decade ago | nolan.ring
     java.lang.NullPointerException

Root Cause Analysis

  1. java.lang.NullPointerException

    No message provided

    at java.text.DecimalFormat.parse()
  2. Java RT
    NumberFormat.parse
    1. java.text.DecimalFormat.parse(DecimalFormat.java:1997)
    2. java.text.NumberFormat.parse(NumberFormat.java:383)
    2 frames
  3. org.apache.spark
    CSVTypeCast$$anonfun$castTo$4.apply
    1. org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$$anonfun$castTo$4.apply$mcD$sp(CSVInferSchema.scala:270)
    2. org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$$anonfun$castTo$4.apply(CSVInferSchema.scala:270)
    3. org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$$anonfun$castTo$4.apply(CSVInferSchema.scala:270)
    3 frames
  4. Scala
    Try.getOrElse
    1. scala.util.Try.getOrElse(Try.scala:79)
    1 frame
  5. org.apache.spark
    CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$1.apply
    1. org.apache.spark.sql.execution.datasources.csv.CSVTypeCast$.castTo(CSVInferSchema.scala:270)
    2. org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:115)
    3. org.apache.spark.sql.execution.datasources.csv.CSVRelation$$anonfun$csvParser$3.apply(CSVRelation.scala:84)
    4. org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$1.apply(CSVFileFormat.scala:125)
    5. org.apache.spark.sql.execution.datasources.csv.CSVFileFormat$$anonfun$buildReader$1$$anonfun$apply$1.apply(CSVFileFormat.scala:124)
    5 frames
  6. Scala
    Iterator$$anon$11.hasNext
    1. scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
    2. scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    3. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    3 frames
  7. org.apache.spark
    FileScanRDD$$anon$1.hasNext
    1. org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
    1 frame
  8. Spark Project Catalyst
    GeneratedClass$GeneratedIterator.processNext
    1. org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
    2. org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    2 frames
  9. Spark Project SQL
    WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext
    1. org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    2. org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    2 frames
  10. Scala
    Iterator$$anon$11.hasNext
    1. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    1 frame
  11. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    2. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    3. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    4. org.apache.spark.scheduler.Task.run(Task.scala:85)
    5. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    5 frames
  12. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
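
The root frames above point at java.text.DecimalFormat.parse being handed a null string: Spark's CSVTypeCast.castTo (CSVInferSchema.scala:270) calls NumberFormat.parse on a CSV field while casting it to a double, and a missing field surfaces as the NullPointerException with no message. A minimal sketch reproducing that NPE outside Spark (the class name NpeRepro is ours, not from the trace):

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class NpeRepro {
    public static void main(String[] args) throws ParseException {
        NumberFormat nf = NumberFormat.getInstance(Locale.US);

        // A well-formed numeric field parses fine.
        System.out.println(nf.parse("3.14"));

        // A null field (e.g. a missing CSV value) throws NullPointerException
        // from inside DecimalFormat.parse, matching the top of the trace above.
        try {
            nf.parse((String) null);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException from NumberFormat.parse");
        }
    }
}
```

This suggests checking the CSV for empty or missing values in columns inferred as numeric, or supplying an explicit schema with nullable fields instead of relying on type inference.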