Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by ShenbagaKumar, 1 year ago
Job aborted due to stage failure: Task 0 in stage 13.0 failed 1 times, most recent failure: Lost task 0.0 in stage 13.0 (TID 16, localhost): java.lang.NullPointerException: Value at index 1 in null
via Stack Overflow by Baktaawar, 1 year ago
Job aborted due to stage failure: Task 2 in stage 1505.0 failed 1 times, most recent failure: Lost task 2.0 in stage 1505.0 (TID 9224, localhost): java.lang.NullPointerException: Value at index 1 in null
via Stack Overflow by John Chrysostom, 2 years ago
Job aborted due to stage failure: Task 5 in stage 135.0 failed 1 times, most recent failure: Lost task 5.0 in stage 135.0 (TID 7806, localhost): java.lang.NullPointerException: Value at index 1 in null
via GitHub by cyadusha, 6 months ago
Job aborted due to stage failure: Task 1 in stage 24.0 failed 1 times, most recent failure: Lost task 1.0 in stage 24.0 (TID 33, localhost): java.lang.NullPointerException: Value at index 1 in null
via Stack Overflow by Avinash Vootkuri, 1 year ago
Job aborted due to stage failure: Task 2 in stage 18.0 failed 1 times, most recent failure: Lost task 2.0 in stage 18.0 (TID 73, localhost): java.lang.NullPointerException: Value at index 1 is null
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 13.0 failed 1 times, most recent failure: Lost task 0.0 in stage 13.0 (TID 16, localhost): java.lang.NullPointerException: Value at index 1 in null
	at org.apache.spark.sql.Row$class.getAnyValAs(Row.scala:475)
	at org.apache.spark.sql.Row$class.getDouble(Row.scala:243)
	at org.apache.spark.sql.catalyst.expressions.GenericRow.getDouble(rows.scala:192)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics$$anonfun$$init$$1.apply(BinaryClassificationMetrics.scala:61)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics$$anonfun$$init$$1.apply(BinaryClassificationMetrics.scala:61)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
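The frames point at `Row.getDouble` being called inside the `BinaryClassificationMetrics` constructor on a row whose value is null. A common cause is feeding the metrics a DataFrame whose score or label column contains nulls. A minimal sketch of a defensive fix, assuming a Spark environment with a DataFrame `predictions` holding Double columns named `score` and `label` (all hypothetical names, not taken from the trace):

```scala
// Sketch only: assumes a running SparkSession and a DataFrame `predictions`
// with Double columns "score" and "label" (hypothetical names).
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

val scoreAndLabels = predictions
  .select("score", "label")          // fix the column order before positional access
  .na.drop(Seq("score", "label"))    // drop rows with null score/label,
                                     // since Row.getDouble throws NPE on nulls
  .rdd
  .map(row => (row.getDouble(0), row.getDouble(1)))

val metrics = new BinaryClassificationMetrics(scoreAndLabels)
println(s"Area under ROC = ${metrics.areaUnderROC()}")
```

Dropping (or imputing) the null rows before the `map` means `getDouble` is never invoked on a null cell, which is exactly the call that raises the `NullPointerException` in the trace above.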