java.lang.NullPointerException

DataStax JIRA | Michael Siler | 1 year ago
  1.

    I had a bug in my code where I was calling javaFunctions(rdd).writerBuilder(...).saveToCassandra(), but I had left the partition key null on some of the pojos in the RDD I was saving. That gave an NPE with the following stack trace:
    {noformat}
    java.lang.NullPointerException
        at com.datastax.spark.connector.writer.TableWriter.batchRoutingKey(TableWriter.scala:113)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1$$anonfun$13.apply(TableWriter.scala:129)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1$$anonfun$13.apply(TableWriter.scala:129)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:107)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:136)
        at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:120)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:100)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:99)
        at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:151)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:99)
        at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:120)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
        at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    {noformat}
    Having a null partition key seems like a fairly simple error for the connector to catch, but the exception and stack trace give no immediate indication of what the problem was, making it harder to diagnose on my end. It would be great to have a more informative exception. A defensive workaround is sketched after this list.

    DataStax JIRA | 1 year ago | Michael Siler
    java.lang.NullPointerException
  2.

    Android: Saving Map State in Google map

    Stack Overflow | 11 months ago | Junie Negentien
    java.lang.RuntimeException: Unable to resume activity {com.ourThesis.junieNegentien2015/com.ourThesis.junieNegentien2015.MainActivity}: java.lang.NullPointerException
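
    For anyone hitting this before the connector gains a clearer error, one defensive option is to validate the partition key on the RDD before handing it to saveToCassandra(), so a null key fails with a descriptive exception instead of an NPE deep inside TableWriter.batchRoutingKey. The sketch below is illustrative only: the Sensor pojo, its id partition-key field, and the "my_keyspace"/"sensors" names are assumptions, not part of the original report; adapt them to the real table mapping.
    {code:java}
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

    import java.io.Serializable;
    import org.apache.spark.api.java.JavaRDD;

    public class SafeSave {

        // Hypothetical pojo; "id" stands in for the table's partition key column.
        public static class Sensor implements Serializable {
            private String id;
            private double value;
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }
            public double getValue() { return value; }
            public void setValue(double value) { this.value = value; }
        }

        // Fail fast on a null partition key before the connector builds routing keys,
        // so the error points at the offending row rather than surfacing as a bare
        // NullPointerException in TableWriter.batchRoutingKey.
        public static void save(JavaRDD<Sensor> rdd) {
            JavaRDD<Sensor> checked = rdd.map(s -> {
                if (s.getId() == null) {
                    throw new IllegalArgumentException(
                            "Partition key 'id' is null for row with value=" + s.getValue());
                }
                return s;
            });
            javaFunctions(checked)
                    .writerBuilder("my_keyspace", "sensors", mapToRow(Sensor.class))
                    .saveToCassandra();
        }
    }
    {code}
    Filtering with rdd.filter(s -> s.getId() != null) also works if silently dropping such rows is acceptable, but failing fast usually makes the underlying bug easier to find.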
    Root Cause Analysis

    1. java.lang.NullPointerException

      No message provided

      at com.datastax.spark.connector.writer.TableWriter.batchRoutingKey()
    2. spark-cassandra-connector
      GroupingBatchBuilder.next
      1. com.datastax.spark.connector.writer.TableWriter.batchRoutingKey(TableWriter.scala:113)
      2. com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1$$anonfun$13.apply(TableWriter.scala:129)
      3. com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1$$anonfun$13.apply(TableWriter.scala:129)
      4. com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:107)
      5. com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
      5 frames
    3. Scala
      Iterator$class.foreach
      1. scala.collection.Iterator$class.foreach(Iterator.scala:727)
      1 frame
    4. spark-cassandra-connector
      RDDFunctions$$anonfun$saveToCassandra$1.apply
      1. com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
      2. com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:136)
      3. com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:120)
      4. com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:100)
      5. com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:99)
      6. com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:151)
      7. com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:99)
      8. com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:120)
      9. com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
      10. com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
      10 frames
    5. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
      2. org.apache.spark.scheduler.Task.run(Task.scala:56)
      3. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
      3 frames
    6. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames