Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace, including the exception message.

Recommended solutions based on your search

Solutions on the web

via Apache's JIRA Issue Tracker by yangping wu, 1 year ago
via Stack Overflow by Naresh, 1 year ago
This exception has no message.
via Stack Overflow by Shay, 1 year ago
This exception has no message.
via Stack Overflow by Marco Catalano, 2 months ago
java.lang.NullPointerException
	at org.apache.spark.sql.SQLConf.getConf(SQLConf.scala:217)
	at org.apache.spark.sql.SQLConf.dataFrameEagerAnalysis(SQLConf.scala:191)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:132)
	at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
	at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:381)
	at logstatstreaming.FlightSearchTodb$$anonfun$logstatstreaming$FlightSearchTodb$$functionToCreateContext$1$1.apply(FlightSearchTodb.scala:57)
	at logstatstreaming.FlightSearchTodb$$anonfun$logstatstreaming$FlightSearchTodb$$functionToCreateContext$1$1.apply(FlightSearchTodb.scala:40)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1.apply(DStream.scala:534)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1.apply(DStream.scala:534)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:42)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:32)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:176)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:176)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:176)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:175)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
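The trace suggests a common Spark Streaming pitfall: `createDataFrame` is called inside `foreachRDD` within a context-creating function (the `functionToCreateContext` frames point to the checkpoint-recovery pattern from `StreamingContext.getOrCreate`). When the driver restarts from a checkpoint, a `SQLContext` captured in the restored closure is not reinitialized and can be null, so `SQLConf.getConf` throws a `NullPointerException`. A minimal sketch of the usual workaround, assuming a Spark 1.x-era API: fetch the singleton `SQLContext` from the RDD's own `SparkContext` inside `foreachRDD` rather than capturing it in the closure. The record type, stream, and output table here are hypothetical, not taken from the original code.

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.dstream.DStream

// Hypothetical record type standing in for whatever the stream carries.
case class FlightSearch(query: String, ts: Long)

def saveSearches(stream: DStream[FlightSearch]): Unit = {
  stream.foreachRDD { rdd =>
    // Re-obtain the SQLContext from the RDD's SparkContext on every batch.
    // SQLContext.getOrCreate returns the singleton instance, recreating it
    // after checkpoint recovery instead of using a (possibly null) captured one.
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    val df = sqlContext.createDataFrame(rdd)
    df.write.mode("append").saveAsTable("flight_searches") // hypothetical sink
  }
}
```

This mirrors the pattern recommended in the Spark Streaming programming guide's "DataFrame and SQL Operations" section; the key point is only that the `SQLContext` is looked up lazily per batch, not serialized into the checkpointed DStream closure.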