Searched Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Samebug tips

  1. Expert tip

    This might be an issue with the file location in the spark-submit command. Try the following, replacing {file location} with the path to your script (an expanded example follows this list):

    spark-submit --master spark://master:7077 \
         hello_world_from_pyspark.py {file location}
    
    
  2. Expert tip

    Check whether you've set a name in Application -> Run. If you didn't, the generated XML will be missing information and this exception will be thrown.
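The first tip's command, expanded with a purely hypothetical absolute path (substitute wherever hello_world_from_pyspark.py actually lives on the machine you submit from):

    spark-submit --master spark://master:7077 \
         /home/user/jobs/hello_world_from_pyspark.py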

Solutions on the web

via zeppelin-users by COUERON Damien (i-BP - MICROPOLE), 1 year ago
via incubator-zeppelin-users by COUERON Damien (i-BP - MICROPOLE), 10 months ago
via incubator-zeppelin-users by Mina Lee, 10 months ago
via zeppelin-users by Mina Lee, 9 months ago
java.lang.ClassNotFoundException: ibp.big.hive.serde.CSVSerde
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.sql.hive.MetastoreRelation.<init>(HiveMetastoreCatalog.scala:701)
    at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:248)
    at org.apache.spark.sql.hive.HiveContext$$anon$2.org$apache$spark$sql$catalyst$analysis$OverrideCatalog$$super$lookupRelation(HiveContext.scala:373)
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:165)
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$$anonfun$lookupRelation$3.apply(Catalog.scala:165)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.catalyst.analysis.OverrideCatalog$class.lookupRelation(Catalog.scala:165)
    at org.apache.spark.sql.hive.HiveContext$$anon$2.lookupRelation(HiveContext.scala:373)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:222)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$7.applyOrElse(Analyzer.scala:233)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$7.applyOrElse(Analyzer.scala:229)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:221)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:212)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:229)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:219)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:61)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:59)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:59)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:51)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:51)
    at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:933)
    at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:933)
    at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:931)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:136)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
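The trace shows the Spark SQL analyzer failing to load the custom Hive SerDe class ibp.big.hive.serde.CSVSerde while resolving a metastore table inside Zeppelin's SparkSqlInterpreter, which usually means the jar containing that SerDe is not on the Spark/Zeppelin classpath. A minimal sketch of one common fix, assuming the class is packaged in a hypothetical /path/to/csv-serde.jar, is to ship the jar explicitly with the job:

    # Hypothetical jar path and script name; use the jar that actually
    # contains ibp.big.hive.serde.CSVSerde in your environment.
    spark-submit --master spark://master:7077 \
         --jars /path/to/csv-serde.jar \
         your_job.py

When running through Zeppelin rather than spark-submit, the equivalent is to add the same jar as a dependency of the Spark interpreter (or load it in a %dep paragraph on older Zeppelin releases) and restart the interpreter so the metastore relation can be resolved.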