java.lang.ClassNotFoundException: org.apache.hadoop.mapred.InputSplitWithLocationInfo


Recommended solutions based on your search

Samebug tips


You are not deploying the Oracle driver with the application. Place the driver jars in a shared or library-extension folder of your application server. (You should go with option one or two, though.)
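For a server such as Tomcat, the "shared or library extension folder" above is typically the server-wide `lib` directory. A minimal sketch, assuming Tomcat lives under `/opt/tomcat` and the driver jar is named `ojdbc8.jar` (both the path and the jar name are assumptions, adjust to your install):

```shell
# Copy the JDBC driver into the server-wide library directory so every
# deployed application can load it, then restart the server.
# /opt/tomcat and ojdbc8.jar are assumed names -- use your own paths.
cp ojdbc8.jar /opt/tomcat/lib/
```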

Expert tip

This might be an issue with the file location in the spark-submit command. Try it with:

spark-submit --master spark://master:7077 \
     hello_world_from_pyspark.py {file location}
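If the script path itself is correct, the ClassNotFoundException for InputSplitWithLocationInfo usually means the Hadoop MapReduce client classes are missing from the driver/executor classpath. A hedged sketch of two common workarounds; the jar path and the 2.6.0 version are assumptions and must match the Hadoop version your Spark build expects:

```shell
# Option 1: ship the missing Hadoop jar explicitly with the job.
# The jar path is an assumption -- point it at your local copy.
spark-submit --master spark://master:7077 \
    --jars /path/to/hadoop-mapreduce-client-core-2.6.0.jar \
    hello_world_from_pyspark.py

# Option 2: let Spark resolve the artifact from Maven Central instead.
spark-submit --master spark://master:7077 \
    --packages org.apache.hadoop:hadoop-mapreduce-client-core:2.6.0 \
    hello_world_from_pyspark.py
```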

Solutions on the web

via nabble.com by Unknown author, 2 years ago
via Apache's JIRA Issue Tracker by Staffan Arvidsson, 1 year ago
org.apache.hadoop.mapred.InputSplitWithLocationInfo
via GitHub by ljzzju, 2 years ago
org.apache.hadoop.mapred.InputSplitWithLocationInfo
java.lang.ClassNotFoundException: org.apache.hadoop.mapred.InputSplitWithLocationInfo
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:191)
at org.apache.spark.rdd.HadoopRDD$SplitInfoReflections.<init>(HadoopRDD.scala:381)
at org.apache.spark.rdd.HadoopRDD$.liftedTree1$1(HadoopRDD.scala:391)
at org.apache.spark.rdd.HadoopRDD$.<init>(HadoopRDD.scala:390)
at org.apache.spark.rdd.HadoopRDD$.<init>(HadoopRDD.scala)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:159)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
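The trace shows HadoopRDD's SplitInfoReflections helper calling Class.forName on org.apache.hadoop.mapred.InputSplitWithLocationInfo, a class shipped in the hadoop-mapreduce-client-core artifact of Hadoop 2.x. The lookup fails when that jar is absent from the classpath (for example, a Spark build matched with an older or incomplete Hadoop install). A quick diagnostic sketch to see which, if any, jar in a directory contains the class; the directory is an assumption, point it at your Spark or Hadoop jar folder:

```shell
# Scan a directory of jars for the missing class file.
# DIR is an assumption -- e.g. "$SPARK_HOME"/jars or Hadoop's
# share/hadoop/mapreduce directory; defaults to the current directory.
DIR=${1:-.}
found=0
for j in "$DIR"/*.jar; do
  # Skip the unexpanded glob when the directory holds no jars.
  [ -e "$j" ] || continue
  if unzip -l "$j" 2>/dev/null \
      | grep -q 'org/apache/hadoop/mapred/InputSplitWithLocationInfo.class'; then
    echo "found in $j"
    found=1
  fi
done
[ "$found" -eq 1 ] || \
  echo "class not found under $DIR -- add hadoop-mapreduce-client-core to the classpath"
```

If the scan finds nothing, adding the hadoop-mapreduce-client-core jar (matching your Spark build's Hadoop version) via --jars or --packages is the usual fix.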

