Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by ra1376, 11 months ago: Input path does not exist: file:/user/rahul/baby_names.csv
via GitHub by Sun-shan, 5 months ago: Input path does not exist: file:/hail/test/BRCA1.raw_indel.vcf
via Stack Overflow by nile, 1 year ago: Input path does not exist: file:/home/hp/Downloads/spark-2.0.0-bin-hadoop2.7/auto-save.csv
via Stack Overflow by Wanderer, 2 years ago: Input path does not exist: hdfs://localhost:9000/home/hduser2/spark-1.4.1-bin-hadoop2.6/README.md
via Stack Overflow by user110235, 3 weeks ago: Input Pattern file:/home/big/wordcount/*.txt matches 0 files
via Stack Overflow by Ojas Kale, 1 year ago: Input path does not exist: file:/home/vagrant/data/data/cs100/lab2/apache.access.log.PROJECT
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/user/rahul/baby_names.csv
	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
	at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:53)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
	at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
	at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:745)
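
Every variant above has the same root cause: Spark resolves the input path against its default filesystem (the local file:/ filesystem when running standalone, hdfs:// on a cluster), and FileInputFormat.getSplits only throws lazily, at the first action. In this trace the action is the collect() behind PythonRDD.collectAndServe, so the failure appears far from the textFile call that caused it. Below is a minimal PySpark sketch of the usual check and fix; the path is taken from the trace above, and the hdfs:// URI in the final comment is illustrative, not a confirmed setup.

	import os
	from pyspark.sql import SparkSession

	spark = SparkSession.builder.appName("input-path-check").getOrCreate()
	sc = spark.sparkContext

	path = "/user/rahul/baby_names.csv"  # path from the trace; adjust to a file that exists

	# In local mode the driver's filesystem is what Spark reads from,
	# so a plain os.path check catches the problem before any Spark job runs.
	if os.path.exists(path):
	    # Explicit file:// scheme: read from the local filesystem regardless of defaults.
	    rdd = sc.textFile("file://" + path)
	    print(rdd.take(1))  # first action; this is where getSplits would have thrown
	else:
	    print("Input path does not exist locally: " + path)

	# On a cluster, copy the file into HDFS first and address it with an hdfs:// URI, e.g.:
	#   sc.textFile("hdfs://localhost:9000/user/rahul/baby_names.csv")

	spark.stop()

Passing the scheme explicitly (file:// or hdfs://) removes the ambiguity; that distinction is exactly the difference between the hdfs://localhost:9000 result above and the file:/ ones.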