
Recommended solutions based on your search

Solutions on the web

via apache.org by Unknown author, 2 years ago
Input path does not exist: hdfs://ec2-54-234-136-50.compute-1.amazonaws.com:9000/user/root/README.md at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.j
via spark-user by Mozumder, Monir, 1 year ago
Input path does not exist: hdfs://ec2-54-234-136-50.compute-1.amazonaws.com:9000/user/root/README.md at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.j
via nabble.com by Unknown author, 2 years ago
Input path does not exist: hdfs://ec2-54-234-136-50.compute-1.amazonaws.com:9000/user/root/README.md at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
via nabble.com by Unknown author, 2 years ago
Input path does not exist: hdfs:// at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://ec2-54-234-136-50.compute-1.amazonaws.com:9000/user/root/README.md
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:141)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:199)
    at scala.Option.getOrElse(Option.scala:108)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:199)
    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:26)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:199)
    at scala.Option.getOrElse(Option.scala:108)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:199)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:886)
    at org.apache.spark.rdd.RDD.count(RDD.scala:698)
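The exception above is thrown when the job's input path is resolved against HDFS and no file is found there, typically because the file was never uploaded to HDFS or the path/namenode URI is wrong. A minimal Java sketch of the diagnostic idea follows: extract the offending path from the exception message and verify the file exists before submitting the job. The message parsing mirrors the `Input path does not exist:` prefix seen in the trace; the local `Files.exists` check is a stand-in for the HDFS-side check (on a real cluster one would call Hadoop's `FileSystem.exists`, which this self-contained snippet deliberately avoids), and `README.md` is just an illustrative file name.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class InputPathCheck {
    // Pull the offending path out of an InvalidInputException-style message.
    // Returns null when the message does not contain the marker.
    static String missingPath(String message) {
        String marker = "Input path does not exist: ";
        int i = message.indexOf(marker);
        return i >= 0 ? message.substring(i + marker.length()).trim() : null;
    }

    public static void main(String[] args) {
        String msg = "org.apache.hadoop.mapred.InvalidInputException: "
                   + "Input path does not exist: hdfs://example:9000/user/root/README.md";
        System.out.println(missingPath(msg));

        // For a local file, fail fast before handing the path to a job.
        // On HDFS the analogous call is FileSystem.get(conf).exists(new Path(p)).
        Path local = Path.of("README.md");
        if (!Files.exists(local)) {
            System.err.println("Input path does not exist locally: " + local);
        }
    }
}
```

A common fix matching this trace is simply uploading the file first (`hadoop fs -put README.md /user/root/`) or passing a `file://` URI when the file is only on the local disk of the driver.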