org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/user/rahul/baby_names.csv

Stack Overflow | ra1376 | 1 month ago
  1. Unable to run query on table created with Spark using registerTempTable

     Stack Overflow | 1 year ago | PRP
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://xxxxxxxxxx.xxxxxx.com:8020/home/zeppelin/data/bank-full.csv
  2. [ZEPPELIN-7] Support yarn without SPARK_YARN_JAR · apache/incubator-zeppelin@91066c4 · GitHub

     github.com | 1 year ago
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data/pickat/tsv/app/2015/03/03
  3. Apache Spark User List - Quick start example (README.md count) doesn't work

     nabble.com | 1 year ago
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://
       at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
       at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
  4. Spark: Reading S3 file exception with Spark 1.5.2 prebuilt with hadoop-2.6

     Stack Overflow | 1 year ago | Mohitt
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: s3://snapdeal-personalization-dev-us-west-2/TNRealtime/output/2016/01/27/22/45/00/a.txt
  5. Reading a local Windows file in apache Spark

     Stack Overflow | 2 years ago | Satya
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/Downloads/error.txt
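The occurrences above all report the same exception against three different filesystems: `file:/` (the machine's local disk), `hdfs://` (the cluster's namenode), and `s3://` (an object store). Which filesystem Spark consults is decided by the path's URI scheme; a path with no scheme falls back to the cluster's `fs.defaultFS` setting. A minimal sketch of that resolution rule, using only the Python standard library (the example paths are taken from the reports above):

```python
from urllib.parse import urlparse

def path_scheme(path: str) -> str:
    """Scheme Spark resolves the path against; '' means it falls back to fs.defaultFS."""
    return urlparse(path).scheme

print(path_scheme("file:/user/rahul/baby_names.csv"))  # file -> local disk of driver/executors
print(path_scheme("hdfs://host:8020/data/bank.csv"))   # hdfs -> the cluster's namenode
print(path_scheme("s3://bucket/prefix/a.txt"))         # s3   -> object store
print(path_scheme("/user/rahul/baby_names.csv"))       # ''   -> whatever fs.defaultFS says
```

This is why a `file:/...` path in the error often signals a misconfiguration: the caller expected HDFS, but Spark resolved the path against local disk.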


Root Cause Analysis

  1. org.apache.hadoop.mapred.InvalidInputException

    Input path does not exist: file:/user/rahul/baby_names.csv

    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus()
  2. Hadoop
    FileInputFormat.getSplits
    1. org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
    2. org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
    3. org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    3 frames
  3. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
    2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    3 frames
  4. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  5. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    4 frames
  6. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  7. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    2. org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:53)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    4 frames
  8. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  9. Spark
    PythonRDD.collectAndServe
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    2. org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
    3. org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
    4. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    5. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    6. org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    7. org.apache.spark.rdd.RDD.collect(RDD.scala:934)
    8. org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
    9. org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    9 frames
  10. Java RT
    Method.invoke
    1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    4. java.lang.reflect.Method.invoke(Method.java:498)
    4 frames
  11. Py4J
    GatewayConnection.run
    1. py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    2. py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    3. py4j.Gateway.invoke(Gateway.java:280)
    4. py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    5. py4j.commands.CallCommand.execute(CallCommand.java:79)
    6. py4j.GatewayConnection.run(GatewayConnection.java:214)
    6 frames
  12. Java RT
    Thread.run
    1. java.lang.Thread.run(Thread.java:745)
    1 frame
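The frame groups above show the exception surfacing only when the Python driver triggers an action (`collectAndServe`, i.e. `collect()`): `textFile()` is lazy, so a bad path is not detected until partitions are computed. One way to fail faster is to validate local paths before handing them to Spark. A minimal pre-flight sketch; `checked_local_path` is a hypothetical helper, not part of the Spark API, and it only covers local `file://` paths (`hdfs://` and `s3://` must be checked against the remote filesystem instead):

```python
import os
from urllib.parse import urlparse

def checked_local_path(path: str) -> str:
    """Raise immediately if a local input path is missing, instead of
    waiting for Spark's InvalidInputException at action time."""
    parsed = urlparse(path)
    if parsed.scheme not in ("", "file"):
        # Remote filesystem: cannot be verified from the driver's local disk.
        return path
    local = parsed.path if parsed.scheme == "file" else path
    if not os.path.exists(local):
        raise FileNotFoundError("Input path does not exist: " + local)
    # Return an unambiguous file:// URI so Spark does not fall back to fs.defaultFS.
    return "file://" + os.path.abspath(local)

# e.g. rdd = sc.textFile(checked_local_path("/user/rahul/baby_names.csv"))
```

The explicit `file://` prefix in the return value also sidesteps the common confusion behind these reports: on a cluster whose `fs.defaultFS` is HDFS, a bare path is looked up on HDFS, not on the submitting machine's disk.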