org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/monir/.bashrc

spark-user | Mozumder, Monir | 2 years ago
  1. RE: cannot read file from a local path

     spark-user | 2 years ago | Mozumder, Monir
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/monir/.bashrc
  2. Scala code pattern for loading RDD or catching error and creating the RDD?

     Stack Overflow | 2 years ago | Ziggy Eunicien
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost/Users/data/hdfs/namenode/myRDD.txt
  3. Reading a local Windows file in apache Spark

     Stack Overflow | 1 year ago | Satya
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/Downloads/error.txt
  4. How to include file in production mode for Play framework

     Stack Overflow | 2 years ago | user3684014
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/Path/to/my/project/target/universal/stage/public/data/array.txt

Root Cause Analysis

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/monir/.bashrc
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:175)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
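The trace shows Hadoop's FileInputFormat.listStatus failing to resolve the input path while Spark is computing the RDD's partitions: with a file: URL, the file must exist on the machine running the job (and, on a cluster, on every worker node). A minimal sketch, with a hypothetical helper name that is not part of Hadoop or Spark, of validating such a path up front instead of letting getSplits throw:

```java
import java.io.File;

public class InputPathCheck {

    // Hypothetical helper: strip an optional "file:" scheme and test whether
    // the path exists on the local filesystem, mirroring the check that
    // FileInputFormat.listStatus performs before throwing
    // InvalidInputException ("Input path does not exist").
    static boolean inputPathExists(String path) {
        String local = path.startsWith("file:") ? path.substring("file:".length()) : path;
        return new File(local).exists();
    }

    public static void main(String[] args) {
        // The filesystem root should exist; the made-up path should not.
        System.out.println(inputPathExists("file:/"));
        System.out.println(inputPathExists("file:/no/such/input/path.txt"));
    }
}
```

Note that this only checks the driver's local filesystem; on a distributed cluster the same path must also be readable by the executors, which is why a shared filesystem path (hdfs://, s3a://, NFS) is usually the real fix.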