org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text

Stack Overflow | subho | 2 months ago
  1. sample spark CSV and JSON program not running in windows
     Stack Overflow | 2 months ago | subho
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text
  2. [ZEPPELIN-7] Support yarn without SPARK_YARN_JAR · apache/incubator-zeppelin@91066c4 · GitHub
     github.com | 1 year ago
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data/pickat/tsv/app/2015/03/03
  3. Scala code pattern for loading RDD or catching error and creating the RDD?
     Stack Overflow | 2 years ago | Ziggy Eunicien
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost/Users/data/hdfs/namenode/myRDD.txt
  4. Reading a local Windows file in apache Spark
     Stack Overflow | 1 year ago | Satya
     org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/Downloads/error.txt
  5. Bypassing org.apache.hadoop.mapred.InvalidInputException: Input Pattern s3n://[...] matches 0 files
     Stack Overflow | 3 years ago | Crystark
     org.apache.hadoop.mapred.InvalidInputException: Input Pattern s3n://bucket/mypattern matches 0 files

Root Cause Analysis

  1. org.apache.hadoop.mapred.InvalidInputException

    Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text

    at org.apache.hadoop.mapred.FileInputFormat.listStatus()
  2. Hadoop
    FileInputFormat.getSplits
    1. org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251)
    2. org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    2 frames
  3. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
    2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    3 frames
  4. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  5. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    2. org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    3. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    4. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    4 frames
  6. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:121)
    1 frame
  7. Spark
    RDD.take
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    2. org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
    3. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    4. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    5. org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    6. org.apache.spark.rdd.RDD.take(RDD.scala:1288)
    6 frames
  8. com.databricks.spark
    DefaultSource.createRelation
    1. com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
    2. com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
    3. com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
    4. com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
    5. com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
    6. com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
    7. com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
    7 frames
  9. org.apache.spark
    ResolvedDataSource$.apply
    1. org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    1 frame
  10. Spark Project SQL
    SQLContext.load
    1. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    2. org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
    2 frames
  11. Unknown
    json1.main
    1. json1$.main(json1.scala:22)
    2. json1.main(json1.scala)
    2 frames