org.apache.hadoop.mapred.InvalidInputException

Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text
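This exception comes from Hadoop's FileInputFormat at job-planning time, before any data is read: the URI above does not resolve to an existing file. Common causes are a typo in the file name (demo.txt vs. demo.text) or a relative path resolved against an unexpected working directory. Below is a minimal sketch of the failure mode with an early, clearer check, assuming a local Spark 1.x setup (which the stack trace further down indicates); the object and path names are illustrative, not taken from the failing project.

    // Minimal sketch: reproduce the failure mode and fail fast with a
    // clearer message. Object name and path are illustrative assumptions.
    import java.io.File

    import org.apache.spark.{SparkConf, SparkContext}

    object DemoRead {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("demo").setMaster("local[*]"))

        // A relative path is resolved against the JVM's working directory,
        // which under an IDE or test runner is often not the project root.
        val path = "src/test/resources/demo.text"

        // Check before handing the path to Spark, so the error points at
        // the real problem instead of surfacing inside FileInputFormat.
        require(new File(path).exists(), s"Input path does not exist: $path")

        sc.textFile(path).take(10).foreach(println)
        sc.stop()
      }
    }

A java.io.File check only covers local file: paths; for HDFS or S3 URIs, resolve through Hadoop's FileSystem instead, as in the sketch after the stack trace below.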


Solutions on the web (175)

  • Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text
  • via Stack Overflow by Techno04335, 6 months ago:
    Input path does not exist: file:/home/test/desktop/CHANGES.txt
  • via Stack Overflow by Satya, 1 year ago:
    Input path does not exist: file:/C:/Users/Downloads/error.txt
  • Stack trace

    org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/subho/Desktop/code-master/simple-spark-project/src/test/resources/demo.text
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
        at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:174)
        at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:169)
        at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:147)
        at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:70)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:138)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
        at json1$.main(json1.scala:22)
        at json1.main(json1.scala)
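The bottom frames show where the job starts: json1.scala loads the file through the com.databricks.spark.csv data source on Spark 1.x, and the schema inference (CsvRelation.inferSchema calling RDD.take) is the first action to touch the missing path. Below is a hedged sketch of that call path with an existence check that goes through the same Hadoop FileSystem Spark itself will use; the object name and CSV options are assumptions, not the original json1.scala.

    // Sketch of the load the trace runs through, guarded by an existence
    // check. Object name and CSV options are assumptions.
    import org.apache.hadoop.fs.Path
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object Json1Check {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("json1").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)

        val input = "file:///C:/Users/subho/Desktop/code-master/" +
          "simple-spark-project/src/test/resources/demo.text"

        // Resolve through the Hadoop FileSystem Spark will use, so the
        // check agrees with what FileInputFormat.listStatus would see;
        // unlike java.io.File this also covers hdfs:// or s3 URIs.
        val p = new Path(input)
        val fs = p.getFileSystem(sc.hadoopConfiguration)
        require(fs.exists(p), s"Input path does not exist: $input")

        // Same call path as the trace: DataFrameReader.load ->
        // com.databricks.spark.csv.DefaultSource -> CsvRelation.
        val df = sqlContext.read
          .format("com.databricks.spark.csv")
          .option("header", "true")
          .load(input)
        df.show()
        sc.stop()
      }
    }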

