org.apache.spark.sql.AnalysisException: Path does not exist: file:/C:/path/to/my/files/*.csv;

Stack Overflow | Jose Velaz | 1 month ago
  1. Reading multiple csv files with Scala Spark 2 - AnalysisException: Path does not exist

     Stack Overflow | 1 month ago | Jose Velaz
     org.apache.spark.sql.AnalysisException: Path does not exist: file:/C:/path/to/my/files/*.csv;
  2. GitHub comment 277#258342325

     GitHub | 4 months ago | caprice-j
     org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://spark-cluster-m/tmp/RtmpCrLffh/spark_csv_68cd3e8b80f87463b388b4406d92193740c5053ae737bbedd3026340bc69be98.csv;
  3. samebug tip: Check that you are using the right path.
  5. sparkr 2.0 read.df throws path does not exist error

     Stack Overflow | 5 months ago | narik
     org.apache.spark.api.r.RBackendHandler: loadDF on org.apache.spark.sql.api.r.SQLUtils failed
     Error in invokeJava(isStatic = TRUE, className, methodName, ...) : org.apache.spark.sql.AnalysisException: Path does not exist: gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.com/myData/2015/20*;
  6. samebug tip: Use java.sql.Timestamp or java.sql.Date to map BsonDateTime from MongoDB.
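The most relevant tip above is to check the path. For the Windows-style glob in the original exception, a minimal sketch of reading every CSV in a directory with Spark 2 in Scala, assuming a local SparkSession; the directory path is illustrative, not the asker's actual one:

```scala
import org.apache.spark.sql.SparkSession

object ReadCsvDir {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("read-csv-dir")
      .master("local[*]")
      .getOrCreate()

    // Passing the directory itself (with forward slashes, even on Windows)
    // lets Spark pick up all the CSV files in it without a glob in the URI.
    // The path below is a placeholder; it must exist on the machine
    // running the driver, or Spark raises the same AnalysisException.
    val df = spark.read
      .option("header", "true")
      .csv("C:/path/to/my/files")

    println(df.count())
    spark.stop()
  }
}
```

Backslashes and globs in a `file:` URI are a common source of this error on Windows; normalizing to forward slashes and verifying the directory exists before calling `csv` usually narrows the problem down quickly.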
Not finding the right solution?
Take a tour to get the most out of Samebug.

Tired of useless tips?

Automated exception search integrated into your IDE

Root Cause Analysis

  1. org.apache.spark.sql.AnalysisException

    Path does not exist: file:/C:/path/to/my/files/*.csv;

    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply()
  2. org.apache.spark
    DataSource$$anonfun$14.apply
    1. org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:377)
    2. org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
    2 frames
  3. Scala
    List.flatMap
    1. scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    2. scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    3. scala.collection.immutable.List.foreach(List.scala:381)
    4. scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    5. scala.collection.immutable.List.flatMap(List.scala:344)
    5 frames
  4. org.apache.spark
    DataSource.resolveRelation
    1. org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
    1 frame
  5. Spark Project SQL
    DataFrameReader.csv
    1. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
    2. org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:415)
    3. org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:352)
    3 frames
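The trace shows the failure surfacing in DataSource.resolveRelation while DataFrameReader.csv resolves the input paths. If glob resolution itself is the problem, one hedged workaround sketch (directory and filter are illustrative) is to enumerate the matching files on the driver and pass the explicit list to the varargs overload of `csv`:

```scala
import java.io.File
import org.apache.spark.sql.SparkSession

object ReadCsvList {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("read-csv-list")
      .master("local[*]")
      .getOrCreate()

    // Enumerate the CSV files ourselves instead of relying on a glob.
    // The directory is a placeholder for the asker's actual location.
    val csvPaths = new File("C:/path/to/my/files")
      .listFiles()
      .filter(_.getName.endsWith(".csv"))
      .map(_.getAbsolutePath)

    // DataFrameReader.csv accepts varargs paths, so the explicit list
    // bypasses glob expansion inside DataSource.resolveRelation.
    val df = spark.read.option("header", "true").csv(csvPaths: _*)
    df.printSchema()
    spark.stop()
  }
}
```

This only works for local or driver-visible filesystems; for HDFS or GCS paths like the ones in the related crashes above, listing would have to go through the corresponding Hadoop FileSystem API instead of java.io.File.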