org.apache.spark.sql.AnalysisException: Failed to find data source: com.databricks.spark.avro. Please use Spark package http://spark-packages.org/package/databricks/spark-avro;

GitHub | Manikantan22 | 4 months ago
  1. Unable to write data in Avro format with Spark 2.0.0

     GitHub | 4 months ago | Manikantan22
     org.apache.spark.sql.AnalysisException: Failed to find data source: com.databricks.spark.avro. Please use Spark package http://spark-packages.org/package/databricks/spark-avro;
  2. GitHub comment 277#258342325

     GitHub | 1 month ago | caprice-j
     org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://spark-cluster-m/tmp/RtmpCrLffh/spark_csv_68cd3e8b80f87463b388b4406d92193740c5053ae737bbedd3026340bc69be98.csv;
  3. Streaming df kinesis updated by zsxwing · Pull Request #33 · tdas/spark · GitHub

     github.com | 4 months ago
     org.apache.spark.sql.AnalysisException: Path does not exist: file:/Users/ksunitha/trunk/spark/file-path-is-incorrect.csv;
  4. GitHub comment 277#260222141

     GitHub | 3 weeks ago | caprice-j
     org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://spark-cluster-m/tmp/Rtmp77zUfk/file8c1891654a.csv;
  5. GitHub comment 277#260222141

     GitHub | 3 weeks ago | caprice-j
     org.apache.spark.sql.AnalysisException: Path does not exist: hdfs://spark-cluster-m/tmp/Rtmp77zUfk/spark_csv_68cd3e8b80f87463b388b4406d92193740c5053ae737bbedd3026340bc69be98.csv;

    Root Cause Analysis

    1. org.apache.spark.sql.AnalysisException

      Failed to find data source: com.databricks.spark.avro. Please use Spark package http://spark-packages.org/package/databricks/spark-avro;

      at org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource()
    2. org.apache.spark
      DataSource.resolveRelation
      1. org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:141)
      2. org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:78)
      3. org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:78)
      4. org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:310)
      4 frames
    3. Spark Project SQL
      DataFrameReader.load
      1. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
      2. org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)
      2 frames
    4. com.comcast.dart
      JavaSparkRDD.main
      1. com.comcast.dart.spark.JavaSparkRDD.main(JavaSparkRDD.java:41)
      1 frame
    5. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:606)
      4 frames
    6. Spark Project YARN Stable API
      ApplicationMaster$$anon$2.run
      1. org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
      1 frame