Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace together with the exception message.

Recommended solutions based on your search

Samebug tips

  1. ,
    Expert tip

    A few things cause this exception:
    1) Check if you have all jars and if they're in the correct path.
    2) Your classpath might be broken, you can define it in the command line with java -cp yourClassPath or at your IDE if you're using one.

  2. ,

    you can change your scala version to 2.11.11
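The second tip amounts to aligning the Scala version of your build with the one your Spark dependencies were compiled for. A minimal sketch of a Maven dependency section, assuming Spark on Scala 2.11 (all version numbers here are illustrative, not prescriptive):

```xml
<!-- Illustrative versions only: the Scala suffix (_2.11) of every Spark
     artifact must match the Scala version your own code compiles with. -->
<properties>
  <scala.version>2.11.11</scala.version>
  <spark.version>2.1.1</spark.version>
</properties>
<dependencies>
  <!-- spark-sql provides the built-in data sources (parquet, text, ...);
       if it is missing, or built for a different Scala version, the
       data-source lookup fails with "Failed to find data source". -->
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>${spark.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.11</artifactId>
    <version>${spark.version}</version>
  </dependency>
</dependencies>
```

If you submit with spark-submit instead of bundling Spark into your jar, mark the Spark dependencies as provided so the cluster's own (consistent) jars are used at runtime.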

Solutions on the web

via Stack Overflow by Valeria Chernenko, 1 year ago
Failed to find data source: parquet. Please find packages at http://spark-packages.org

via Stack Overflow by Hello lad, 1 year ago
Failed to find data source: com.databricks.spark.redshift. Please find packages at https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects

via Stack Overflow by kiran kumar, 1 year ago
Failed to find data source: com.databricks.spark.xml. Please find packages at https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects

via GitHub by VishnuVR1988, 1 year ago
Failed to find data source: com.databricks.spark.xml. Please find packages at https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects

via GitHub by lujea, 1 year ago
Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.ml.source.libsvm.DefaultSource could not be instantiated

via Stack Overflow by nastia klochkova, 1 year ago
Failed to find data source: text. Please find packages at http://spark-packages.org
java.lang.ClassNotFoundException: parquet.DefaultSource
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5$$anonfun$apply$1.apply(DataSource.scala:130)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5$$anonfun$apply$1.apply(DataSource.scala:130)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5.apply(DataSource.scala:130)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5.apply(DataSource.scala:130)
	at scala.util.Try.orElse(Try.scala:84)
	at org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:130)
	at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:78)
	at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:78)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:310)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
	at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
	at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
	at org.apache.spark.mllib.classification.impl.GLMClassificationModel$SaveLoadV1_0$.loadData(GLMClassificationModel.scala:77)
	at org.apache.spark.mllib.classification.LogisticRegressionModel$.load(LogisticRegression.scala:183)
	at org.apache.spark.mllib.classification.LogisticRegressionModel.load(LogisticRegression.scala)
	at my.test.spark.assembling.TopicClassifier.load(TopicClassifier.java:35)
	at my.test.spark.assembling.Main.main(Main.java:23)
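The trace shows why the exception names parquet.DefaultSource even though the code only asked for "parquet": DataSource.lookupDataSource tries the short provider name and then falls back to the same name with ".DefaultSource" appended, and both class lookups fail when no compatible spark-sql jar is on the classpath. A minimal sketch of that fallback, simplified (the real Spark code also consults a ServiceLoader of DataSourceRegister implementations):

```java
// Simplified sketch of the candidate-name fallback visible in the trace:
// Class.forName(provider) is tried, then Class.forName(provider + ".DefaultSource").
public class LookupSketch {
    static String resolve(String provider) {
        String[] candidates = { provider, provider + ".DefaultSource" };
        for (String candidate : candidates) {
            try {
                Class.forName(candidate);
                return candidate; // class found on the classpath
            } catch (ClassNotFoundException e) {
                // fall through to the next candidate
            }
        }
        // Neither candidate resolved: this is where Spark reports
        // "Failed to find data source: <provider>"
        return null;
    }

    public static void main(String[] args) {
        // Without a spark-sql jar on the classpath, "parquet" resolves to
        // neither "parquet" nor "parquet.DefaultSource", as in the trace above.
        System.out.println(resolve("parquet"));
    }
}
```

So the fix is not in the application code at TopicClassifier.java:35; it is getting a spark-sql jar (matching your Spark and Scala versions) onto the runtime classpath.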