java.lang.ClassNotFoundException: Failed to find data source: parquet. Please find packages at


Recommended solutions

Expert tip

A few things can cause this exception:
1) Check that all required JARs are present and on the correct path.
2) Your classpath might be broken; you can set it on the command line with java -cp <yourClassPath>, or in your IDE if you use one.
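When the data source class is simply missing from the classpath, the usual fix is to put the right JARs on it explicitly. The commands below are illustrative sketches only; the paths, class names, and package coordinates are assumptions, not taken from this trace:

```shell
# Run a plain Java app with an explicit classpath (paths are placeholders)
java -cp "myapp.jar:libs/*" my.test.spark.assembling.Main

# With Spark, let spark-submit pull an external data source from Maven Central
# (spark-xml coordinates shown as an example; substitute the package you need)
spark-submit --packages com.databricks:spark-xml_2.11:0.4.1 \
  --class my.test.spark.assembling.Main myapp.jar

# Or put local JARs on the executor/driver classpath directly
spark-submit --jars libs/spark-xml_2.11-0.4.1.jar \
  --class my.test.spark.assembling.Main myapp.jar
```

Note that built-in sources such as parquet ship with spark-sql, so for those the problem is usually a broken application classpath rather than a missing external package.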


You can also change your Scala version to 2.11.11, so that it matches the Scala series your Spark build was compiled against.
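The Scala version tip above usually means the application was compiled against a different Scala series than the Spark distribution it runs on. A minimal build.sbt sketch, assuming a Spark 2.1.x cluster built for Scala 2.11 (all version numbers here are assumptions; match them to your cluster):

```scala
// build.sbt -- versions are illustrative; align them with your cluster's Spark build
scalaVersion := "2.11.11"

libraryDependencies ++= Seq(
  // the %% operator appends the _2.11 suffix, which must agree with scalaVersion
  "org.apache.spark" %% "spark-core"  % "2.1.1" % "provided",
  "org.apache.spark" %% "spark-sql"   % "2.1.1" % "provided", // provides the built-in parquet source
  "org.apache.spark" %% "spark-mllib" % "2.1.1" % "provided"
)
```

Mixing artifacts with different Scala suffixes (e.g. a _2.10 library on a _2.11 Spark) commonly surfaces as ClassNotFoundException or the "incompatible DataSourceRegister" error listed below.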

Solutions on the web

via Stack Overflow by Valeria Chernenko, 1 year ago:
Failed to find data source: parquet. Please find packages at

via Stack Overflow by Hello lad, 11 months ago:
Failed to find data source: com.databricks.spark.redshift. Please find packages at

via Stack Overflow by kiran kumar, 10 months ago:
Failed to find data source: com.databricks.spark.xml. Please find packages at

via GitHub by VishnuVR1988, 1 year ago:
Failed to find data source: com.databricks.spark.xml. Please find packages at

via GitHub by lujea, 11 months ago:
Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: org.apache.spark.sql.sources.DataSourceRegister: Provider could not be instantiated

via Stack Overflow by nastia klochkova, 1 year ago:
Failed to find data source: text. Please find packages at
java.lang.ClassNotFoundException: parquet.DefaultSource
at java.lang.ClassLoader.loadClass(
at sun.misc.Launcher$AppClassLoader.loadClass(
at java.lang.ClassLoader.loadClass(
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5$$anonfun$apply$1.apply(DataSource.scala:130)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5$$anonfun$apply$1.apply(DataSource.scala:130)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5.apply(DataSource.scala:130)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$5.apply(DataSource.scala:130)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:130)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:78)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:78)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:310)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:427)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:411)
at org.apache.spark.mllib.classification.impl.GLMClassificationModel$SaveLoadV1_0$.loadData(GLMClassificationModel.scala:77)
at org.apache.spark.mllib.classification.LogisticRegressionModel$.load(LogisticRegression.scala:183)
at my.test.spark.assembling.TopicClassifier.load(
at my.test.spark.assembling.Main.main(
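The trace bottoms out in DataSource.lookupDataSource: Spark tries to load a provider class for the short name "parquet" and, when nothing on the classpath matches, surfaces ClassNotFoundException: parquet.DefaultSource. A simplified, self-contained sketch of that fallback (the class and helper names are illustrative, not Spark's actual implementation):

```java
// Simplified sketch of resolving a data source short name on the classpath.
// Mirrors the idea behind Spark's DataSource.lookupDataSource, not its real code.
public class LookupSketch {
    static Class<?> lookupDataSource(String provider) throws ClassNotFoundException {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        try {
            // First try the name as given ("parquet" is not itself a class, so this fails)
            return Class.forName(provider, false, loader);
        } catch (ClassNotFoundException e) {
            // Fall back to the "<name>.DefaultSource" convention used by older data sources
            return Class.forName(provider + ".DefaultSource", false, loader);
        }
    }

    public static void main(String[] args) {
        try {
            lookupDataSource("parquet");
            System.out.println("found");
        } catch (ClassNotFoundException e) {
            // Neither "parquet" nor "parquet.DefaultSource" is on this classpath
            System.out.println("Failed to find data source: parquet (" + e.getMessage() + ")");
        }
    }
}
```

This is why the exception names parquet.DefaultSource even though the code only asked for "parquet": the fallback name is the last one tried before the lookup gives up.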
