java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated

Stack Overflow | Jaffer Wilson | 2 weeks ago
  1. Fail to write to hdfs when elasticsearch-spark in classpath

    GitHub | 1 month ago | mozinrat
    java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.elasticsearch.spark.sql.DefaultSource15 not found
  2. GitHub comment 10#51852979

    GitHub | 3 years ago | dlwh
    java.util.ServiceConfigurationError: epic.models.NerModelLoader: jar:file:/Users/sammerry/.m2/repository/org/scalanlp/epic-ner-en-conll_2.10/0.1/epic-ner-en-conll_2.10-0.1.jar!/META-INF/services/epic.models.NerModelLoader:1: Illegal configuration-file syntax
  3. GitHub comment 10#51852548

    GitHub | 3 years ago | sammerry
    java.util.ServiceConfigurationError: epic.models.NerModelLoader: jar:file:/Users/sammerry/.m2/repository/org/scalanlp/epic-ner-en-conll_2.10/0.1/epic-ner-en-conll_2.10-0.1.jar!/META-INF/services/epic.models.NerModelLoader:1: Illegal configuration-file syntax
  4. Why I am getting the exception while running this java program?

    Stack Overflow | 2 weeks ago | Jaffer Wilson
    java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated
  5. Spark 2.0 DataSourceRegister configuration error while saving DataFrame as csv

    Stack Overflow | 2 months ago | Sarah Bergquist
    java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated

Root Cause Analysis
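
The chain below begins with a VerifyError thrown while java.util.ServiceLoader instantiates org.apache.spark.sql.hive.orc.DefaultSource; two sketches after the trace illustrate the discovery mechanism and a way to diagnose the mismatch.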

  1. java.lang.VerifyError

    Bad return type
    Exception Details:
      Location:
        org/apache/spark/sql/hive/orc/DefaultSource.createRelation(Lorg/apache/spark/sql/SQLContext;[Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/sources/HadoopFsRelation; @35: areturn
      Reason:
        Type 'org/apache/spark/sql/hive/orc/OrcRelation' (current frame, stack[0]) is not assignable to 'org/apache/spark/sql/sources/HadoopFsRelation' (from method signature)
      Current Frame:
        bci: @35
        flags: { }
        locals: { 'org/apache/spark/sql/hive/orc/DefaultSource', 'org/apache/spark/sql/SQLContext', '[Ljava/lang/String;', 'scala/Option', 'scala/Option', 'scala/collection/immutable/Map' }
        stack: { 'org/apache/spark/sql/hive/orc/OrcRelation' }
      Bytecode:
        0x0000000: b200 1c2b c100 1ebb 000e 592a b700 22b6
        0x0000010: 0026 bb00 2859 2c2d b200 2d19 0419 052b
        0x0000020: b700 30b0

    at java.lang.Class.getDeclaredConstructors0()
  2. Java RT
    ServiceLoader$1.next
    1. java.lang.Class.getDeclaredConstructors0(Native Method)
    2. java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
    3. java.lang.Class.getConstructor0(Class.java:3075)
    4. java.lang.Class.newInstance(Class.java:412)
    5. java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
    6. java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    7. java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    7 frames
  3. Scala
    AbstractTraversable.filter
    1. scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
    2. scala.collection.Iterator$class.foreach(Iterator.scala:893)
    3. scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    4. scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    5. scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    6. scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
    7. scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
    8. scala.collection.AbstractTraversable.filter(Traversable.scala:104)
    8 frames
  4. org.apache.spark
    ResolveDataSource$$anonfun$apply$1.applyOrElse
    1. org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:550)
    2. org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
    3. org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
    4. org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
    5. org.apache.spark.sql.execution.datasources.ResolveDataSource$$anonfun$apply$1.applyOrElse(rules.scala:58)
    6. org.apache.spark.sql.execution.datasources.ResolveDataSource$$anonfun$apply$1.applyOrElse(rules.scala:41)
    6 frames
  5. Spark Project Catalyst
    LogicalPlan.resolveOperators
    1. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    2. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    3. org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    4. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
    5. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    6. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    7. org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:331)
    8. org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
    9. org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329)
    10. org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
    10 frames
  6. org.apache.spark
    ResolveDataSource.apply
    1. org.apache.spark.sql.execution.datasources.ResolveDataSource.apply(rules.scala:41)
    2. org.apache.spark.sql.execution.datasources.ResolveDataSource.apply(rules.scala:40)
    2 frames
  7. Spark Project Catalyst
    RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply
    1. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
    2. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
    2 frames
  8. Scala
    List.foldLeft
    1. scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    2. scala.collection.immutable.List.foldLeft(List.scala:84)
    2 frames
  9. Spark Project Catalyst
    RuleExecutor$$anonfun$execute$1.apply
    1. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
    2. org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
    2 frames
  10. Scala
    List.foreach
    1. scala.collection.immutable.List.foreach(List.scala:381)
    1 frame
  11. Spark Project Catalyst
    RuleExecutor.execute
    1. org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
    1 frame
  12. Spark Project SQL
    SQLContext.sql
    1. org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:64)
    2. org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:62)
    3. org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
    4. org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
    5. org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
    6. org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
    6 frames
  13. SparkHiveSql.sparkhivesql
    queryhive.main
    1. SparkHiveSql.sparkhivesql.queryhive.main(queryhive.java:27)
    1 frame