Searched Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message.

Solutions on the web

via Stack Overflow by Jaffer Wilson, 1 year ago
org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated

via Stack Overflow by hpkong, 7 months ago
org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated

via GitHub by tomwhite, 1 year ago
java.nio.file.spi.FileSystemProvider: Provider com.google.cloud.storage.contrib.nio.CloudStorageFileSystemProvider could not be instantiated

via GitHub by ivangonzalezsaenz, 1 year ago
javax.batch.operations.JobOperator: Provider com.ibm.jbatch.container.api.impl.JobOperatorImpl could not be instantiated

via Stack Overflow by bsd, 1 year ago
org.apache.nifi.processor.Processor: Provider org.apache.nifi.processors.standard.DetectDuplicate could not be instantiated
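
Every hit above is the same failure mode: java.util.ServiceLoader found a provider class registered in a META-INF/services file but could not instantiate it. Here is a minimal sketch of that discovery mechanism, using a hypothetical MyService interface that is not part of any of the projects above:

    import java.util.ServiceLoader;

    // ServiceLoader reads META-INF/services/<fully.qualified.InterfaceName>
    // files on the classpath and reflectively instantiates each listed class
    // as the iterator advances. A provider whose bytecode fails linkage,
    // as with the VerifyError below, is reported as
    // "Provider ... could not be instantiated".
    public class ServiceLoaderSketch {
        public interface MyService { String name(); }

        public static void main(String[] args) {
            // Iterates lazily; each step may load and construct a provider.
            for (MyService s : ServiceLoader.load(MyService.class)) {
                System.out.println("loaded provider: " + s.name());
            }
        }
    }

In the trace below, verification of the provider's bytecode fails before its constructor can even run: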
java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/spark/sql/hive/orc/DefaultSource.createRelation(Lorg/apache/spark/sql/SQLContext;[Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/sources/HadoopFsRelation; @35: areturn
  Reason:
    Type 'org/apache/spark/sql/hive/orc/OrcRelation' (current frame, stack[0]) is not assignable to 'org/apache/spark/sql/sources/HadoopFsRelation' (from method signature)
  Current Frame:
    bci: @35
    flags: { }
    locals: { 'org/apache/spark/sql/hive/orc/DefaultSource', 'org/apache/spark/sql/SQLContext', '[Ljava/lang/String;', 'scala/Option', 'scala/Option', 'scala/collection/immutable/Map' }
    stack: { 'org/apache/spark/sql/hive/orc/OrcRelation' }
  Bytecode:
    0x0000000: b200 1c2b c100 1ebb 000e 592a b700 22b6
    0x0000010: 0026 bb00 2859 2c2d b200 2d19 0419 052b
    0x0000020: b700 30b0

    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
    at java.lang.Class.getConstructor0(Class.java:3075)
    at java.lang.Class.newInstance(Class.java:412)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
    at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
    at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:550)
    at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
    at org.apache.spark.sql.execution.datasources.ResolveDataSource$$anonfun$apply$1.applyOrElse(rules.scala:58)
    at org.apache.spark.sql.execution.datasources.ResolveDataSource$$anonfun$apply$1.applyOrElse(rules.scala:41)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:331)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
    at org.apache.spark.sql.execution.datasources.ResolveDataSource.apply(rules.scala:41)
    at org.apache.spark.sql.execution.datasources.ResolveDataSource.apply(rules.scala:40)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:64)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:62)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
    at SparkHiveSql.sparkhivesql.queryhive.main(queryhive.java:27)
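
Judging from the frames, this looks like a Spark version mix-up on the classpath: the calling code is clearly Spark 2.x (SparkSession.sql, Dataset$.ofRows), yet ServiceLoader is picking up org.apache.spark.sql.hive.orc.DefaultSource, whose createRelation was compiled against the Spark 1.x HadoopFsRelation API that Spark 2.0 removed, hence the "Bad return type" verification failure. One way to confirm is to list which jars register DataSourceRegister providers and which jar supplies the failing class. The ClasspathProbe below is a hypothetical diagnostic sketch, not part of any Spark API:

    import java.net.URL;
    import java.util.Enumeration;

    public class ClasspathProbe {
        public static void main(String[] args) throws Exception {
            ClassLoader cl = Thread.currentThread().getContextClassLoader();

            // Every jar that advertises a DataSourceRegister implementation.
            Enumeration<URL> services = cl.getResources(
                    "META-INF/services/org.apache.spark.sql.sources.DataSourceRegister");
            while (services.hasMoreElements()) {
                System.out.println("service file: " + services.nextElement());
            }

            // The jar supplying the provider class ServiceLoader tried to
            // instantiate. Loading it as a resource skips bytecode
            // verification, so this works even though instantiation fails.
            URL provider = cl.getResource(
                    "org/apache/spark/sql/hive/orc/DefaultSource.class");
            System.out.println("DefaultSource comes from: " + provider);
        }
    }

If the output shows service files or classes coming from two different Spark versions (for example, a stray Spark 1.6 spark-hive jar next to Spark 2.x spark-sql), aligning every Spark artifact to a single version should make the VerifyError go away.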