Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Samebug tips

  1. via playframework.com by Unknown author

    Add guice to the dependencies.

    libraryDependencies += guice
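
    For reference, a sketch of where that line belongs, assuming a Play 2.6+ project; the `guice` helper is auto-imported by the Play sbt plugin, and the project name below is a placeholder:

    // build.sbt (sketch, assuming Play 2.6+)
    name := "my-play-app"  // hypothetical project name

    // `guice` resolves to Play's play-guice dependency injection module,
    // which was split into a separate artifact in Play 2.6.
    libraryDependencies += guice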
    

Solutions on the web

via Stack Overflow by deadlock89, 2 years ago
Unsupported parquet datatype optional fixed_len_byte_array(11) amount (DECIMAL(24,7))
via GitHub by olafurpg, 1 year ago
unsupported file <console>
via GitHub by thoughtpoet, 1 year ago
Expected a single top-level record, found a union of more than one type: List({"type":"record","name":"EmailFactor","namespace":"bom.aaa.types","fields":[{"name":"email","type":"string"}]}, {"type":"record","name":"PhoneFactor","namespace
via GitHub by Antwnis, 1 year ago
Unsupported source csv:file1.csv
java.lang.RuntimeException: Unsupported datatype StructType(List())
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetRelation.scala:201)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetRelation.scala:234)
	at org.apache.spark.sql.parquet.ParquetTypesConverter$.writeMetaData(ParquetRelation.scala:267)
	at org.apache.spark.sql.parquet.ParquetRelation$.createEmpty(ParquetRelation.scala:143)
	at org.apache.spark.sql.parquet.ParquetRelation$.create(ParquetRelation.scala:122)
	at org.apache.spark.sql.execution.SparkStrategies$ParquetOperations$.apply(SparkStrategies.scala:139)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
	at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:264)
	at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:264)
	at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:265)
	at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:265)
	at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:268)
	at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:268)
	at org.apache.spark.sql.SchemaRDDLike$class.saveAsParquetFile(SchemaRDDLike.scala:66)
	at org.apache.spark.sql.SchemaRDD.saveAsParquetFile(SchemaRDDLike.scala:98)
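
For context on what produces this trace: in early Spark SQL (the 1.0.x line, where saveAsParquetFile lived on SchemaRDD), the Parquet type converter rejected any schema containing an empty struct, i.e. StructType(List()). A case class with no fields maps to exactly that type. Below is a minimal, hypothetical repro sketch, assuming a Spark 1.0-era API; the Empty/Record case classes, the object name, and the output path are all placeholders:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // A case class with no fields is reflected into StructType(List()) --
    // the exact datatype ParquetTypesConverter.fromDataType rejects.
    case class Empty()
    case class Record(name: String, empty: Empty)

    object EmptyStructRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("repro").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.createSchemaRDD  // implicit RDD[Product] -> SchemaRDD

        val rdd = sc.parallelize(Seq(Record("a", Empty())))

        // Fails with: java.lang.RuntimeException:
        //   Unsupported datatype StructType(List())
        rdd.saveAsParquetFile("/tmp/empty-struct.parquet")
      }
    }

The fix in that era was to drop or flatten the empty struct field before writing; later Spark releases support nested struct types in Parquet output.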