java.lang.RuntimeException: Unsupported datatype StructType(List())


Solutions on the web

via Stack Overflow by deadlock89, 1 year ago:
Unsupported parquet datatype optional fixed_len_byte_array(11) amount (DECIMAL(24,7))

via GitHub by olafurpg, 9 months ago:
unsupported file <console>

via GitHub by thoughtpoet, 1 year ago:
Expected a single top-level record, found a union of more than one type: List({"type":"record","name":"EmailFactor","namespace":"bom.aaa.types","fields":[{"name":"email","type":"string"}]}, {"type":"record","name":"PhoneFactor","namespace

via GitHub by Antwnis, 1 year ago:
Unsupported source csv:file1.csv
java.lang.RuntimeException: Unsupported datatype StructType(List())
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetRelation.scala:201)
at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetRelation.scala:234)
at org.apache.spark.sql.parquet.ParquetTypesConverter$.writeMetaData(ParquetRelation.scala:267)
at org.apache.spark.sql.parquet.ParquetRelation$.createEmpty(ParquetRelation.scala:143)
at org.apache.spark.sql.parquet.ParquetRelation$.create(ParquetRelation.scala:122)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:264)
at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:264)
at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:265)
at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:265)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:268)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:268)
at org.apache.spark.sql.SchemaRDDLike$class.saveAsParquetFile(SchemaRDDLike.scala:66)
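
Judging by the trace (ParquetTypesConverter and SchemaRDDLike.saveAsParquetFile point at the Spark 1.x SQL module), the failure happens when a SchemaRDD whose schema contains an empty struct, StructType(List()), is written to Parquet: the converter has no Parquet encoding for a group with zero fields. One common way to end up with an empty struct is inferring a schema from JSON records that contain an empty object. Below is a minimal, hypothetical sketch of both the reproduction and a workaround, assuming Spark 1.x APIs (SQLContext, jsonRDD, registerTempTable, saveAsParquetFile); the column names and output path are illustrative, not taken from the original report:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(
  new SparkConf().setAppName("empty-struct-repro").setMaster("local[*]"))
val sqlContext = new SQLContext(sc)

// "meta" is an empty JSON object, so schema inference assigns it the type
// StructType(List()) -- an empty struct.
val json = sc.parallelize(Seq("""{"id": 1, "meta": {}}"""))
val schemaRDD = sqlContext.jsonRDD(json)

// Writing the full schema fails with:
//   java.lang.RuntimeException: Unsupported datatype StructType(List())
// schemaRDD.saveAsParquetFile("/tmp/out.parquet")

// Workaround sketch: project away the empty-struct column before saving.
// (On the oldest 1.0 releases the method was registerAsTable.)
schemaRDD.registerTempTable("records")
val cleaned = sqlContext.sql("SELECT id FROM records")
cleaned.saveAsParquetFile("/tmp/out.parquet")
```

The same symptom can also come from case classes or manually built schemas that declare a struct field with no members; in every variant the fix is the same: drop or populate the empty struct before calling saveAsParquetFile.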
