java.lang.RuntimeException

There are no available Samebug tips for this exception.

  • GitHub comment 97#168706671 (via GitHub by matanster)
  • Can I create a function in Spark SQL? (via Stack Overflow by user3826955)
  • loading SparkR data frame in Hive (via Stack Overflow by Arun Gunalan)
  • crash report (via GitHub by syf097)
  • Client crush (via GitHub by takilazy)
  • master does not build/run? (via GitHub by hffmnn)
    • java.lang.RuntimeException: Unsupported datatype StructType(List())
        at scala.sys.package$.error(package.scala:27)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetRelation.scala:201)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetRelation.scala:234)
        at org.apache.spark.sql.parquet.ParquetTypesConverter$.writeMetaData(ParquetRelation.scala:267)
        at org.apache.spark.sql.parquet.ParquetRelation$.createEmpty(ParquetRelation.scala:143)
        at org.apache.spark.sql.parquet.ParquetRelation$.create(ParquetRelation.scala:122)
        at org.apache.spark.sql.execution.SparkStrategies$ParquetOperations$.apply(SparkStrategies.scala:139)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:264)
        at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:264)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:265)
        at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:265)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:268)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:268)
        at org.apache.spark.sql.SchemaRDDLike$class.saveAsParquetFile(SchemaRDDLike.scala:66)
        at org.apache.spark.sql.SchemaRDD.saveAsParquetFile(SchemaRDD.scala:98)
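
Since no tip has been posted yet, here is a minimal reproduction sketch, assuming the Spark 1.1-era SchemaRDD API that the trace points at (applySchema / saveAsParquetFile). The object name, field names, and output path below are illustrative; the only essential ingredient is a column whose type is an empty struct, StructType(Seq()), which the Parquet converter cannot map to a Parquet group and therefore rejects with the error above.

    // Sketch: reproducing "Unsupported datatype StructType(List())" with the
    // Spark 1.1-era SchemaRDD API. Names and the output path are illustrative.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql._   // SQLContext, Row, and the SQL data types in Spark 1.1/1.2

    object EmptyStructRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("empty-struct-repro").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)

        // A schema whose second column is an empty struct: StructType(Seq()) prints as
        // StructType(List()), exactly the datatype the Parquet converter rejects.
        val schema = StructType(Seq(
          StructField("id",    StringType,        nullable = false),
          StructField("extra", StructType(Seq()), nullable = true)   // <- offending field
        ))

        val rows = sc.parallelize(Seq(Row("a", null), Row("b", null)))
        val schemaRDD = sqlContext.applySchema(rows, schema)

        // Fails while the Parquet schema metadata is written, before any rows are serialized:
        // java.lang.RuntimeException: Unsupported datatype StructType(List())
        schemaRDD.saveAsParquetFile("/tmp/empty-struct-repro.parquet")

        sc.stop()
      }
    }

As the trace shows, the failure occurs during query planning (ParquetRelation.createEmpty writing the metadata), not while serializing rows, so the usual workaround is to make sure every nested struct carries at least one named field, or to drop or flatten the empty struct column before calling saveAsParquetFile.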

    Users with the same issue

    guizmaii (7 times)
    gpgekko (1 time)
    rp (12 times)
    neowulf33 (1 time)
    Handemelindo (1 time)
    90 more bugmates