java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary

Solutions on the web

  • via Apache's JIRA Issue Tracker by Egor Pahomov, 4 months ago
    org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
Stack trace

    java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
        at org.apache.parquet.column.Dictionary.decodeToLong(Dictionary.java:52) [parquet-column-1.7.0.jar:1.7.0]
        at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getLong(OnHeapColumnVector.java:274) [spark-sql_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.sql.execution.vectorized.ColumnVector.getDecimal(ColumnVector.java:588) [spark-sql_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) [na:na]
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) [spark-sql_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370) [spark-sql_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246) [spark-sql_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240) [spark-sql_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.scheduler.Task.run(Task.scala:86) [spark-core_2.11-2.0.1.jar:2.0.1]
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) [spark-core_2.11-2.0.1.jar:2.0.1]
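The trace shows Spark's vectorized Parquet reader calling `decodeToLong` on a dictionary that only holds binary values, and the base `Dictionary` class answering with `UnsupportedOperationException` naming the concrete dictionary class. The following is a minimal sketch of that mechanism only, not Parquet's actual code; all class names here (`SketchDictionary`, `SketchBinaryDictionary`) are hypothetical stand-ins:

```java
// Sketch of the failure pattern: a base class whose decode methods throw
// unless a subclass overrides them, so decoding a binary-only dictionary
// to long surfaces as UnsupportedOperationException with the class name.
class Main {
    public static void main(String[] args) {
        SketchDictionary dict = new SketchBinaryDictionary(new String[]{"a", "b"});
        System.out.println(dict.decodeToBinary(1)); // prints "b"
        try {
            // A long read against a binary dictionary, like getLong() in the trace.
            dict.decodeToLong(0);
        } catch (UnsupportedOperationException e) {
            // Message carries the concrete class, mirroring the error above.
            System.out.println("UnsupportedOperationException: " + e.getMessage());
        }
    }
}

abstract class SketchDictionary {
    // Base implementation rejects types it cannot decode to.
    public long decodeToLong(int id) {
        throw new UnsupportedOperationException(getClass().getName());
    }
    public abstract String decodeToBinary(int id);
}

class SketchBinaryDictionary extends SketchDictionary {
    private final String[] values;
    SketchBinaryDictionary(String[] values) { this.values = values; }
    @Override public String decodeToBinary(int id) { return values[id]; }
}
```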

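This error is often reported when a Parquet decimal column is dictionary-encoded as binary in the file while the reader expects a long-backed representation (for example, files written with a different decimal precision than the table schema declares). A commonly suggested mitigation is to fall back from the vectorized reader (visible in the trace via `OnHeapColumnVector`) to the row-based one. `spark.sql.parquet.enableVectorizedReader` is a real Spark SQL configuration key; whether it resolves a given case depends on the data, so treat this as a hedged suggestion rather than a confirmed fix:

```
# In spark-defaults.conf (or via spark.conf.set at runtime):
spark.sql.parquet.enableVectorizedReader  false
```

If the mismatch comes from inconsistent decimal precision across files, rewriting the data with a single, consistent schema is the more durable fix.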