java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary

GitHub | ash211 | 2 weeks ago
  1. Support reading DECIMAL(18,2) columns from Parquet

    GitHub | 2 weeks ago | ash211
    java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
    (a minimal repro sketch follows this list)
  2. Spark: error reading DateType columns in partitioned parquet data

    Stack Overflow | 1 month ago | capitalistpug
    java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
  3. java.io.IOException: Block 5922357248 is not available in Alluxio

    Google Groups | 6 months ago | Chanh Le
    java.io.IOException: Block 124537274368 is not available in Alluxio
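
    Both Parquet reports above come down to the same mismatch: the column values were written with a binary-backed dictionary encoding (PlainValuesDictionary$PlainBinaryDictionary), while Spark's vectorized reader asks the dictionary to decode a primitive long. A minimal sketch of the failing read, assuming Spark 2.0.x and a Parquet file whose DECIMAL(18,2) column was written in a legacy binary layout; the path, column name, and object name below are hypothetical:

        // Hypothetical repro: "amount" is a DECIMAL(18,2) column stored as
        // binary (e.g., written by Hive or an older Spark using the legacy
        // Parquet format). Path and names are illustrative only.
        import org.apache.spark.sql.SparkSession

        object DecimalRepro {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .appName("decimal-repro")
              .master("local[*]")
              .getOrCreate()

            val df = spark.read.parquet("/data/legacy_decimals.parquet")

            // Materializing the column drives it through whole-stage codegen
            // and the vectorized reader, producing the trace analyzed below.
            df.select("amount").show()
          }
        }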

    Root Cause Analysis

    1. java.lang.UnsupportedOperationException

      org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary

      at org.apache.parquet.column.Dictionary.decodeToLong()
    2. org.apache.parquet
      Dictionary.decodeToLong
      1. org.apache.parquet.column.Dictionary.decodeToLong(Dictionary.java:52)[parquet-column-1.7.0.jar:1.7.0]
      1 frame
    3. org.apache.spark
      ColumnVector.getDecimal
      1. org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getLong(OnHeapColumnVector.java:274)[spark-sql_2.11-2.0.1.jar:2.0.1]
      2. org.apache.spark.sql.execution.vectorized.ColumnVector.getDecimal(ColumnVector.java:588)[spark-sql_2.11-2.0.1.jar:2.0.1]
      2 frames
    4. Spark Project Catalyst
      GeneratedClass$GeneratedIterator.processNext
      1. org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)[na:na]
      1 frame
    5. Spark Project SQL
      SparkPlan$$anonfun$4.apply
      1. org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)[spark-sql_2.11-2.0.1.jar:2.0.1]
      2. org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)[spark-sql_2.11-2.0.1.jar:2.0.1]
      3. org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)[spark-sql_2.11-2.0.1.jar:2.0.1]
      4. org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)[spark-sql_2.11-2.0.1.jar:2.0.1]
      4 frames
    6. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)[spark-core_2.11-2.0.1.jar:2.0.1]
      2. org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)[spark-core_2.11-2.0.1.jar:2.0.1]
      3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)[spark-core_2.11-2.0.1.jar:2.0.1]
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)[spark-core_2.11-2.0.1.jar:2.0.1]
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)[spark-core_2.11-2.0.1.jar:2.0.1]
      6. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)[spark-core_2.11-2.0.1.jar:2.0.1]
      7. org.apache.spark.scheduler.Task.run(Task.scala:86)[spark-core_2.11-2.0.1.jar:2.0.1]
      8. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)[spark-core_2.11-2.0.1.jar:2.0.1]
      8 frames
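
    The grouped frames trace the failure path: generated code calls ColumnVector.getDecimal, which for DECIMAL(18,2) (precision 18 fits in a long) reads the column via OnHeapColumnVector.getLong, ending in Dictionary.decodeToLong; PlainBinaryDictionary does not override decodeToLong, so the abstract Dictionary base class throws UnsupportedOperationException with the dictionary's class name as the message. A commonly suggested workaround on Spark 2.0.x is to fall back to the row-based Parquet reader. A sketch, assuming a live SparkSession named spark and the same hypothetical path and column as above:

        // spark.sql.parquet.enableVectorizedReader is an existing Spark SQL
        // setting; disabling it routes the scan through the row-based Parquet
        // reader, which decodes binary-backed decimals without decodeToLong.
        spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")

        // Hypothetical path and column, matching the repro sketch above.
        val df = spark.read.parquet("/data/legacy_decimals.parquet")
        df.select("amount").show()

    Rewriting the data with a writer that stores DECIMAL(18,2) as INT64 (the non-legacy Parquet layout) should also sidestep the binary dictionary entirely.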