java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
	at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
	at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
	at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
	at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
	at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)

Google Groups | Uri Laserson | 3 years ago

    Re: Using Parquet from an interactive Spark shell

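The NoClassDefFoundError above points at Parquet's shaded Jackson classes (the parquet.org.codehaus.jackson package), which are normally packaged in the parquet-jackson artifact. A common cause when running from an interactive Spark shell is an incomplete or mismatched set of Parquet jars on the shell's classpath. A minimal sketch of launching with the jars passed explicitly — the jar names and versions below are illustrative, not taken from the thread:

```shell
# Hedged sketch: pass a consistent set of Parquet jars to the shell.
# parquet-jackson carries the shaded parquet.org.codehaus.jackson.* classes
# that the footer reader fails to load here. Versions are examples only.
spark-shell --jars parquet-hadoop-1.6.0.jar,parquet-column-1.6.0.jar,parquet-format-2.2.0.jar,parquet-jackson-1.6.0.jar
```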

    Root Cause Analysis

    1. java.io.IOException

      Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
      at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
      at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
      at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
      at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
      at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
      at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)

      at scala.Option.getOrElse()
    2. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:120)
      1 frame
    3. Spark
      RDD.partitions
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
      1 frame
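A NoClassDefFoundError of this shape means the class was referenced at runtime but absent from the classpath. One quick way to confirm which side of the problem you are on is to probe for the shaded class directly. A minimal sketch — only the class name comes from the stack trace above; the probe itself is illustrative:

```java
// Hedged sketch: probe the classpath for the shaded Jackson class that the
// NoClassDefFoundError above reports as missing. The class name is taken
// from the stack trace; everything else here is illustrative.
public class ClasspathProbe {
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The shaded class Parquet's footer reader fails to load:
        String shaded = "parquet.org.codehaus.jackson.JsonGenerationException";
        System.out.println(shaded + " present: " + isPresent(shaded));
        // A JDK class, as a sanity check that the probe itself works:
        System.out.println("java.util.List present: " + isPresent("java.util.List"));
    }
}
```

If the probe prints false for the shaded class inside the same shell that throws the error, the fix is a classpath change, not a code change.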