java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
    at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
    at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
    at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
    at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
    at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
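This trace comes from a spark-shell session (the $iwC frames in the full trace further down are REPL-generated wrappers): Spark's NewHadoopRDD asks ParquetInputFormat for splits, which reads the Parquet file footers and fails because the Jackson classes that Parquet relocates under parquet.org.codehaus.jackson are not on the classpath. A minimal sketch of the kind of call that produces this trace, assuming parquet-avro read support; the input path, the read-support class, and the GenericRecord value type are illustrative assumptions, not taken from the report:

    import org.apache.avro.generic.GenericRecord
    import org.apache.hadoop.mapreduce.Job
    import parquet.avro.AvroReadSupport
    import parquet.hadoop.ParquetInputFormat

    // Hadoop 2 style job handle, used here only to carry configuration
    val job = Job.getInstance(sc.hadoopConfiguration)
    // Tell ParquetInputFormat how to materialize records (assumed Avro here)
    ParquetInputFormat.setReadSupportClass(job, classOf[AvroReadSupport[GenericRecord]])

    val records = sc.newAPIHadoopFile(
      "hdfs:///path/to/data.parquet",              // hypothetical input path
      classOf[ParquetInputFormat[GenericRecord]],
      classOf[Void],                               // Parquet exposes Void keys
      classOf[GenericRecord],
      job.getConfiguration)

    // Forcing an action computes partitions: getSplits -> readAllFootersInParallel,
    // which is where the NoClassDefFoundError above surfaces
    records.count()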

Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message.

Recommended solutions based on your search

Samebug tips

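No tip has been posted for this exception yet, so here is a hedged starting point rather than a confirmed fix. A NoClassDefFoundError on parquet/org/codehaus/jackson/JsonGenerationException means the JVM loaded ParquetFileReader but could not resolve the Jackson classes that Parquet relocates under its own package prefix. You can probe from the same spark-shell session whether those relocated classes are visible on the driver at all; the class name below is copied directly from the error message:

    // Probe the driver classpath for the relocated Jackson classes that
    // parquet.hadoop.ParquetFileReader needs while reading footers.
    try {
      Class.forName("parquet.org.codehaus.jackson.JsonGenerationException")
      println("Shaded Jackson is present; suspect conflicting Parquet jar versions instead.")
    } catch {
      case _: ClassNotFoundException =>
        println("Shaded Jackson is missing from the driver classpath.")
    }

If the probe fails, the Parquet jar on the classpath was probably built or trimmed without the shaded Jackson classes; restarting the shell with a complete parquet-hadoop jar supplied via spark-shell --jars is one plausible remedy.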

Solutions on the web

via Google Groups by Uri Laserson, 1 year ago
java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
    at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
    at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
    at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
    at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
    at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:863)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:20)
    at $iwC$$iwC$$iwC.<init>(<console>:25)
    at $iwC$$iwC.<init>(<console>:27)
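For an application jar rather than an ad hoc shell session, the same classes can be pulled in through the build. The pre-Apache package names in this trace (parquet.hadoop rather than org.apache.parquet.hadoop) point at the old com.twitter artifacts; a hedged build.sbt sketch, with the coordinates and the 1.6.0 version as assumptions to check against your cluster:

    // Hypothetical build.sbt fragment. Stock parquet-hadoop jars bundle the
    // relocated parquet.org.codehaus.jackson classes; parquet-avro matches the
    // AvroReadSupport assumed in the sketch near the top of this page.
    libraryDependencies ++= Seq(
      "com.twitter" % "parquet-hadoop" % "1.6.0",
      "com.twitter" % "parquet-avro"   % "1.6.0"
    )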

Users with the same issue

You are the first user to have seen this exception.
