java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException

Recommended solutions based on your search

Samebug tips

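Tip (unverified, inferred from the reports below): parquet/org/codehaus/jackson/JsonGenerationException is Jackson as re-packaged by Parquet itself, i.e. the shaded classes shipped in the parquet-jackson module of the pre-Apache (com.twitter) Parquet releases. The NoClassDefFoundError therefore usually means those shaded classes never reached the driver or executor classpath, for example because an assembly pulled in parquet-hadoop but dropped parquet-jackson during the jar merge. A minimal sketch of the dependency side of the fix, assuming an sbt build; the com.twitter coordinates and version 1.5.0 are assumptions and should match the parquet-hadoop already in use:

// build.sbt sketch (assumption: sbt project on the pre-Apache "com.twitter"
// Parquet coordinates; use the version that matches your parquet-hadoop).
libraryDependencies ++= Seq(
  "com.twitter" % "parquet-hadoop"  % "1.5.0",
  // parquet-jackson carries the shaded parquet.org.codehaus.jackson.* classes
  // named in the NoClassDefFoundError above.
  "com.twitter" % "parquet-jackson" % "1.5.0"
)

When the job is launched from spark-shell rather than from an assembly, shipping the same jars explicitly (for example via the --jars option on Spark versions that support it) is the equivalent check.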

Solutions on the web

via Google Groups by Uri Laserson, 1 year ago
Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
via apache.org by Unknown author, 1 year ago
Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
via spark-user by Andrew Ash, 1 year ago
Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
via GitHub by andypetrella, 2 years ago
Could not read footer: java.lang.RuntimeException: file:/home/noootsab/data/genomics/sim_reads_aligned.bam.adam/part-r-00000.gz.parquet.crc is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [88, 94, 33, 60]
via GitHub by ryan-williams, 2 years ago
Could not read footer: java.lang.RuntimeException: hdfs://demeter-nn1.demeter.hpc.mssm.edu:8020/user/willir31/data/set3/normal/set3.normal.fq/part-01487 is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [67, 68, 67, 10]
via github.com by Unknown author, 1 year ago
Could not read footer: java.lang.RuntimeException: file:/home/noootsab/data/genomics/sim_reads_aligned.bam.adam/part-r-00000.gz.parquet.crc is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [88, 94, 33, 60]
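The three "expected magic number at tail [80, 65, 82, 49]" reports above are a different failure mode from the headline exception: 80, 65, 82, 49 is simply the ASCII footer magic "PAR1" that every Parquet file must end with, and the failing paths are a .crc checksum file and a non-Parquet shard, meaning the input directory handed to the reader contained files that were never Parquet. A minimal sketch of that tail check for a local file, with a hypothetical path (the actual readers do this through the Hadoop FileSystem API):

import java.io.RandomAccessFile
import java.util.Arrays

// Sketch: does a local file end with the 4-byte Parquet footer magic "PAR1"
// ([80, 65, 82, 49])? Stray .crc or FASTQ shards fail this check.
def looksLikeParquet(path: String): Boolean = {
  val raf = new RandomAccessFile(path, "r")
  try {
    if (raf.length < 4) false
    else {
      raf.seek(raf.length - 4)
      val tail = new Array[Byte](4)
      raf.readFully(tail)
      Arrays.equals(tail, "PAR1".getBytes("US-ASCII"))
    }
  } finally raf.close()
}

looksLikeParquet("/data/sim_reads_aligned.bam.adam/part-r-00000.gz.parquet")  // hypothetical path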
java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:863)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:20)
at $iwC$$iwC$$iwC.<init>(<console>:25)
at $iwC$$iwC.<init>(<console>:27)
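For context on the frames above: the $iwC classes are the Scala REPL's wrapper objects, so the job was typed into spark-shell, and ParquetInputFormat.getSplits is reached through NewHadoopRDD, i.e. a newAPIHadoopFile read. A hedged reconstruction of that kind of call follows; the path, key/value classes and GroupReadSupport are assumptions, not taken from the report (the original job, apparently ADAM, uses its own record classes):

import org.apache.hadoop.mapreduce.Job
import parquet.hadoop.ParquetInputFormat
import parquet.hadoop.example.GroupReadSupport
import parquet.example.data.Group

// sc is the SparkContext provided by spark-shell.
val job = new Job(sc.hadoopConfiguration)
ParquetInputFormat.setReadSupportClass(job, classOf[GroupReadSupport])

val records = sc.newAPIHadoopFile(
  "hdfs:///user/someone/reads.adam",   // hypothetical input path
  classOf[ParquetInputFormat[Group]],
  classOf[Void],
  classOf[Group],
  job.getConfiguration)

// Any action forces getSplits -> readAllFootersInParallel, which is where a
// missing shaded Jackson class surfaces as this NoClassDefFoundError.
records.count()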
