java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
    at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
    at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
    at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
    at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
    at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)

Google Groups | Uri Laserson | 3 years ago
Here are the best solutions we found on the Internet.
  1. Re: Using Parquet from an interactive Spark shell
     Google Groups | 3 years ago | Uri Laserson
     java.io.IOException: Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
       at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
       at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
       at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
       at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
       at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
       at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)
  2. Re: spark with standalone HBase
     apache.org | 2 years ago
     java.io.IOException: No table was provided.
       at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:154)
  3. Spark, mail # user - Error in "java.io.IOException: No input paths specified in job" - 2016-03-17, 13:22
     search-hadoop.com | 1 year ago
     java.io.IOException: No input paths specified in job
       at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201)
       at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
  4. GitHub comment 4#242973525
     GitHub | 8 months ago | ww102111
     java.io.IOException: Not a file: file:/run
       at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:320)
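
    The two FileInputFormat errors above usually trace back to the input path itself: "No input paths specified in job" means the job configuration ended up with an empty input path (an unset shell variable is a common cause), while "Not a file" means the path resolved to a directory whose listing contains subdirectories, which the old mapred API does not recurse into by default. A quick sketch of a check and a workaround — the path below is a placeholder, and forwarding the Hadoop key via the spark.hadoop. prefix is an assumption to adapt to your setup:

    ```shell
    # Confirm the intended input path exists and actually contains files
    # (placeholder path; an empty variable passed as the path is a common
    # source of "No input paths specified in job").
    hadoop fs -ls hdfs:///data/input

    # For "Not a file: ..." on a nested directory tree, ask FileInputFormat
    # to recurse into subdirectories instead of failing on them. The
    # spark.hadoop. prefix forwards the Hadoop key into the job conf.
    spark-shell --conf spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive=true
    ```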

Root Cause Analysis

    1. java.io.IOException

      Could not read footer: java.lang.NoClassDefFoundError: parquet/org/codehaus/jackson/JsonGenerationException
        at parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:189)
        at parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:145)
        at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:354)
        at parquet.hadoop.ParquetInputFormat.getFooters(ParquetInputFormat.java:339)
        at parquet.hadoop.ParquetInputFormat.getSplits(ParquetInputFormat.java:246)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:85)

      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply()
    2. Spark
      RDD$$anonfun$partitions$2.apply
      1. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:207)
      2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
      2 frames
    3. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:120)
      1 frame
    4. Spark
      RDD.collect
      1. org.apache.spark.rdd.RDD.partitions(RDD.scala:205)
      2. org.apache.spark.SparkContext.runJob(SparkContext.scala:863)
      3. org.apache.spark.rdd.RDD.collect(RDD.scala:602)
      3 frames
    5. Unknown
      $iwC.<init>
      1. $iwC$$iwC$$iwC$$iwC.<init>(<console>:20)
      2. $iwC$$iwC$$iwC.<init>(<console>:25)
      3. $iwC$$iwC.<init>(<console>:27)
      4. $iwC.<init>(<console>:29)
      4 frames
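
The frames above all descend from ParquetInputFormat.getSplits, and the missing class lives in the parquet.org.codehaus.jackson package — that is, Jackson classes shaded inside the Parquet jars (the parquet-jackson module in most parquet-mr versions). The usual remedy is to put the complete set of Parquet jars on the Spark shell's classpath, not just parquet-hadoop. A minimal sketch; all jar paths and version numbers below are placeholders to adjust to your build:

```shell
# Placeholder paths/versions: point these at the Parquet jars you actually use.
# Leaving out the jar that bundles the shaded parquet/org/codehaus/jackson
# classes produces exactly this NoClassDefFoundError when footers are read.
JARS=/opt/jars/parquet-hadoop-1.6.0.jar
JARS=$JARS,/opt/jars/parquet-column-1.6.0.jar
JARS=$JARS,/opt/jars/parquet-common-1.6.0.jar
JARS=$JARS,/opt/jars/parquet-encoding-1.6.0.jar
JARS=$JARS,/opt/jars/parquet-format-2.2.0.jar
JARS=$JARS,/opt/jars/parquet-jackson-1.6.0.jar

# --jars ships the jars to the driver and executors of the shell session.
spark-shell --jars "$JARS"
```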