parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file file:/home/file/ALL.adam/part-r-00082.gz.parquet

GitHub | car2008 | 6 months ago
Related reports found on the Internet:

  1. GitHub comment 1121#241643536 (GitHub | 6 months ago | car2008)
     parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file file:/home/file/ALL.adam/part-r-00027.gz.parquet

  2. GitHub comment 1121#242003738 (GitHub | 6 months ago | car2008)
     parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file hdfs://192.168.2.85:9000/user/ALL.adam/part-r-00001.gz.parquet

Root Cause Analysis

java.lang.ClassCastException: org.apache.avro.generic.GenericData$Record cannot be cast to org.bdgenomics.formats.avro.Variant
    at org.bdgenomics.formats.avro.Genotype.put(Genotype.java:148)
    at parquet.avro.AvroIndexedRecordConverter.set(AvroIndexedRecordConverter.java:143)
    at parquet.avro.AvroIndexedRecordConverter.access$000(AvroIndexedRecordConverter.java:39)
    at parquet.avro.AvroIndexedRecordConverter$1.add(AvroIndexedRecordConverter.java:78)
    at parquet.avro.AvroIndexedRecordConverter.end(AvroIndexedRecordConverter.java:163)
    at parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:413)
    at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:209)
    at parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:168)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
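
The root cause is that parquet-avro's AvroIndexedRecordConverter hands Genotype.put a nested record decoded as a generic org.apache.avro.generic.GenericData$Record where the specific class org.bdgenomics.formats.avro.Variant is expected. This often indicates that the Avro read schema does not line up with the specific classes on the classpath, for example when the bdg-formats version that wrote the files differs from the one reading them. Below is a minimal sketch of one way to read such files with the specific classes pinned, assuming a matching bdg-formats jar is on the classpath; the path is taken from the report above, and the Spark/Hadoop wiring is illustrative rather than ADAM's own loader:

    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.{SparkConf, SparkContext}
    import org.bdgenomics.formats.avro.Genotype
    import parquet.avro.AvroParquetInputFormat

    val sc = new SparkContext(new SparkConf().setAppName("read-adam-genotypes"))
    val job = Job.getInstance(sc.hadoopConfiguration)

    // Pin the Avro read schema to the specific class's schema so that
    // parquet-avro materializes Genotype (and its nested Variant) records
    // instead of falling back to GenericData$Record.
    AvroParquetInputFormat.setAvroReadSchema(job, Genotype.getClassSchema)

    val genotypes = sc.newAPIHadoopFile(
      "hdfs://192.168.2.85:9000/user/ALL.adam",  // path from the report above
      classOf[AvroParquetInputFormat[Genotype]],
      classOf[Void],
      classOf[Genotype],
      job.getConfiguration)
      .map(_._2)                                 // keep only the record values

If the cast error persists with the read schema pinned this way, comparing the schema embedded in the Parquet files (e.g. via parquet-tools meta) against Genotype.getClassSchema from the jar actually on the classpath is the next thing to check.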