org.apache.spark.SparkException: Job aborted due to stage failure: Task 8 in stage 0.0 failed 4 times, most recent failure: Lost task 8.3 in stage 0.0 (TID 16, spark04): parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file file:/home/file/ALL.adam/part-r-00027.gz.parquet

Recommended solutions

Samebug tips

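The frame at org.bdgenomics.formats.avro.Genotype.put(Genotype.java:148) is parquet-avro pushing a decoded column value into a compiled Genotype record by field index. A failure on the very first record (value 0 in block 0) usually means the Avro schema embedded in the Parquet file (the writer's schema, from whichever ADAM/bdg-formats version produced ALL.adam) no longer matches the Genotype class on the reading classpath. Reading the file with the same ADAM version that wrote it, or regenerating the file with the current version, typically resolves this.

To confirm, compare the writer's schema stored in the Parquet footer against the compiled one. Below is a minimal spark-shell sketch, assuming parquet-mr 1.x (the pre-org.apache parquet.* packages seen in this trace) and bdg-formats on the classpath; the footer key names are an assumption, so both known variants are checked:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import parquet.hadoop.ParquetFileReader

    // The failing part file from the trace; substitute your own path.
    val file = new Path("file:/home/file/ALL.adam/part-r-00027.gz.parquet")

    // Read only the footer: no rows are decoded, so this succeeds even
    // when the record converter cannot.
    val footer = ParquetFileReader.readFooter(new Configuration(), file)
    val kv = footer.getFileMetaData.getKeyValueMetaData

    // parquet-avro stores the writer's Avro schema in the footer metadata;
    // newer releases use the key "parquet.avro.schema", older ones "avro.schema".
    val writerSchema = Option(kv.get("parquet.avro.schema"))
      .orElse(Option(kv.get("avro.schema")))

    println(writerSchema.getOrElse("no embedded Avro schema found"))
    // The reader's schema, compiled into bdg-formats on the classpath:
    println(org.bdgenomics.formats.avro.Genotype.SCHEMA$.toString(true))

If the two schemas differ in field count, order, or types, the version mismatch is confirmed.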

Solutions on the web

via GitHub by car2008, 1 year ago
org.apache.spark.SparkException: Job aborted due to stage failure: Task 8 in stage 0.0 failed 4 times, most recent failure: Lost task 8.3 in stage 0.0 (TID 16, spark04): parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file file:/home/file/ALL.adam/part-r-00027.gz.parquet
at org.bdgenomics.formats.avro.Genotype.put(Genotype.java:148)
at parquet.avro.AvroIndexedRecordConverter.set(AvroIndexedRecordConverter.java:143)
at parquet.avro.AvroIndexedRecordConverter.access$000(AvroIndexedRecordConverter.java:39)
at parquet.avro.AvroIndexedRecordConverter$1.add(AvroIndexedRecordConverter.java:78)
at parquet.avro.AvroIndexedRecordConverter.end(AvroIndexedRecordConverter.java:163)
at parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:413)
at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:209)
at parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:168)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
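Everything above the Spark frames runs inside parquet-avro's record materializer: InternalParquetRecordReader.nextKeyValue decodes the next row, and AvroIndexedRecordConverter assembles it by calling put(index, value) on a Genotype instance. When the schema that wrote the file and the Genotype class reading it disagree about what lives at a given field index, that put throws for every row, so the task fails deterministically on all four attempts; this is consistent with a schema/version mismatch rather than a flaky executor or a corrupt block.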
