Solutions on the web

via GitHub by car2008, 1 year ago:
Can not read value at 0 in block 0 in file file:/home/file/ALL.adam/part-r-00091.gz.parquet

via GitHub by car2008, 1 year ago:
Can not read value at 0 in block 0 in file hdfs://192.168.2.85:9000/user/ALL.adam/part-r-00001.gz.parquet

via GitHub by Fantoccini, 2 years ago:
Can not read value at 0 in block -1 in file hdfs://nameservice1/prod/view/warehouse/HLS/LOAN_MASTER_SZLNMST/year=2015/month=07/day=06/part-1436198627511-00000-00000-m-00000.parquet

via Google Groups by Unknown author, 1 year ago:
Can not read value at 0 in block 0 in file file:/.../src/test/resources/test_data/test.align.adam/part-r-00000.gz.parquet

via Stack Overflow by Nagaraj Malaiappan, 2 years ago:
Can not read value at 1 in block 0 in file hdfs://quickstart.cloudera:8020/parq/customer/wocomp/part-m-00000.parquet

via Google Groups by gr...@cloudera.com, 2 years ago:
Can not read value at 0 in block -1 in file hdfs://jenkins-parquet-1.ent.cloudera.com:8020/user/hive/warehouse/n1pu/-5600290658369385718--7673663513613548897_995880915_data.0
java.lang.ClassCastException: org.apache.avro.generic.GenericData$Record cannot be cast to org.bdgenomics.formats.avro.Variant
	at org.bdgenomics.formats.avro.Genotype.put(Genotype.java:148)
	at parquet.avro.AvroIndexedRecordConverter.set(AvroIndexedRecordConverter.java:143)
	at parquet.avro.AvroIndexedRecordConverter.access$000(AvroIndexedRecordConverter.java:39)
	at parquet.avro.AvroIndexedRecordConverter$1.add(AvroIndexedRecordConverter.java:78)
	at parquet.avro.AvroIndexedRecordConverter.end(AvroIndexedRecordConverter.java:163)
	at parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:413)
	at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:209)
	at parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:168)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
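The frames point at parquet-avro's record converter: when the Avro writer schema embedded in the Parquet file does not resolve against the org.bdgenomics.formats.avro classes on the reader's classpath (commonly a bdg-formats/ADAM version mismatch between the job that wrote the file and the job reading it), the converter materialises the nested record as a GenericData.Record, and Genotype.put then fails the cast to Variant. One commonly suggested workaround is to pin the Avro read schema to the Genotype class actually compiled into the reading job. The Scala sketch below is an assumption-laden illustration, not a confirmed fix for any of the reports above: it assumes Spark with the pre-org.apache parquet.avro bindings seen in the trace, bdg-formats on the classpath, a hypothetical object name ReadGenotypesSketch, and a placeholder input path.

import org.apache.hadoop.mapreduce.Job
import org.apache.spark.{SparkConf, SparkContext}
import org.bdgenomics.formats.avro.Genotype
import parquet.avro.AvroReadSupport
import parquet.hadoop.ParquetInputFormat

object ReadGenotypesSketch {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setAppName("read-genotypes"))
    val job = Job.getInstance(sc.hadoopConfiguration)

    // Decode Parquet pages through the Avro read support.
    ParquetInputFormat.setReadSupportClass(job, classOf[AvroReadSupport[Genotype]])

    // Pin the reader schema to the Genotype class on this job's classpath,
    // so nested records such as Variant are materialised as specific
    // records instead of falling back to GenericData.Record.
    AvroReadSupport.setAvroReadSchema(job.getConfiguration, Genotype.getClassSchema)

    val genotypes = sc.newAPIHadoopFile(
      "hdfs:///user/ALL.adam",                 // placeholder input path
      classOf[ParquetInputFormat[Genotype]],
      classOf[Void],
      classOf[Genotype],
      job.getConfiguration
    ).map(_._2)                                // keep only the record values

    println(s"read ${genotypes.count()} genotypes")
    sc.stop()
  }
}

If the writer and reader genuinely disagree on the schema, pinning the read schema only papers over the mismatch; rewriting the file with, or reading it from, matching ADAM/bdg-formats versions is the more robust resolution.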