
Solutions on the web

via GitHub by lokm01, 1 year ago
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 123.0 failed 1 times, most recent failure: Lost task 1.0 in stage 123.0 (TID 131, localhost): java.lang.ArrayIndexOutOfBoundsException: 65536
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.hbCreateDecodeTables(CBZip2InputStream.java:666)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.createHuffmanDecodingTables(CBZip2InputStream.java:793)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.recvDecodingTables(CBZip2InputStream.java:765)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.getAndMoveToFrontDecode(CBZip2InputStream.java:801)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.initBlock(CBZip2InputStream.java:504)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.changeStateToProcessABlock(CBZip2InputStream.java:333)
at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.read(CBZip2InputStream.java:399)
at org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionInputStream.read(BZip2Codec.java:483)
at com.databricks.spark.xml.XmlRecordReader.readUntilEndElement(XmlInputFormat.scala:194)
at com.databricks.spark.xml.XmlRecordReader.next(XmlInputFormat.scala:143)
at com.databricks.spark.xml.XmlRecordReader.nextKeyValue(XmlInputFormat.scala:128)
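The frames above point at spark-xml's `XmlRecordReader` reading through Hadoop's `CBZip2InputStream`, i.e. a bzip2-compressed XML file read via the spark-xml data source. A minimal sketch of the kind of read that can produce a trace like this is below — the application name, row tag, and input path are illustrative assumptions, not taken from the original report:

```scala
import org.apache.spark.sql.SparkSession

object ReadBz2Xml {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bz2-xml-read")   // illustrative name
      .master("local[*]")
      .getOrCreate()

    // Reading a .bz2 XML file through spark-xml routes decompression
    // through Hadoop's CBZip2InputStream — the class throwing the
    // ArrayIndexOutOfBoundsException in the trace above.
    val df = spark.read
      .format("com.databricks.spark.xml")
      .option("rowTag", "record")          // "record" is a placeholder tag
      .load("data/input.xml.bz2")          // placeholder path

    df.show()
    spark.stop()
  }
}
```

The exception surfaces while the decoder builds its Huffman decoding tables (`hbCreateDecodeTables`), so it is a symptom of the bzip2 input-stream handling rather than of the XML parsing itself; the same read against the uncompressed file, or the file recompressed as a single-stream bzip2 archive, is a reasonable first isolation step.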
