org.apache.spark.SparkException

There are no available Samebug tips for this exception.

  • GitHub comment 171#248970021, via GitHub by lokm01
    • org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 123.0 failed 1 times, most recent failure: Lost task 1.0 in stage 123.0 (TID 131, localhost): java.lang.ArrayIndexOutOfBoundsException: 65536
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.hbCreateDecodeTables(CBZip2InputStream.java:666)
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.createHuffmanDecodingTables(CBZip2InputStream.java:793)
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.recvDecodingTables(CBZip2InputStream.java:765)
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.getAndMoveToFrontDecode(CBZip2InputStream.java:801)
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.initBlock(CBZip2InputStream.java:504)
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.changeStateToProcessABlock(CBZip2InputStream.java:333)
          at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.read(CBZip2InputStream.java:399)
          at org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionInputStream.read(BZip2Codec.java:483)
          at org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionInputStream.read(BZip2Codec.java:502)
          at com.databricks.spark.xml.XmlRecordReader.readUntilEndElement(XmlInputFormat.scala:194)
          at com.databricks.spark.xml.XmlRecordReader.next(XmlInputFormat.scala:143)
          at com.databricks.spark.xml.XmlRecordReader.nextKeyValue(XmlInputFormat.scala:128)
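Possible tip (not from the original report): the ArrayIndexOutOfBoundsException thrown from CBZip2InputStream.hbCreateDecodeTables means Hadoop's bzip2 decompressor failed while building its Huffman decode tables as spark-xml's XmlRecordReader was reading a .bz2-compressed XML split. In older Hadoop releases CBZip2InputStream is reportedly not thread-safe, so this symptom tends to surface when several tasks in the same executor JVM decompress bzip2 input concurrently. The sketch below shows the kind of read that exercises this code path; the Spark/spark-xml versions, the input path data/books.xml.bz2, and the rowTag value are assumptions for illustration, not details taken from the report.

    // Minimal sketch of a read that follows the code path in the trace above.
    // Assumptions (not from the original report): Spark 2.x with the spark-xml
    // package on the classpath, a local master, and a hypothetical bzip2-compressed
    // input file "data/books.xml.bz2" whose records are <book> elements.
    import org.apache.spark.sql.SparkSession

    object XmlBz2ReadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("xml-bz2-read-sketch")
          .master("local[*]") // several cores => several concurrent bzip2 readers in one JVM
          .getOrCreate()

        // spark-xml's XmlRecordReader pulls bytes through Hadoop's BZip2Codec /
        // CBZip2InputStream when the input is .bz2, which is where the
        // ArrayIndexOutOfBoundsException in the trace above originates.
        val df = spark.read
          .format("com.databricks.spark.xml")
          .option("rowTag", "book") // hypothetical row tag
          .load("data/books.xml.bz2")

        df.show()
        spark.stop()
      }
    }

If the failure disappears when the job runs with a single core per executor (or a single local core), that points at concurrent bzip2 decompression inside the Hadoop codec rather than at spark-xml itself.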