java.io.IOException: Block 5922357248 is not available in Alluxio

Google Groups | Chanh Le | 7 months ago
Root Cause Analysis

java.io.IOException: Block 5922357248 is not available in Alluxio
	at alluxio.client.block.AlluxioBlockStore.getInStream(AlluxioBlockStore.java:115)
	at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:508)
	at alluxio.client.file.FileInStream.updateStreams(FileInStream.java:415)
	at alluxio.client.file.FileInStream.close(FileInStream.java:147)
	at alluxio.hadoop.HdfsFileInputStream.close(HdfsFileInputStream.java:115)
	at java.io.FilterInputStream.close(FilterInputStream.java:181)
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432)
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
	at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:157)
	at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
	at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:180)
	at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
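The trace shows the missing-block error surfacing as a plain java.io.IOException while the Spark executor closes the Alluxio stream during Parquet footer reading. One pragmatic client-side mitigation is to retry the read, since a fresh attempt lets the Alluxio client re-fetch the block from the under storage. A minimal sketch of such a wrapper — the BlockRetry helper and its message-matching check are illustrative assumptions here, not part of Alluxio's API:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical retry wrapper for reads that can fail with
// "Block <id> is not available in Alluxio". On a new attempt the
// Alluxio client can re-load the block from the under store (UFS),
// so a bounded retry often succeeds.
public class BlockRetry {
    public static <T> T withRetry(Callable<T> read, int maxAttempts) throws Exception {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return read.call();
            } catch (IOException e) {
                // The missing-block condition arrives as a plain IOException,
                // so we can only match on the message (fragile, but illustrative).
                if (e.getMessage() != null
                        && e.getMessage().contains("is not available in Alluxio")) {
                    last = e;
                    continue;
                }
                throw e; // unrelated I/O failure: do not retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated read that fails once with the Alluxio error, then succeeds.
        final int[] calls = {0};
        String result = withRetry(() -> {
            if (calls[0]++ == 0) {
                throw new IOException("Block 5922357248 is not available in Alluxio");
            }
            return "footer-bytes";
        }, 3);
        System.out.println(result); // prints "footer-bytes"
    }
}
```

Operationally, this error usually means the block was evicted from the Alluxio workers (or the worker holding it became unreachable) between metadata lookup and read; pinning hot files in Alluxio or reading with the CACHE_PROMOTE read type can reduce how often it recurs.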