java.io.IOException: Block 5922357248 is not available in Alluxio

Google Groups | Chanh Le | 6 months ago
  1. Download the winutils.exe for your Hadoop version (https://github.com/steveloughran/winutils) and save it to HADOOP_HOME/bin; a sketch follows this list.
  2. Alluxio Fuse Connector
     Google Groups | 4 months ago | Amran Chen
     java.io.IOException: No available Alluxio worker found
  3. Re: How could I make sure the famous "xceiver" parameters works in the data node?
     hbase-user | 6 years ago | Stanley Xu
     java.io.IOException: Block blk_2440422069461309270_3925117 is not valid.
  4. Check for bad records in the input data (like '(null)'); a sketch follows this list.
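
For the winutils tip in item 1, here is a minimal sketch of wiring it up from Java on Windows. The C:\hadoop path is a placeholder, not from the original thread; on Windows, Hadoop's Shell utilities resolve bin\winutils.exe from the hadoop.home.dir system property, falling back to the HADOOP_HOME environment variable.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HadoopHomeSetup {
        public static void main(String[] args) throws Exception {
            // Placeholder path: winutils.exe must sit in C:\hadoop\bin.
            // Setting the property here has the same effect as exporting
            // HADOOP_HOME before launching the JVM.
            System.setProperty("hadoop.home.dir", "C:\\hadoop");

            // Any Hadoop client call made after this point can find winutils.exe.
            FileSystem fs = FileSystem.get(new Configuration());
            System.out.println("Default filesystem: " + fs.getUri());
        }
    }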

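For the bad-records tip in item 4, a minimal sketch using Spark's Java API (1.x era, matching the SqlNewHadoopRDD frames below); the input path and the '(null)' marker are illustrative assumptions, not details from the original thread.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class FilterBadRecords {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("FilterBadRecords");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Placeholder Alluxio URI; adjust the master host/port and path.
            JavaRDD<String> lines = sc.textFile("alluxio://master:19998/data/input.csv");

            // Drop empty lines and lines carrying the literal "(null)" marker
            // before they reach downstream Parquet/ETL stages.
            JavaRDD<String> clean = lines.filter(
                line -> line != null && !line.isEmpty() && !line.contains("(null)"));

            clean.saveAsTextFile("alluxio://master:19998/data/cleaned");
            sc.stop();
        }
    }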

    Root Cause Analysis

    java.io.IOException: Block 5922357248 is not available in Alluxio
        at alluxio.client.block.AlluxioBlockStore.getInStream(AlluxioBlockStore.java:115)
        at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:508)
        at alluxio.client.file.FileInStream.updateStreams(FileInStream.java:415)
        at alluxio.client.file.FileInStream.close(FileInStream.java:147)
        at alluxio.hadoop.HdfsFileInputStream.close(HdfsFileInputStream.java:115)
        at java.io.FilterInputStream.close(FilterInputStream.java:181)
        at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432)
        at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
        at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:157)
        at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
        at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:180)
        at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
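
The frames above show the failure path: Spark's task initializes a ParquetRecordReader, the footer read closes its alluxio.hadoop.HdfsFileInputStream, and during that close AlluxioBlockStore.getInStream finds no worker holding block 5922357248. As a standalone check outside Spark, here is a minimal sketch, assuming an Alluxio 1.x client on the classpath and a placeholder file path, that re-reads the file through the native Alluxio API with the CACHE_PROMOTE read type so blocks are pulled back into Alluxio storage as they are read:

    import alluxio.AlluxioURI;
    import alluxio.client.ReadType;
    import alluxio.client.file.FileInStream;
    import alluxio.client.file.FileSystem;
    import alluxio.client.file.options.OpenFileOptions;

    public class AlluxioBlockCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.Factory.get();
            // Placeholder path: point this at the Parquet part file
            // from the failing job.
            AlluxioURI uri = new AlluxioURI("/data/part-00000.parquet");

            OpenFileOptions options = OpenFileOptions.defaults()
                    .setReadType(ReadType.CACHE_PROMOTE);

            // Drain the stream so every block must be fetched; a block that is
            // gone from the workers and was never persisted to the under store
            // will raise the same "Block ... is not available in Alluxio" error.
            try (FileInStream in = fs.openFile(uri, options)) {
                byte[] buf = new byte[64 * 1024];
                long total = 0;
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
                System.out.println("Read " + total + " bytes; all blocks reachable.");
            }
        }
    }

If this check fails the same way, the block likely exists only in Alluxio metadata: the file was written with a non-persisting write type (e.g. MUST_CACHE) and the cached copy was evicted or lost with a worker. Re-loading the file from the under store, or re-writing it with CACHE_THROUGH so it is persisted, is the commonly suggested remedy.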