java.io.IOException: BlockIndex 0 is out of the bound in file ClientFileInfo(id:29, name:rdd_1_1, path:/tmp_rdd/spark-c220e135-34cb-43ec-81fa-bac305ef4a46/2/spark-tachyon-20150923205718-fae3/16/rdd_1_1, ufsPath:, length:0, blockSizeByte:1073741824, creationTimeMs:1443013038744, isComplete:true, isFolder:false, isPinned:false, isCache:true, blockIds:[], dependencyId:-1, inMemoryPercentage:100)

Google Groups | 1 year ago | Unknown author
  1. Error integrating Tachyon with Spark

    Google Groups | 1 year ago | Unknown author
    java.io.IOException: BlockIndex 0 is out of the bound in file ClientFileInfo(id:29, name:rdd_1_1, path:/tmp_rdd/spark-c220e135-34cb-43ec-81fa-bac305ef4a46/2/spark-tachyon-20150923205718-fae3/16/rdd_1_1, ufsPath:, length:0, blockSizeByte:1073741824, creationTimeMs:1443013038744, isComplete:true, isFolder:false, isPinned:false, isCache:true, blockIds:[], dependencyId:-1, inMemoryPercentage:100)
  2. Error integrating Tachyon with Spark

    Google Groups | 1 year ago | sal mon
    java.io.IOException: BlockIndex 0 is out of the bound in file ClientFileInfo(id:29, name:rdd_1_1, path:/tmp_rdd/spark-c220e135-34cb-43ec-81fa-bac305ef4a46/2/spark-tachyon-20150923205718-fae3/16/rdd_1_1, ufsPath:, length:0, blockSizeByte:1073741824, creationTimeMs:1443013038744, isComplete:true, isFolder:false, isPinned:false, isCache:true, blockIds:[], dependencyId:-1, inMemoryPercentage:100)
  3. How can I connect Spark to the Tachyon master? (a configuration sketch follows this list)

    Google Groups | 1 year ago | june Choi
    java.io.IOException: BlockIndex 0 is out of the bound in file ClientFileInfo(id:22, name:rdd_1_0, path:/tmp_spark_tachyon/spark-b379d09e-27b3-43d6-9588-24dfdd217ad9/1/spark-tachyon-20150717045632-1607/15/rdd_1_0, ufsPath:, length:0, blockSizeByte:1073741824, creationTimeMs:1437076596789, isComplete:true, isFolder:false, isPinned:false, isCache:true, blockIds:[], dependencyId:-1, inMemoryPercentage:100)
  4. Must a live TachyonWorker exist on the node on which an application is running?

    Google Groups | 2 years ago | shibo
    java.io.IOException: BlockIndex 0 is out of the bound in file ClientFileInfo(id:41, name:BasicFile_TRY_CACHE, path:/BasicFile_TRY_CACHE, checkpointPath:, length:0, blockSizeByte:67108864, creationTimeMs:1404440448903, complete:true, folder:false, inMemory:true, needPin:false, needCache:true, blockIds:[], dependencyId:-1, inMemoryPercentage:100)
  5. apache-spark org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120

    Stack Overflow | 10 months ago | Miren
    java.io.IOException: Failed to create directory /srv/spark/work/app-20160218102438-0000/0)] in 2 attempts org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
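
    Every Tachyon-related report above comes from the same setup: a Spark 1.x application configured to cache RDD blocks off-heap in Tachyon, which is then read back through TachyonStore. The Scala sketch below shows that configuration, assuming the pre-1.6 property names spark.tachyonStore.url and spark.tachyonStore.baseDir; the master address and base directory are placeholders, not values taken from these reports.

      // Sketch only: persist an RDD off-heap in Tachyon with Spark 1.x.
      // Assumes a Tachyon master at tachyon://localhost:19998 (placeholder)
      // and the pre-1.6 properties spark.tachyonStore.url / spark.tachyonStore.baseDir.
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.storage.StorageLevel

      object TachyonOffHeapExample {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf()
            .setAppName("tachyon-offheap-example")
            .set("spark.tachyonStore.url", "tachyon://localhost:19998") // Tachyon master address
            .set("spark.tachyonStore.baseDir", "/tmp_spark_tachyon")    // base directory inside Tachyon
          val sc = new SparkContext(conf) // Spark master is supplied via spark-submit

          // OFF_HEAP routes cached blocks through TachyonStore, the code path
          // that appears in the stack traces above.
          val rdd = sc.parallelize(1 to 1000000).persist(StorageLevel.OFF_HEAP)
          println(rdd.count()) // first action writes the blocks to Tachyon
          println(rdd.count()) // second action reads them back via TachyonStore.getBytes

          sc.stop()
        }
      }

    The exceptions above are thrown on that read-back path, when the file registered in Tachyon for the cached block turns out to be empty.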

    Root Cause Analysis

    1. java.io.IOException

      BlockIndex 0 is out of the bound in file ClientFileInfo(id:29, name:rdd_1_1, path:/tmp_rdd/spark-c220e135-34cb-43ec-81fa-bac305ef4a46/2/spark-tachyon-20150923205718-fae3/16/rdd_1_1, ufsPath:, length:0, blockSizeByte:1073741824, creationTimeMs:1443013038744, isComplete:true, isFolder:false, isPinned:false, isCache:true, blockIds:[], dependencyId:-1, inMemoryPercentage:100)

      at tachyon.client.TachyonFS.getClientBlockInfo()
    2. Tachyon Project Core
      TachyonFile.getLocationHosts
      1. tachyon.client.TachyonFS.getClientBlockInfo(TachyonFS.java:785)
      2. tachyon.client.TachyonFile.getLocationHosts(TachyonFile.java:172)
      2 frames
    3. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.storage.TachyonStore.getBytes(TachyonStore.scala:105)
      2. org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:499)
      3. org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:431)
      4. org.apache.spark.storage.BlockManager.get(BlockManager.scala:617)
      5. org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:44)
      6. org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
      7. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
      8. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
      9. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
      10. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
      11. org.apache.spark.scheduler.Task.run(Task.scala:64)
      12. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
      12 frames
    4. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
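
    In every ClientFileInfo above, length is 0 and blockIds is empty, so the file registered in Tachyon has no blocks at all and any block index, including 0, falls outside the valid range; that is what TachyonFS.getClientBlockInfo reports. The Scala sketch below only illustrates the shape of such a bound check on a zero-length file; it is not Tachyon source, and FileInfo, numberOfBlocks and getClientBlockInfo here are hypothetical stand-ins.

      // Illustrative sketch only (not Tachyon source): the shape of the bound check
      // behind "BlockIndex N is out of the bound in file ...". A file whose length
      // is 0 has zero blocks, so even index 0 is rejected.
      import java.io.IOException

      object BlockIndexBoundSketch {
        // Hypothetical stand-in for the file metadata shown in the traces above.
        case class FileInfo(name: String, length: Long, blockSizeByte: Long, blockIds: Seq[Long]) {
          // ceil(length / blockSizeByte); 0 for an empty file
          def numberOfBlocks: Long =
            if (length == 0) 0 else (length - 1) / blockSizeByte + 1
        }

        def getClientBlockInfo(file: FileInfo, blockIndex: Int): Long = {
          if (blockIndex < 0 || blockIndex >= file.numberOfBlocks) {
            throw new IOException(s"BlockIndex $blockIndex is out of the bound in file $file")
          }
          file.blockIds(blockIndex)
        }

        def main(args: Array[String]): Unit = {
          // rdd_1_1 was registered with length 0 and no block ids, so asking for block 0 throws.
          val empty = FileInfo("rdd_1_1", length = 0, blockSizeByte = 1073741824L, blockIds = Nil)
          try getClientBlockInfo(empty, 0)
          catch { case e: IOException => println(e.getMessage) }
        }
      }

    The sketch only shows why a zero-length file cannot satisfy any block lookup; it says nothing about why the cached block ended up empty in Tachyon in the first place.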