java.io.IOException: Filesystem closed

spark-user | Akhil Das | 2 years ago
Related thread: Re: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
spark-user | 2 years ago | Akhil Das
java.io.IOException: Filesystem closed

Root Cause Analysis

java.io.IOException: Filesystem closed
	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
	at java.io.DataInputStream.read(DataInputStream.java:83)
	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:209)
	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.Task.run(Task.scala:51)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
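A note on the trace: "Filesystem closed" in a Spark task reading HDFS is commonly attributed to Hadoop's FileSystem cache. `FileSystem.get()` returns a shared, cached instance per URI scheme and user, so if any task or shutdown hook calls `close()` on it, every other task holding the same instance fails with this error on its next read. This is not stated in the thread itself; it is offered here as a frequently suggested diagnosis, and the sketch below shows the workaround that usually accompanies it: disabling the cache so each consumer gets its own `FileSystem` instance.

```scala
// Sketch of a commonly suggested workaround (an assumption, not the
// confirmed fix from this thread): disable the shared HDFS FileSystem
// cache. Spark forwards "spark.hadoop."-prefixed keys into the Hadoop
// Configuration, and fs.hdfs.impl.disable.cache is a standard Hadoop key.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("example")  // hypothetical app name for illustration
  .set("spark.hadoop.fs.hdfs.impl.disable.cache", "true")
```

The trade-off is that each task then opens its own DFSClient, which costs some connection setup, but no task can invalidate another task's handle by closing it.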