java.io.IOException: Filesystem closed

Apache's JIRA Issue Tracker | Thomas Graves | 3 years ago
  1.

    When running Spark on YARN and accessing an HDFS file (as in the SparkHdfsLR example), with event logging configured to write its logs to HDFS, an exception is thrown at the end of the application (a reproduction sketch follows this list):

        SPARK_JAVA_OPTS=-Dspark.eventLog.enabled=true -Dspark.eventLog.dir=hdfs:///history/spark/

        14/04/03 13:41:31 INFO yarn.ApplicationMaster$$anon$1: Invoking sc stop from shutdown hook
        Exception in thread "Thread-41" java.io.IOException: Filesystem closed
            at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:398)
            at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1465)
            at org.apache.hadoop.hdfs.DFSOutputStream.sync(DFSOutputStream.java:1450)
            at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:116)
            at org.apache.spark.util.FileLogger$$anonfun$flush$2.apply(FileLogger.scala:137)
            at org.apache.spark.util.FileLogger$$anonfun$flush$2.apply(FileLogger.scala:137)
            at scala.Option.foreach(Option.scala:236)
            at org.apache.spark.util.FileLogger.flush(FileLogger.scala:137)
            at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:69)
            at org.apache.spark.scheduler.EventLoggingListener.onApplicationEnd(EventLoggingListener.scala:101)
            at org.apache.spark.scheduler.SparkListenerBus$$anonfun$postToAll$13.apply(SparkListenerBus.scala:67)
            at org.apache.spark.scheduler.SparkListenerBus$$anonfun$postToAll$13.apply(SparkListenerBus.scala:67)
            at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
            at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
            at org.apache.spark.scheduler.SparkListenerBus$class.postToAll(SparkListenerBus.scala:67)
            at org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:31)
            at org.apache.spark.scheduler.LiveListenerBus.post(LiveListenerBus.scala:78)
            at org.apache.spark.SparkContext.postApplicationEnd(SparkContext.scala:1081)
            at org.apache.spark.SparkContext.stop(SparkContext.scala:828)
            at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:460)

    Apache's JIRA Issue Tracker | 3 years ago | Thomas Graves
    java.io.IOException: Filesystem closed
  2.

    Why is My Spark Job Failing? by Sandy Ryza of Cloudera

    slideshare.net | 1 year ago
    java.io.IOException: Filesystem closed
  3.

    Spark HistoryServer not coming up

    Stack Overflow | 2 years ago
    java.io.IOException: Filesystem closed
  4.

    Unresponsive master in Hbase 0.90.0

    Google Groups | 6 years ago | Vidhyashankar Venkataraman
    java.io.IOException: Filesystem closed
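
    The report in the first item boils down to the event log flush arriving after the HDFS client has already been shut down. Below is a minimal sketch of the setup that reproduces it, written against the SparkContext API instead of SPARK_JAVA_OPTS; the application name, input path, and job body are illustrative assumptions, not taken from the original report. The explicit sc.stop() in the finally block is a commonly used mitigation (stop the context before the YARN shutdown hook fires, so the event log can still flush), not the upstream fix.

        import org.apache.spark.{SparkConf, SparkContext}

        object EventLogToHdfs {
          def main(args: Array[String]): Unit = {
            // Same settings as the SPARK_JAVA_OPTS flags in the report, set programmatically.
            val conf = new SparkConf()
              .setAppName("SparkHdfsLR")
              .set("spark.eventLog.enabled", "true")
              .set("spark.eventLog.dir", "hdfs:///history/spark/")

            val sc = new SparkContext(conf)
            try {
              // Hypothetical job body: read an HDFS file, as the SparkHdfsLR example does.
              sc.textFile("hdfs:///path/to/lr_data.txt").count()
            } finally {
              // Stopping the context explicitly, before the YARN shutdown hook runs,
              // lets EventLoggingListener flush while the underlying HDFS client is still open.
              sc.stop()
            }
          }
        }

    Another workaround sometimes suggested for "Filesystem closed" errors is setting spark.hadoop.fs.hdfs.impl.disable.cache=true so the event logger is not handed the shared, cached FileSystem instance; whether that applies here depends on the Spark and Hadoop versions involved.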

    Root Cause Analysis

    1. java.io.IOException
       Filesystem closed
       at org.apache.hadoop.hdfs.DFSClient.checkOpen()
    2. Apache Hadoop HDFS
      DFSOutputStream.sync
      1. org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:398)
      2. org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1465)
      3. org.apache.hadoop.hdfs.DFSOutputStream.sync(DFSOutputStream.java:1450)
      3 frames
    3. Hadoop
      FSDataOutputStream.sync
      1. org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:116)
      1 frame
    4. Spark
      FileLogger$$anonfun$flush$2.apply
      1. org.apache.spark.util.FileLogger$$anonfun$flush$2.apply(FileLogger.scala:137)
      2. org.apache.spark.util.FileLogger$$anonfun$flush$2.apply(FileLogger.scala:137)
      2 frames
    5. Scala
      Option.foreach
      1. scala.Option.foreach(Option.scala:236)
      1 frame
    6. Spark
      SparkListenerBus$$anonfun$postToAll$13.apply
      1. org.apache.spark.util.FileLogger.flush(FileLogger.scala:137)
      2. org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:69)
      3. org.apache.spark.scheduler.EventLoggingListener.onApplicationEnd(EventLoggingListener.scala:101)
      4. org.apache.spark.scheduler.SparkListenerBus$$anonfun$postToAll$13.apply(SparkListenerBus.scala:67)
      5. org.apache.spark.scheduler.SparkListenerBus$$anonfun$postToAll$13.apply(SparkListenerBus.scala:67)
      5 frames
    7. Scala
      ArrayBuffer.foreach
      1. scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      2. scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      2 frames
    8. Spark
      SparkContext.stop
      1. org.apache.spark.scheduler.SparkListenerBus$class.postToAll(SparkListenerBus.scala:67)
      2. org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:31)
      3. org.apache.spark.scheduler.LiveListenerBus.post(LiveListenerBus.scala:78)
      4. org.apache.spark.SparkContext.postApplicationEnd(SparkContext.scala:1081)
      5. org.apache.spark.SparkContext.stop(SparkContext.scala:828)
      5 frames
    9. Spark Project YARN Stable API
      ApplicationMaster$$anon$1.run
      1. org.apache.spark.deploy.yarn.ApplicationMaster$$anon$1.run(ApplicationMaster.scala:460)
      1 frame
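
    The chain above ends at DFSClient.checkOpen, which throws because the DFS client is already closed by the time the application-end event is flushed. A likely contributing factor (an inference, not something stated in the trace) is that Hadoop's FileSystem.get returns a cached instance shared across callers, so a close() issued by one shutdown hook invalidates the handle FileLogger is still writing through. A minimal sketch of that sharing, using a hypothetical NameNode address:

        import java.net.URI
        import org.apache.hadoop.conf.Configuration
        import org.apache.hadoop.fs.FileSystem

        object FsCacheDemo {
          def main(args: Array[String]): Unit = {
            val conf = new Configuration()
            val uri  = URI.create("hdfs://namenode:8020/")  // hypothetical cluster address

            // With caching enabled (the default), both calls return the same object.
            val a = FileSystem.get(uri, conf)
            val b = FileSystem.get(uri, conf)
            assert(a eq b)

            // Closing the instance through either reference closes it for every holder;
            // a later hflush()/sync() through b then fails with "Filesystem closed".
            a.close()
          }
        }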