java.nio.channels.ClosedChannelException

Server Fault | Bacon | 1 year ago
  1. datacenter - HDFS performances on apache spark - Server Fault
     serverfault.com | 1 year ago
     java.nio.channels.ClosedChannelException
  2. HDFS performances on apache spark
     Server Fault | 1 year ago | Bacon
     java.nio.channels.ClosedChannelException
  3. HDFS performances + unexpected death of executors.
     spark-user | 1 year ago | maxdml
     java.nio.channels.ClosedChannelException
  4. Executor lost failure when running avocado-submit?
     Google Groups | 1 year ago | Jaeki Hong
     java.nio.channels.ClosedChannelException
  5. oozie java action issue
     hadoop-common-user | 9 months ago | Immanuel Fredrick
     java.nio.channels.ClosedChannelException

    Root Cause Analysis

    1. java.nio.channels.ClosedChannelException

      No message provided

      at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed()
    2. Apache Hadoop HDFS
      DFSOutputStream.checkClosed
      1. org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1528)
      1 frame
    3. Hadoop
      FSDataOutputStream$PositionCache.write
      1. org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:98)
      2. org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
      2 frames
    4. Java RT
      DataOutputStream.write
      1. java.io.DataOutputStream.write(DataOutputStream.java:107)
      1 frame
    5. Hadoop
      TextOutputFormat$LineRecordWriter.write
      1. org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:81)
      2. org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:102)
      2 frames
    6. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.SparkHadoopWriter.write(SparkHadoopWriter.scala:95)
      2. org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1110)
      3. org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
      4. org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
      5. org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1285)
      6. org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
      7. org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
      8. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
      9. org.apache.spark.scheduler.Task.run(Task.scala:70)
      10. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      10 frames
    7. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
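
    The frames above show the exception surfacing while a Spark task writes its output to HDFS: PairRDDFunctions.saveAsHadoopDataset drives SparkHadoopWriter and TextOutputFormat$LineRecordWriter, which write through FSDataOutputStream onto a DFSOutputStream, and DFSOutputStream.checkClosed throws ClosedChannelException because the underlying HDFS stream has already been closed, typically after the executor or its write pipeline is lost mid-write. Below is a minimal sketch of a job that exercises this same write path; the application name, word-count logic, and HDFS paths are hypothetical and not taken from any of the reports above.

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch, not from the reports above. saveAsTextFile goes through
    // PairRDDFunctions.saveAsHadoopDataset, SparkHadoopWriter and
    // TextOutputFormat$LineRecordWriter, ending on an HDFS DFSOutputStream.
    // If that stream is closed underneath the running task (executor lost,
    // container killed, write pipeline torn down), DFSOutputStream.checkClosed
    // throws java.nio.channels.ClosedChannelException with the frames listed above.
    object HdfsWriteSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("hdfs-write-sketch") // hypothetical app name
        val sc = new SparkContext(conf)
        try {
          // Hypothetical HDFS paths; substitute real input and output locations.
          sc.textFile("hdfs:///tmp/wordcount/input")
            .flatMap(_.split("\\s+"))
            .map(word => (word, 1))
            .reduceByKey(_ + _)
            .map { case (word, count) => s"$word\t$count" }
            .saveAsTextFile("hdfs:///tmp/wordcount/output") // write path seen in the trace
        } finally {
          sc.stop()
        }
      }
    }

    Under these assumptions, any task still calling write() after its output stream is closed fails with exactly this trace, which is consistent with the related reports above pairing the exception with executor loss.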