java.io.IOException: DFSOutputStream is closed
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.sync(DFSClient.java:3669)
	at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:97)
	at org.apache.flume.sink.hdfs.HDFSCompressedDataStream.sync(HDFSCompressedDataStream.java:96)
	at org.apache.flume.sink.hdfs.BucketWriter.doFlush(BucketWriter.java:345)
	at org.apache.flume.sink.hdfs.BucketWriter.access$500(BucketWriter.java:53)
	at org.apache.flume.sink.hdfs.BucketWriter$4.run(BucketWriter.java:310)
	at org.apache.flume.sink.hdfs.BucketWriter$4.run(BucketWriter.java:308)
	at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
	at org.apache.flume.sink.hdfs.BucketWriter.flush(BucketWriter.java:308)
	at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:257)
	at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
	at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
	at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

flume-user | Ashish Tadose | 4 years ago
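The trace shows the sink appending to a BucketWriter whose underlying DFSOutputStream has already been closed (append triggers close, close triggers flush, and the flush's sync fails). One common mitigation is to bound each writer's lifetime and let the sink retry a failed close instead of wedging. A minimal sketch of the relevant HDFS sink settings, assuming a Flume 1.4+ agent; the agent name `a1` and sink name `k1` are placeholders:

```properties
# Roll files on a time bound so a stuck writer is abandoned, not reused
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.fileType = CompressedStream
a1.sinks.k1.hdfs.codeC = gzip
a1.sinks.k1.hdfs.rollInterval = 300
# Fail HDFS calls quickly rather than blocking the sink thread
a1.sinks.k1.hdfs.callTimeout = 30000
# Retry closing a file instead of leaving it half-closed (Flume 1.4+)
a1.sinks.k1.hdfs.closeTries = 3
a1.sinks.k1.hdfs.retryInterval = 180
```

These values are illustrative starting points, not tuned recommendations; the right roll interval depends on event volume and the NameNode's tolerance for small files.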
Here are the best solutions we found on the Internet.
  1. 0

    Re: HDFS Sink stops writing events because HDFSWriter failed to append and close a file

    apache.org | 2 years ago
    java.io.IOException: DFSOutputStream is closed (same stack trace as above)
  2. 0

    Re: HDFS Sink stops writing events because HDFSWriter failed to append and close a file

    flume-user | 4 years ago | Ashish Tadose
    java.io.IOException: DFSOutputStream is closed (same stack trace as above)
  3. 0

    Re: Aggregator problem

    hama-user | 3 years ago | Edward J. Yoon
    java.io.IOException: can't find class: corever2.MessageCore
	at org.apache.hadoop.io.AbstractMapWritable.readFields(AbstractMapWritable.java:204)
	at org.apache.hadoop.io.MapWritable.readFields(MapWritable.java:146)
	at org.apache.hama.graph.GraphJobMessage.readFields(GraphJobMessage.java:129)
	at org.apache.hama.bsp.BSPMessageBundle$1.next(BSPMessageBundle.java:108)
	at org.apache.hama.bsp.BSPMessageBundle$1.next(BSPMessageBundle.java:78)
	at org.apache.hama.bsp.LocalBSPRunner$LocalMessageManager.transfer(LocalBSPRunner.java:356)
	at org.apache.hama.bsp.BSPPeerImpl.sync(BSPPeerImpl.java:381)
	at org.apache.hama.graph.GraphJobRunner.bsp(GraphJobRunner.java:133)
	at org.apache.hama.bsp.LocalBSPRunner$BSPRunner.run(LocalBSPRunner.java:258)
	at org.apache.hama.bsp.LocalBSPRunner$BSPRunner.call(LocalBSPRunner.java:288)
	at org.apache.hama.bsp.LocalBSPRunner$BSPRunner.call(LocalBSPRunner.java:212)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
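    The "can't find class" message comes from AbstractMapWritable.readFields, which resolves the class name it reads from the serialized stream via reflection; if the jar containing corever2.MessageCore is not on the BSP worker's classpath, the lookup fails and Hadoop rethrows it as an IOException. A self-contained sketch of that failure mode (ClassLoadDemo is illustrative, not Hama or Hadoop code):

```java
public class ClassLoadDemo {

    // Conceptually mirrors AbstractMapWritable.readFields: resolve a class
    // by the name that was stored in the serialized stream.
    static String tryLoad(String className) {
        try {
            Class.forName(className);
            return "loaded: " + className;
        } catch (ClassNotFoundException e) {
            // Hadoop wraps this as IOException("can't find class: " + className)
            return "can't find class: " + className;
        }
    }

    public static void main(String[] args) {
        System.out.println(tryLoad("java.lang.String"));     // present on every classpath
        System.out.println(tryLoad("corever2.MessageCore")); // absent -> the reported error
    }
}
```

    The fix is therefore a deployment one: the jar containing corever2.MessageCore must be on the classpath of every Hama task JVM, not just the client that submits the job.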
  4. 0

    Re: Aggregator problem

    hama-user | 3 years ago | ikapoura@csd.auth.gr
    java.io.IOException: can't find class: corever2.MessageCore (same stack trace as above)
  5. 0

    Re: Aggregator problem

    hama-user | 3 years ago | Edward J. Yoon
    java.io.IOException: can't find class: corever2.MessageCore (same stack trace as above)



Root Cause Analysis

  1. java.io.IOException

    DFSOutputStream is closed (same stack trace as at the top of this page)

    at java.util.concurrent.FutureTask.run()
  2. Java RT
    FutureTask.run
    1. java.util.concurrent.FutureTask.run(FutureTask.java:138)
    1 frame