java.io.IOException

There are no Samebug tips available for this exception yet. Do you have an idea how to solve this issue? A short tip would help users who saw this issue last week.

  • Hey, I've seen that this error has been around for some time, and I hope this description is complete and will be helpful in reproducing and fixing it.

    System: a logback.xml with a file appender with prudent=true. The log path should point to a volume with little available space.

    Scenario: start writing to the log file. As soon as the space is depleted, errors start happening:

        15:44:40,595 |-ERROR in c.q.l.c.recovery.ResilientFileOutputStream@1944673755 - IO failure while writing to file [/Volumes/TESTVOL/logs/my-log.2015-02-01.log]
        java.io.IOException: No space left on device
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:345)
            at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
            at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
            at ch.qos.logback.core.recovery.ResilientOutputStreamBase.flush(ResilientOutputStreamBase.java:79)
            [...]
        15:44:51,064 |-INFO in c.q.l.c.recovery.ResilientFileOutputStream@1944673755 - Attempting to recover from IO failure on file [/Volumes/TESTVOL/logs/my-log.2015-02-01.log]
        15:44:51,064 |-INFO in c.q.l.c.recovery.ResilientFileOutputStream@1944673755 - Recovered from IO failure on file [/Volumes/TESTVOL/logs/my-log.2015-02-01.log]
        15:44:51,064 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[MY_LOG] - IO failure in appender
        java.nio.channels.ClosedChannelException
            at sun.nio.ch.FileLockImpl.release(FileLockImpl.java:58)

    and then:

        15:44:51,069 |-WARN in ch.qos.logback.core.rolling.RollingFileAppender[MY_LOG] - Attempted to append to non started appender [MY_LOG].

    Debugging: I have investigated this issue and found the culprit to be line 204 in FileAppender:

        finally {
            if (fileLock != null) {
        --->    fileLock.release();
            }
        [...]

    The problem is that when the original IOException was thrown, the channel was closed as part of the attemptRecovery method in ResilientOutputStreamBase. release() throws a ClosedChannelException if the file channel is closed. The appender is then set to started=false in the OutputStreamAppender subAppend method and stays that way until restarted.

    Fix suggestion: the easy fix here is changing the guard of the release (see the sketch after this entry):

        finally {
            if (fileLock != null && fileChannel.isOpen()) {
                fileLock.release();
            }
        [...]

    This prevents the release from throwing the exception. For now, an easy mitigation (if possible) is to set prudent=false. Hope this helps and that the bug will be fixed.
    by Nadav Wexler
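    A minimal, self-contained sketch of the guarded-release pattern the report suggests (illustrative JDK-level code, not the actual logback patch; the class and file names are made up):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        public class GuardedReleaseDemo {
            public static void main(String[] args) throws IOException {
                FileOutputStream fos = new FileOutputStream("demo.log", true);
                FileChannel fileChannel = fos.getChannel();
                FileLock fileLock = null;
                try {
                    fileLock = fileChannel.lock();
                    fos.write("hello\n".getBytes());
                    // Simulate what attemptRecovery does after an IO failure:
                    // the underlying channel gets closed out from under us.
                    fileChannel.close();
                } finally {
                    // The suggested guard: releasing a lock on a closed channel
                    // throws ClosedChannelException, so check isOpen() first.
                    if (fileLock != null && fileChannel.isOpen()) {
                        fileLock.release();
                    }
                }
            }
        }

    Run as-is, the program exits cleanly; remove the fileChannel.isOpen() check and release() throws ClosedChannelException, which is exactly what knocks the appender into the non-started state above. The prudent=false mitigation is just setting <prudent>false</prudent> (or omitting the element, since false is logback's default) on the file appender in logback.xml, which avoids the file-locking path entirely.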
  • The disk that ZooKeeper was using filled up. During a snapshot write, I got the following exception:

        2013-01-16 03:11:14,098 - ERROR [SyncThread:0:SyncRequestProcessor@151] - Severe unrecoverable error, exiting
        java.io.IOException: No space left on device
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:282)
            at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
            at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
            at org.apache.zookeeper.server.persistence.FileTxnLog.commit(FileTxnLog.java:309)
            at org.apache.zookeeper.server.persistence.FileTxnSnapLog.commit(FileTxnSnapLog.java:306)
            at org.apache.zookeeper.server.ZKDatabase.commit(ZKDatabase.java:484)
            at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:162)
            at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:101)

    Then many subsequent exceptions like:

        2013-01-16 15:02:23,984 - ERROR [main:Util@239] - Last transaction was partial.
        2013-01-16 15:02:23,985 - ERROR [main:ZooKeeperServerMain@63] - Unexpected exception, exiting abnormally
        java.io.EOFException
            at java.io.DataInputStream.readInt(DataInputStream.java:375)
            at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
            at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
            at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558)
            at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577)
            at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
            at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
            at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:529)
            at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:504)
            at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:341)
            at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:130)
            at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
            at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:259)
            at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:386)
            at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:138)
            at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112)
            at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
            at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
            at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
            at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)

    It seems to me that writing the transaction log should be fully atomic to avoid such situations. Is this not the case? (A small illustration of the read-back failure follows this entry.)
    by David Arthur
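    To make the second trace concrete: DataInputStream.readInt (the frame under FileHeader.deserialize) throws EOFException whenever fewer than four bytes remain in the stream, so a transaction log truncated mid-record by a full disk fails exactly this way at startup. A hedged, self-contained illustration of that JDK behavior (not ZooKeeper code):

        import java.io.ByteArrayInputStream;
        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.IOException;

        public class TruncatedLogDemo {
            public static void main(String[] args) throws IOException {
                // Two bytes where a 4-byte int header should be -- the shape a
                // partially flushed log record takes when the disk fills mid-write.
                byte[] truncated = {0x00, 0x2A};
                try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(truncated))) {
                    in.readInt(); // needs 4 bytes, only 2 remain
                } catch (EOFException e) {
                    System.out.println("EOFException, as in FileHeader.deserialize");
                }
            }
        }

    Recovery in this situation is usually a matter of freeing disk space and removing or truncating the partial log file, rather than expecting the original write to have been atomic.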
  • Sort out logging in tests
    via GitHub by jroper
  • GitHub comment 14#53468394
    via GitHub by Arasthel
    • java.io.IOException: No space left on device
          at java.io.FileOutputStream.writeBytes(Native Method)
          at java.io.FileOutputStream.write(FileOutputStream.java:345)
          at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
          at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
          at ch.qos.logback.core.recovery.ResilientOutputStreamBase.flush(ResilientOutputStreamBase.java:79)
          [...]

    Users with the same issue

    abrazeneb (1 time)
    Unknown visitor (1 time)
    batwalrus76 (1 time)
    Nikolay Rybak (1 time)
    rexgreenza (24 times)
    36 more bugmates