java.io.IOException: Disk quota exceeded

Apache's JIRA Issue Tracker | Ramnatthan Alagappan | 4 months ago
  1.

    The ZooKeeper cluster completely stalls, with *no* transactions making progress, when the current *leader* encounters a storage-related error (such as *ENOSPC, EDQUOT, EIO*). Surprisingly, in some circumstances the same errors cause the node to crash outright, which lets another node in the cluster become the leader and keep transactions moving. Interestingly, if the same errors are encountered while initializing a new log file, the current leader enters a strange state (it does not crash) in which it still believes it is the leader and therefore does not allow any other node to take over. *This causes the entire cluster to freeze.*

    Here is the stack trace of the leader:

    ------------------------------------------------
    2016-07-11 15:42:27,502 [myid:3] - INFO [SyncThread:3:FileTxnLog@199] - Creating new log file: log.200000001
    2016-07-11 15:42:27,505 [myid:3] - ERROR [SyncThread:3:ZooKeeperCriticalThread@49] - Severe unrecoverable error, from thread : SyncThread:3
    java.io.IOException: Disk quota exceeded
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:345)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at org.apache.zookeeper.server.persistence.FileTxnLog.append(FileTxnLog.java:211)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.append(FileTxnSnapLog.java:314)
        at org.apache.zookeeper.server.ZKDatabase.append(ZKDatabase.java:476)
        at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:140)
    ------------------------------------------------

    From the trace and the code, the problem appears to occur only when a new log file is being initialized, and only when the error strikes in one of two places:

    1. During the append of the *log header*.
    2. During the *padding of zero bytes to the end of the log*.

    If similar errors happen while writing any other block of data, the node simply crashes, allowing another node to be elected leader. These two writes in a newly created log file are special because they take a different error-recovery code path: the node does not crash; instead, certain threads are killed while the thread holding the quorum apparently stays up, preventing any other node from becoming the new leader. The other nodes therefore see nothing wrong with the leader, yet the cluster becomes unavailable for any subsequent operations such as reads and writes. (A sketch of the two write paths involved follows after this entry.)

    Apache's JIRA Issue Tracker | 4 months ago | Ramnatthan Alagappan
    java.io.IOException: Disk quota exceeded
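
    As an illustration of the two write paths described above, here is a minimal, hypothetical Java sketch. It is not ZooKeeper's actual FileTxnLog code; the class name, method, header bytes, and PREALLOC_SIZE below are made up for illustration. It only shows where appending a header and padding zero bytes on a freshly created log file can surface ENOSPC/EDQUOT as java.io.IOException:

    ------------------------------------------------
    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Hypothetical sketch only -- NOT ZooKeeper's FileTxnLog implementation.
    // It shows the two writes performed while a new log file is initialized:
    // (1) appending the log header and (2) padding zero bytes to the end of
    // the file. Either flush can throw "Disk quota exceeded" or
    // "No space left on device" when the volume is full.
    public class TxnLogInitSketch {

        // Assumed padding size for this sketch.
        private static final int PREALLOC_SIZE = 64 * 1024 * 1024;

        public static void initLogFile(File logFile) throws IOException {
            try (BufferedOutputStream out =
                     new BufferedOutputStream(new FileOutputStream(logFile))) {

                // 1. Append the log header.
                byte[] header = "ZKLG".getBytes();   // placeholder header bytes
                out.write(header);
                out.flush();                         // <-- IOException can surface here

                // 2. Pad zero bytes to the end of the log (preallocation).
                byte[] zeros = new byte[4096];
                for (int written = 0; written < PREALLOC_SIZE; written += zeros.length) {
                    out.write(zeros);
                }
                out.flush();                         // <-- or here
            }
        }

        public static void main(String[] args) throws IOException {
            initLogFile(new File("log.200000001"));
        }
    }
    ------------------------------------------------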
  2.

    AppScale startup hangs when there is no disk space left

    GitHub | 4 years ago | jovanchohan
    java.io.IOException: No space left on device
  3.

    The disk that ZooKeeper was using filled up. During a snapshot write, I got the following exception:

    2013-01-16 03:11:14,098 - ERROR [SyncThread:0:SyncRequestProcessor@151] - Severe unrecoverable error, exiting
    java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:282)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
        at org.apache.zookeeper.server.persistence.FileTxnLog.commit(FileTxnLog.java:309)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.commit(FileTxnSnapLog.java:306)
        at org.apache.zookeeper.server.ZKDatabase.commit(ZKDatabase.java:484)
        at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:162)
        at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:101)

    Then many subsequent exceptions like:

    2013-01-16 15:02:23,984 - ERROR [main:Util@239] - Last transaction was partial.
    2013-01-16 15:02:23,985 - ERROR [main:ZooKeeperServerMain@63] - Unexpected exception, exiting abnormally
    java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
        at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:529)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:504)
        at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:341)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:130)
        at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
        at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:259)
        at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:386)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:138)
        at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112)
        at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
        at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)

    It seems to me that writing the transaction log should be fully atomic to avoid such situations. Is this not the case? (A sketch of a reader that tolerates such a truncated final record follows after this entry.)

    Apache's JIRA Issue Tracker | 4 years ago | David Arthur
    java.io.IOException: No space left on device
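
    To illustrate the atomicity question raised above, here is a minimal, hypothetical Java sketch. It is not ZooKeeper's FileTxnLog/Util recovery code; the class name and record format are made up. It shows a reader over a length-prefixed log that treats a truncated final record as end-of-log instead of aborting, which is roughly the situation the "Last transaction was partial" error refers to:

    ------------------------------------------------
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch only -- NOT ZooKeeper's recovery logic.
    // Each record is assumed to be a 4-byte length prefix followed by a payload.
    // If the disk fills mid-write, the last record is partial; reading it
    // raises EOFException, which this reader treats as the end of the log.
    public class TruncatedLogReaderSketch {

        public static List<byte[]> readRecords(String path) throws IOException {
            List<byte[]> records = new ArrayList<>();
            try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
                while (true) {
                    int len;
                    try {
                        len = in.readInt();      // record length prefix
                    } catch (EOFException endOfLog) {
                        break;                   // clean end of file
                    }
                    byte[] payload = new byte[len];
                    try {
                        in.readFully(payload);   // a partial write ends here
                    } catch (EOFException truncated) {
                        // Last transaction was partial: stop reading rather
                        // than aborting startup with an unhandled exception.
                        break;
                    }
                    records.add(payload);
                }
            }
            return records;
        }
    }
    ------------------------------------------------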

Root Cause Analysis

  1. java.io.IOException

    Disk quota exceeded

    at java.io.FileOutputStream.writeBytes()
  2. Java RT
    BufferedOutputStream.flush
    1. java.io.FileOutputStream.writeBytes(Native Method)
    2. java.io.FileOutputStream.write(FileOutputStream.java:345)
    3. java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    4. java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    4 frames
  3. Zookeeper
    SyncRequestProcessor.run
    1. org.apache.zookeeper.server.persistence.FileTxnLog.append(FileTxnLog.java:211)
    2. org.apache.zookeeper.server.persistence.FileTxnSnapLog.append(FileTxnSnapLog.java:314)
    3. org.apache.zookeeper.server.ZKDatabase.append(ZKDatabase.java:476)
    4. org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:140)
    4 frames