javax.jms.JMSException


  • When the persistent store runs out of space, the broker starts receiving exceptions ("java.io.IOException: No space left on device"). Unless configured to shut down, these errors continue to occur, halting the system as both clients and servers receive these exceptions. After a restart, the store is corrupted, and re-building the index does not correct the problem.

    *Test Case Procedure*:
    1. Create a partition with a small amount of space; I used a 50mb one on an external hard drive.
    2. Update / add the following lines to activemq.xml:
    {code}
    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="localhost"
            dataDirectory="/Volumes/Stuff2/data"
            destroyApplicationContextOnStop="true">
      ....
      <persistenceAdapter>
        <!--kahaDB directory="${activemq.base}/Volumes/Stuff2/data/kahadb"/-->
        <kahaDB directory="/Volumes/Stuff2/data/kahadb"
                journalMaxFileLength="30mb"
                ignoreMissingJournalfiles="true"
                checksumJournalFiles="true"
                checkForCorruptJournalFiles="true" />
      </persistenceAdapter>
    {code}
       These extra attributes were added to help correct the database corruption after the failure.
    3. Update the log4j file to point to your external hard drive; it just uses up the remaining space faster.
    4. Start activemq.
    5. Using the shipped examples, run a producer:
    {code}
    ant -Ddurable=true -Dmax=100000 producer
    {code}
    6. Wait until you run out of space or, if you are impatient like me, copy junk files to the hard drive to speed up the process.
    7. The broker will start printing out-of-space messages:
    {code}
    INFO  | Slow KahaDB access: Journal append took: 1208 ms, Index update took 1 ms
    ERROR | KahaDB failed to store to Journal
    java.io.IOException: No space left on device
        at java.io.RandomAccessFile.setLength(Native Method)
        at org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:342)
        at org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:227)
    INFO  | Ignoring no space left exception, java.io.IOException: No space left on device
    java.io.IOException: No space left on device
        at java.io.RandomAccessFile.setLength(Native Method)
        at org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:342)
        at org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:227)
    INFO  | Slow KahaDB access: cleanup took 1159
    {code}
    8. The client will eventually fail as well:
    {code}
    [java] Sending message: Message: 23874 sent at: Wed Dec 29 15:40:43 GMT 20...
    [java] javax.jms.JMSException: No space left on device
    [java]     at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
    [java]     at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1273)
    [java]     at org.apache.activemq.ActiveMQSession.send(ActiveMQSession.java:1755)
    [java]     at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:231)
    [java]     at org.apache.activemq.ActiveMQMessageProducerSupport.send(ActiveMQMessageProducerSupport.java:241)
    [java]     at ProducerTool.sendLoop(Unknown Source)
    [java]     at ProducerTool.run(Unknown Source)
    [java]     at ProducerTool.main(Unknown Source)
    [java] Caused by: java.io.IOException: No space left on device
    [java]     at java.io.RandomAccessFile.setLength(Native Method)
    [java]     at org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:342)
    [java]     at org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:227)
    [java] Caught: javax.jms.JMSException: No space left on device
    {code}
       I then tried to read message 23874 but could not:
    {code}
    ./bin/activemq-admin browse --amqurl tcp://localhost:61616 --msgsel JMSMessageID="'*:23874'" TEST.FOO
    {code}
    9. To reproduce the reported error, delete the localhost/tmp_storage file in an attempt to free up space.
    10. Shut down the broker and try to restart it. I receive the same error as the customer; however, even if you don't delete tmp_storage, or you run one of the later versions where tmp_storage is not used, you still have startup issues. Please see the attached file "SJ_BrokerRestartFailureLog".

    *Other notes*:
    * Before the broker is shut down, clients can consume messages, but these often fail with out-of-space messages. Once the broker is stopped, it is unable to restart.
    * Rebuilding the index files fails with the error noted in the attached log, even if additional space is freed up.
    * It seems one of the journals is always 0 bytes:
    {code}
    -rw-r--r--  1 sjavurek  staff  31458423 Dec 30 18:39 db-1.log
    -rw-r--r--  1 sjavurek  staff         0 Dec 30 18:39 db-2.log
    -rw-r--r--  1 sjavurek  staff   1789952 Dec 30 19:18 db.data
    -rw-r--r--  1 sjavurek  staff   2376208 Dec 30 19:18 db.redo
    {code}
    To recover from this error, stop the broker and delete the index files (db.data and db.redo) as well as the zero-sized journal file (db-2.log in the example above). Before restarting the broker, ensure that additional space has been added or freed equal to the size of the next journal file; the broker will need this space when it tries to allocate db-2.log. The journalMaxFileLength setting determines how much additional space is required. (A broker-side configuration sketch follows this item.)
    by Susan Javurek
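    A common mitigation for the scenario above is to cap the broker's store and temp usage below the size of the partition that holds the KahaDB directory, so producers are throttled or failed fast before the disk itself fills up. The following is a minimal embedded-broker sketch using ActiveMQ's BrokerService API, mirroring the activemq.xml settings from the test case; the class name, the 35mb/10mb limits, and the use of sendFailIfNoSpace are illustrative assumptions rather than part of the original report.
    {code}
    import java.io.File;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
    import org.apache.activemq.usage.SystemUsage;

    // Sketch: embedded broker whose persistent store is capped well below the
    // size of the 50mb test partition, so the raw "No space left on device"
    // path is never reached. Paths and limits are illustrative assumptions.
    public class CappedStoreBroker {

        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("localhost");
            broker.setDataDirectory("/Volumes/Stuff2/data");

            // KahaDB tuned roughly like the activemq.xml in the test case.
            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            kahaDB.setDirectory(new File("/Volumes/Stuff2/data/kahadb"));
            kahaDB.setJournalMaxFileLength(30 * 1024 * 1024);   // 30mb journal files
            kahaDB.setChecksumJournalFiles(true);
            kahaDB.setCheckForCorruptJournalFiles(true);
            broker.setPersistenceAdapter(kahaDB);

            // Cap store and temp usage below the partition size, leaving headroom
            // for the next journal file to be pre-allocated (assumed values).
            SystemUsage usage = broker.getSystemUsage();
            usage.getStoreUsage().setLimit(35 * 1024 * 1024);
            usage.getTempUsage().setLimit(10 * 1024 * 1024);
            // Fail producers with a JMSException instead of blocking them once
            // the limits are reached.
            usage.setSendFailIfNoSpace(true);

            broker.addConnector("tcp://localhost:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }
    {code}
    With limits like these the broker still refuses sends when space runs low, but it does so before the journal hits a raw "No space left on device" error, which is what leads to the corrupted index described above.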
  • Allocate Big file
    via Stack Overflow by BIV1991
  • detailed reply
    by mhe123
  • open « error « Java I/O Q&A
    by Unknown author
    • javax.jms.JMSException: No space left on device
          at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:49)
          at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1273)
          at org.apache.activemq.ActiveMQSession.send(ActiveMQSession.java:1755)
          at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:231)
          at org.apache.activemq.ActiveMQMessageProducerSupport.send(ActiveMQMessageProducerSupport.java:241)
          at ProducerTool.sendLoop(Unknown Source)
          at ProducerTool.run(Unknown Source)
          at ProducerTool.main(Unknown Source)
      Caused by: java.io.IOException: No space left on device
          at java.io.RandomAccessFile.setLength(Native Method)
          at org.apache.kahadb.journal.DataFileAppender.processQueue(DataFileAppender.java:342)
          at org.apache.kahadb.journal.DataFileAppender$2.run(DataFileAppender.java:227)
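    The trace above comes from the shipped ProducerTool example, which simply propagates the failure. As a rough client-side illustration (not the original tool), a producer can inspect the cause of the JMSException and stop sending once the broker reports an out-of-space condition instead of retrying against a full store; the class name and the decision to break out of the send loop are assumptions.
    {code}
    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    import org.apache.activemq.ActiveMQConnectionFactory;

    // Sketch: treat "No space left on device" as fatal rather than retrying.
    public class SpaceAwareProducer {

        public static void main(String[] args) throws Exception {
            Connection connection =
                    new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("TEST.FOO");
                MessageProducer producer = session.createProducer(queue);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);

                for (int i = 0; i < 100000; i++) {
                    try {
                        producer.send(session.createTextMessage("Message: " + i));
                    } catch (JMSException e) {
                        // The broker wraps the underlying java.io.IOException; stop
                        // producing when it reports an out-of-space condition.
                        Throwable cause = e.getCause() != null ? e.getCause() : e.getLinkedException();
                        if (cause != null
                                && String.valueOf(cause.getMessage()).contains("No space left on device")) {
                            System.err.println("Broker store is full, stopping after message " + i);
                            break;
                        }
                        throw e;   // anything else is unexpected here
                    }
                }
            } finally {
                connection.close();
            }
        }
    }
    {code}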
