java.lang.OutOfMemoryError


  • When sending a fragmented binary message (payload of length 4 * 2**20 (4M), sent out in fragments of 64), the server throws {{java.lang.OutOfMemoryError: Direct buffer memory}} [1]. The maximum direct buffer memory by default depends on the heap size set by -Xmx, which in EAP 7.0.0.DR4 defaults to -Xmx512m. Increasing it only delays the point at which the limit is hit (sending those messages a few more times reaches it again); a reproducer sketch follows this entry. I believe the issue is similar to the one reported for EAP 6.4: https://bugzilla.redhat.com/show_bug.cgi?id=1223708

    [1]
{noformat}
15:10:55,463 ERROR [org.xnio.listener] (default I/O-1) XNIO001007: A channel event listener threw an exception: java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:658)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
	at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57)
	at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55)
	at org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:143)
	at io.undertow.websockets.core.BufferedBinaryMessage$1.handleEvent(BufferedBinaryMessage.java:106)
	at io.undertow.websockets.core.BufferedBinaryMessage$1.handleEvent(BufferedBinaryMessage.java:97)
	at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
	at io.undertow.server.protocol.framed.AbstractFramedStreamSourceChannel$1.run(AbstractFramedStreamSourceChannel.java:264)
	at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:560)
	at org.xnio.nio.WorkerThread.run(WorkerThread.java:462)
{noformat}
    by Radim Hatlapatka
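    A reproducer matching the numbers above could look roughly like the following JSR-356 client sketch. This is a minimal sketch, assuming a WebSocket endpoint at ws://localhost:8080/echo and an empty @ClientEndpoint stub; neither is taken from the original report. Note that on HotSpot the direct-memory cap can also be set independently of the heap with -XX:MaxDirectMemorySize, but as the reporter observes, raising the limit only postpones the failure if buffers are never released.
{code}
import java.net.URI;
import java.nio.ByteBuffer;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

// Hypothetical reproducer: streams a 4 MB binary message in 64-byte fragments.
// The endpoint URI is an assumption for illustration only.
public class FragmentedSendReproducer {

    @ClientEndpoint
    public static class EndpointStub { }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try (Session session = container.connectToServer(
                EndpointStub.class, URI.create("ws://localhost:8080/echo"))) {
            int total = 4 * 1024 * 1024;  // 4 * 2**20, as in the report
            int fragment = 64;            // fragment size from the report
            ByteBuffer chunk = ByteBuffer.allocate(fragment);
            for (int sent = 0; sent < total; sent += fragment) {
                boolean last = sent + fragment >= total;
                chunk.rewind();
                // isLast=false keeps the message open, so the server keeps
                // buffering partial frames until the final fragment arrives
                session.getBasicRemote().sendBinary(chunk, last);
            }
        }
    }
}
{code}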
  • It is a VPS instance with 1024 MB of memory, running Jenkins. Jenkins starts the integration tests, and the issue appears exactly after the 16th connect-disconnect cycle in the same JVM (a possible connection leak? see the lifecycle sketch after this entry); the 16th connect to the cluster always throws the exception above. I have tested all versions of the DataStax driver (successful test cycles / total test cycles):

    2.1.3: ok (4/4)
    2.1.4: ok (4/4)
    2.1.5: ok (4/4)
    2.1.6: failed (0/4)
    2.1.7: failed (0/4)
    2.1.8: failed (0/4)
    2.2.0-rc1: failed (0/4)
    2.2.0-rc2: failed (0/4)
    2.2.0-rc3: failed (0/4)
    3.0.0-alpha5: failed (0/4)

    The stack trace in getErrors:
{code}
com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Error writing
	at com.datastax.driver.core.Connection$10.operationComplete(Connection.java:554)
	at com.datastax.driver.core.Connection$10.operationComplete(Connection.java:538)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
	at io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:621)
	at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138)
	at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
	at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
	at com.datastax.driver.core.Connection$Flusher.run(Connection.java:875)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
	at java.lang.Thread.run(Thread.java:745)
Caused by: io.netty.handler.codec.EncoderException: java.lang.OutOfMemoryError: Direct buffer memory
	at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
	at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
	at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:284)
	at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:622)
	at io.netty.channel.DefaultChannelPipeline.write(DefaultChannelPipeline.java:939)
	at io.netty.channel.AbstractChannel.write(AbstractChannel.java:234)
	... 5 more
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:658)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:645)
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:229)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:205)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:133)
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:271)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146)
	at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:83)
	at com.datastax.driver.core.Message$ProtocolEncoder.encode(Message.java:288)
	at com.datastax.driver.core.Message$ProtocolEncoder.encode(Message.java:257)
	at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
	... 15 more
{code}
    I tried raising the log level and found a possible root cause: the following warning is thrown after every disconnect from the Cassandra cluster (and is logged again, verbatim, at 21:08:59,961):
{code}
2015-11-22 21:08:59,954 WARN [io.netty.util.ThreadDeathWatcher] (threadDeathWatcher-2-1) Thread death watcher task raised an exception: java.lang.NoClassDefFoundError: io/netty/util/Recycler$WeakOrderQueue
	at io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:150) [netty-common-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.util.Recycler.recycle(Recycler.java:111) [netty-common-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry.recycle(PoolThreadCache.java:459) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:442) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:414) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:406) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:263) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:254) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache.free0(PoolThreadCache.java:235) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache.access$000(PoolThreadCache.java:38) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.buffer.PoolThreadCache$1.run(PoolThreadCache.java:64) [netty-buffer-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.util.ThreadDeathWatcher$Watcher.notifyWatchees(ThreadDeathWatcher.java:195) [netty-common-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:130) [netty-common-4.0.33.Final.jar:4.0.33.Final]
	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [netty-common-4.0.33.Final.jar:4.0.33.Final]
	at java.lang.Thread.run(Thread.java:744) [rt.jar:1.8.0]
Caused by: java.lang.ClassNotFoundException: io.netty.util.Recycler$WeakOrderQueue from [Module "deployment.gacivs-backend-dao-services-0.0.27-SNAPSHOT.war:main" from Service Module Loader]
	at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:213) [jboss-modules-1.3.3.Final.jar:1.3.3.Final]
	at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:459) [jboss-modules-1.3.3.Final.jar:1.3.3.Final]
	at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:408) [jboss-modules-1.3.3.Final.jar:1.3.3.Final]
	at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:389) [jboss-modules-1.3.3.Final.jar:1.3.3.Final]
	at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:134) [jboss-modules-1.3.3.Final.jar:1.3.3.Final]
	... 15 more
{code}
    P.S.: the "Netty upgrade to 4.x" issue (https://datastax-oss.atlassian.net/browse/JAVA-622) was resolved in version 2.1.6, and the "OutOfMemoryError: Direct buffer memory" first appeared in version 2.1.6...
    by Gábor AUTH
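    Given the connect-disconnect pattern described above, one way to rule out a lifecycle leak is to scope each test cycle to a single Cluster that is closed deterministically. This is a minimal sketch, assuming DataStax Java driver 2.1+ (where Cluster and Session implement Closeable) and a local contact point; the query is illustrative only.
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Sketch: one Cluster per test cycle, closed via try-with-resources so the
// driver's Netty event loops and pooled direct buffers are released before
// the next cycle starts, instead of accumulating across 16+ cycles.
public class CycleSafeConnect {
    public static void runCycle() {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .build();
             Session session = cluster.connect()) {
            session.execute("SELECT release_version FROM system.local");
        } // Cluster.close() tears down all connections and I/O threads
    }
}
{code}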
  • Eclipse Community Forums, by an unknown author; the same direct-buffer allocation failure (a monitoring sketch follows this entry):
{noformat}
java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:658)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
	at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57)
	at org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55)
	at org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:143)
	at io.undertow.websockets.core.BufferedBinaryMessage$1.handleEvent(BufferedBinaryMessage.java:106)
	at io.undertow.websockets.core.BufferedBinaryMessage$1.handleEvent(BufferedBinaryMessage.java:97)
	at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
	at io.undertow.server.protocol.framed.AbstractFramedStreamSourceChannel$1.run(AbstractFramedStreamSourceChannel.java:264)
	at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:560)
	at org.xnio.nio.WorkerThread.run(WorkerThread.java:462)
{noformat}
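    For any of the reports above, it helps to watch direct buffer usage while reproducing. This is a small sketch using only the standard java.lang.management API (JDK 7+); nothing in it is specific to the original reports.
{code}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

// Prints the JVM's buffer pool usage. The "direct" pool's used figure is what
// counts against the direct-memory limit (-XX:MaxDirectMemorySize, which on
// HotSpot defaults to roughly the -Xmx value when not set explicitly).
public class DirectBufferStats {
    public static void main(String[] args) {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
{code}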
