Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Apache's JIRA Issue Tracker by Enis Soztutar, 1 year ago
region: IntegrationTestLoadAndVerify,yC^P\xD7\x945\xD4,1363388517630.24655343d8d356ef708732f34cfe8946.
via Google Groups by Ramkrishna S Vasudevan, 2 years ago
region: guard_tb_20111226,Szzzzzzzzzzz,1322710771589.89644afd8e35f06181706e874747129e.
via hbase-user by Ramkrishna S Vasudevan, 9 months ago
region: guard_tb_20111226,Szzzzzzzzzzz,1322710771589.89644afd8e35f06181706e874747129e.
via Google Groups by Tianwei, 2 years ago
region: test_table,alex yost,1329368982672.f55282926b9e8652f7bb0b98616a8216.
via Google Groups by jia.li, 2 years ago
region: lbc_zte_1_imei_index,3333332A,1376364729049.4469e6b0500bf3f5ed0ac1247d249537.
via grokbase.com by Unknown author, 1 year ago
java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:632)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
	at org.apache.hadoop.hdfs.util.DirectBufferPool.getBuffer(DirectBufferPool.java:70)
	at org.apache.hadoop.hdfs.BlockReaderLocal.<init>(BlockReaderLocal.java:315)
	at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:208)
	at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:790)
	at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:888)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:455)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:689)
	at java.io.DataInputStream.readFully(DataInputStream.java:178)
	at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:312)
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:543)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:589)
	at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1261)
	at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:512)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:603)
	at org.apache.hadoop.hbase.regionserver.Store.validateStoreFile(Store.java:1568)
	at org.apache.hadoop.hbase.regionserver.Store.commitFile(Store.java:845)
	at org.apache.hadoop.hbase.regionserver.Store.access$500(Store.java:109)
	at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.commit(Store.java:2209)
	at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1541)
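
The trace shows the region server running out of off-heap (direct buffer) memory while HDFS short-circuit local reads allocate buffers from DirectBufferPool during a memstore flush. As a rough illustration only (not taken from the original report), the sketch below reproduces the same "Direct buffer memory" OutOfMemoryError by repeatedly calling ByteBuffer.allocateDirect under a small -XX:MaxDirectMemorySize cap; the class name DirectBufferExhaustion and the 16m limit are made up for the example.

// Minimal sketch: exhaust the JVM's direct-memory budget the same way
// DirectBufferPool.getBuffer does, via ByteBuffer.allocateDirect.
// Run with a small cap, e.g.: java -XX:MaxDirectMemorySize=16m DirectBufferExhaustion
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferExhaustion {
    public static void main(String[] args) {
        List<ByteBuffer> held = new ArrayList<>(); // keep references so the buffers stay reserved
        try {
            while (true) {
                // Each call reserves native memory through java.nio.Bits.reserveMemory,
                // the frame at the top of the stack trace above.
                held.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MiB per buffer
            }
        } catch (OutOfMemoryError e) {
            // Prints "java.lang.OutOfMemoryError: Direct buffer memory"
            // once MaxDirectMemorySize is exhausted.
            System.err.println("Allocated " + held.size() + " MiB before: " + e);
        }
    }
}

In the HBase case the same condition is usually addressed by giving the region server a larger direct-memory budget or reducing off-heap buffer demand, rather than by changing application code.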