
Recommended solutions based on your search

Solutions on the web

via apache.org by Unknown author, 1 year ago
HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing …
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
connect to ZooKeeper but the connection closes immediately. This could
be a sign that the server has too many connections (30 is the
default). Consider inspecting your ZK server logs for that error and
then make sure you are reusing HBaseConfiguration as often as you can.
See HTable's javadoc for more information.
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:90)
        at org.apache.gora.hbase.store.HBaseStore.initialize(HBaseStore.java:108)
        at org.apache.gora.store.impl.DataStoreBase.readFields(DataStoreBase.java:181)
        at org.apache.gora.query.impl.QueryBase.readFields(QueryBase.java:222)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
        at org.apache.gora.util.IOUtils.deserialize(IOUtils.java:217)
        at org.apache.gora.util.IOUtils.deserialize(IOUtils.java:237)
        at org.apache.gora.query.impl.PartitionQueryImpl.readFields(PartitionQueryImpl.java:141)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
        at org.apache.gora.util.IOUtils.deserialize(IOUtils.java:217)
        at org.apache.gora.util.IOUtils.deserialize(IOUtils.java:237)
        at org.apache.gora.mapreduce.GoraInputSplit.readFields(GoraInputSplit.java:76)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
        at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
        at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:728)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
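
The fix the exception message recommends is to reuse a single HBaseConfiguration across table handles instead of creating a fresh one for every HTable or HBaseAdmin, since the client keys its cached ZooKeeper connection on the Configuration object. A minimal sketch against the same-era client API shown in the trace (the table, family and qualifier names here are made up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SharedConfigExample {

        // One Configuration shared by every HTable; the connection manager
        // caches its ZooKeeper session against this object, so reusing it
        // avoids opening a new ZK connection per table handle.
        private static final Configuration CONF = HBaseConfiguration.create();

        public static void main(String[] args) throws Exception {
            // Hypothetical table and column names, for illustration only.
            HTable table = new HTable(CONF, "webpage");
            try {
                Put put = new Put(Bytes.toBytes("row1"));
                put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
                table.put(put);
            } finally {
                table.close();
            }
        }
    }

If the client already shares one Configuration, the other lever is the connection cap the message refers to: inspect the ZooKeeper server logs for "too many connections" and, assuming an HBase-managed quorum, consider raising hbase.zookeeper.property.maxClientCnxns in hbase-site.xml (or maxClientCnxns in zoo.cfg for an external quorum) above the old default of 30.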