java.io.IOException: hconnection-0x47bf79d7 closed

Apache's JIRA Issue Tracker | Lydia Ickler | 1 year ago
  1. 0

    If I fill a default table (create 'test-table', 'someCf') with the HBaseWriteExample.java program from the HBase addon library, then a table without a start/end key is created. Reading the data works fine with HBaseReadExample.java. However, if I manually create a "test-table" that is distributed over more than one region server (create 'test-table2', 'someCf', {NUMREGIONS => 3, SPLITALGO => 'HexStringSplit'}), the run is canceled with the following error message:
{noformat}
grips2 Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Fri Aug 07 11:18:29 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:18:38 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:18:48 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:18:58 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:19:08 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:19:18 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:19:28 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:19:38 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:19:48 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:19:58 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:20:18 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:20:38 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:20:58 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:21:19 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:21:39 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:21:59 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:22:19 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:22:39 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:22:59 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:23:19 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:23:39 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:24:00 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:24:20 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:24:40 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:25:00 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:25:20 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:25:40 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:26:00 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:26:20 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:26:40 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:27:00 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:27:20 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:27:40 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:28:01 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
Fri Aug 07 11:28:21 CEST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@28961f68, java.io.IOException: hconnection-0x47bf79d7 closed
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:131)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:91)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:284)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:189)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:184)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:110)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:775)
    at org.apache.flink.addons.hbase.TableInputFormat.open(TableInputFormat.java:153)
    at org.apache.flink.addons.hbase.TableInputFormat.open(TableInputFormat.java:48)
    at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:151)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: hconnection-0x47bf79d7 closed
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1812)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:100)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isTableDisabled(HConnectionManager.java:980)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:1133)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:958)
    at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
    at org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:125)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    ... 11 more
{noformat}

    Apache's JIRA Issue Tracker | 1 year ago | Lydia Ickler
    java.io.IOException: hconnection-0x47bf79d7 closed
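
    For orientation, here is a minimal sketch of the kind of Flink batch job the report above describes: reading an HBase table through the flink-hbase addon's TableInputFormat, in the spirit of HBaseReadExample. The method names follow that addon's API of the time and may differ in other versions, and the column qualifier "q" is a hypothetical placeholder; treat this as an illustration, not the reporter's exact program.
{code}
// Sketch of a Flink batch read of an HBase table via the flink-hbase addon.
// "test-table2" and family "someCf" come from the report above; qualifier "q"
// is a placeholder.
import org.apache.flink.addons.hbase.TableInputFormat;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadPreSplitTable {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<Tuple2<String, String>> rows = env.createInput(
        new TableInputFormat<Tuple2<String, String>>() {
          @Override
          protected Scan getScanner() {
            // Full-table scan; on a pre-split table this yields one input
            // split per region, each opened by TableInputFormat.open().
            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("someCf"));
            return scan;
          }

          @Override
          protected String getTableName() {
            return "test-table2";
          }

          @Override
          protected Tuple2<String, String> mapResultToTuple(Result r) {
            return new Tuple2<>(
                Bytes.toString(r.getRow()),
                Bytes.toString(r.getValue(Bytes.toBytes("someCf"), Bytes.toBytes("q"))));
          }
        });

    rows.print();
  }
}
{code}
    With a pre-split table each region becomes its own input split, so TableInputFormat.open() runs once per split on the task managers, which is exactly where the "hconnection closed" exception surfaces in the trace above.
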
  2. 0

    DataXceiver java.io.InterruptedIOException error on scanning HBase table

    Google Groups | 3 years ago | AnushaGuntaka
    java.io.IOException: Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs://172.20.193.234:9000/assortmentLinking/performance_weekly_sku/16b9d994146958a8ab66c709d077a3c7/cf/4df17d21166640298824c9680ffb5bf3, compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=SKU125600STORE3938WEEK13/cf:facings/1397825072820/Put, lastKey=SKU126145STORE3971WEEK22/cf:week_id/1397848697370/Put, avgKeyLen=53, avgValueLen=3, entries=155950935, length=10173377648, cur=null] to key SKU125600STORE3938WEEK13/cf:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/ts=0
  3. 0

    An internal python client has been getting the stack trace below since HBASE-13437:
{code}
2015-09-30 11:27:31,670 runner ERROR : scheduler executor error
2015-09-30 11:27:31,674 runner ERROR : Traceback (most recent call last):
  File "/opt/cops/cops-related-ticket-info-fetcher/fetcher/.virtenv/lib/python2.6/site-packages/CopsRtiFetcher-0.1-py2.6.egg/cops_rti/fetcher/runner.py", line 82, in run
    fetch_list = self.__scheduler_executor.run()
  File "/opt/cops/cops-related-ticket-info-fetcher/fetcher/.virtenv/lib/python2.6/site-packages/CopsRtiFetcher-0.1-py2.6.egg/cops_rti/fetcher/scheduler.py", line 35, in run
    with self.__fetch_db_dao.get_scanner() as scanner:
  File "/opt/cops/cops-related-ticket-info-fetcher/fetcher/.virtenv/lib/python2.6/site-packages/CopsHbaseCommon-f796bf2929be11c26536c3e8f3e9c0b0ecb382b3-py2.6.egg/cops/hbase/common/hbase_dao.py", line 57, in get_scanner
    caching=caching, field_filter_list=field_filter_list)
  File "/opt/cops/cops-related-ticket-info-fetcher/fetcher/.virtenv/lib/python2.6/site-packages/CopsHbaseCommon-f796bf2929be11c26536c3e8f3e9c0b0ecb382b3-py2.6.egg/cops/hbase/common/hbase_client_template.py", line 104, in get_entity_scanner
    self.__fix_cfs(self.__filter_columns(field_filter_list)), caching)
  File "/opt/cops/cops-related-ticket-info-fetcher/fetcher/.virtenv/lib/python2.6/site-packages/CopsHbaseCommon-f796bf2929be11c26536c3e8f3e9c0b0ecb382b3-py2.6.egg/cops/hbase/common/hbase_entity_scanner.py", line 81, in open
    self.__scanner_id = client.scannerOpenWithScan(table_name, scan)
  File "/opt/cops/cops-related-ticket-info-fetcher/.crepo/cops-hbase-common/ext-py/hbase/Hbase.py", line 1494, in scannerOpenWithScan
    return self.recv_scannerOpenWithScan()
  File "/opt/cops/cops-related-ticket-info-fetcher/.crepo/cops-hbase-common/ext-py/hbase/Hbase.py", line 1518, in recv_scannerOpenWithScan
    raise result.io
IOError: IOError(message="org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:308)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:149)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:293)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:888)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerOpenWithScan(ThriftServerRunner.java:1446)
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
    at com.sun.proxy.$Proxy14.scannerOpenWithScan(Unknown Source)
    at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerOpenWithScan.getResult(Hbase.java:4609)
    at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerOpenWithScan.getResult(Hbase.java:4593)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner$3.process(ThriftServerRunner.java:502)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:289)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: hconnection-0xa8e1bf9 closed
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1117)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:299)
    ... 23 more")
{code}
    On the thrift server side we see this:
{code}
2015-09-30 07:22:59,427 ERROR org.apache.hadoop.hbase.client.AsyncProcess: Failed to get region location
java.io.IOException: hconnection-0x4142991e closed
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1117)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1496)
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1107)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.mutateRowTs(ThriftServerRunner.java:1256)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.mutateRow(ThriftServerRunner.java:1209)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
    at com.sun.proxy.$Proxy14.mutateRow(Unknown Source)
    at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$mutateRow.getResult(Hbase.java:4334)
    at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$mutateRow.getResult(Hbase.java:4318)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hadoop.hbase.thrift.ThriftServerRunner$3.process(ThriftServerRunner.java:502)
    at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:289)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}
    HBASE-13437 has us actually execute a close on timeout -- before, we would mark the connection closed but never call close on it. A background chore goes around stamping Connections in the ConnectionCache as 'closed' if they have not been used in ten minutes. The 'close' can come in at any time, in particular between the point at which we get the table/connection and the point at which we go to use it, i.e. when we flush puts. It is at the flush-puts point that we get the above 'AsyncProcess: Failed to get region location' (it is not a failure to find the region location but rather our noticing that the connection has been closed). Attempts at reproducing this issue locally by letting the Connection time out can generate the above exception if a certain dance is done, but it is hard to do; I am not reproducing the actual usage by the aforementioned client. Next steps would be setting up a python client talking via thrift and then trying to use the connection after it has been evicted from the connection cache. Another thing to try is a pool of connections on the python side... connections are identified by user and table.

    Apache's JIRA Issue Tracker | 1 year ago | stack
    java.io.IOException: hconnection-0xa8e1bf9 closed
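
    The failure mode described above comes down to using a Table after its underlying Connection has already been closed (here by the ConnectionCache's ten-minute idle-eviction chore). Below is a minimal sketch of that ordering with the plain HBase 1.x client API; the explicit close() stands in for the background eviction, and the table, family, and qualifier names are placeholders.
{code}
// Illustration only: provokes an "hconnection-0x... closed" IOException by
// closing the shared Connection before a previously obtained Table is used,
// which is the same ordering the ConnectionCache eviction creates.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ClosedConnectionDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Connection connection = ConnectionFactory.createConnection(conf);

    // The handler grabs a Table from the cached connection...
    Table table = connection.getTable(TableName.valueOf("some-table"));

    // ...the idle-eviction chore closes the connection in the background
    // (simulated here with an explicit close)...
    connection.close();

    // ...and the later put/flush fails while locating the region, surfacing
    // as java.io.IOException: hconnection-0x... closed.
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    table.put(put);
  }
}
{code}
    A practical mitigation, in line with the report's suggestion, is to re-obtain the connection shortly before use or to keep a pool of connections keyed by user and table so an evicted one can simply be replaced.
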
  4. 0

    Thrift Server is crashing due to "RetriesExhaustedException"

    Stack Overflow | 1 year ago | freebourn
    org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location

    Root Cause Analysis

    1. java.io.IOException

      hconnection-0x47bf79d7 closed

      at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher()
    2. HBase - Client
      HTable.getScanner
      1. org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(HConnectionManager.java:1812)
      2. org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:100)
      3. org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isTableDisabled(HConnectionManager.java:980)
      4. org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:1133)
      5. org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionLocation(HConnectionManager.java:958)
      6. org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:72)
      7. org.apache.hadoop.hbase.client.ScannerCallable.prepare(ScannerCallable.java:125)
      8. org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
      9. org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:91)
      10. org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:284)
      11. org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:189)
      12. org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:184)
      13. org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:110)
      14. org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:775)
      14 frames
    3. org.apache.flink
      TableInputFormat.open
      1. org.apache.flink.addons.hbase.TableInputFormat.open(TableInputFormat.java:153)
      2. org.apache.flink.addons.hbase.TableInputFormat.open(TableInputFormat.java:48)
      2 frames
    4. flink-runtime
      Task.run
      1. org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:151)
      2. org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
      2 frames
    5. Java RT
      Thread.run
      1. java.lang.Thread.run(Thread.java:745)
      1 frame