
  • We have encountered a few issues with handling an HttpAsyncClient request made while the process has too many open file descriptors:

    1. A .get() call made on the future returned from the httpclient's .execute() method hangs indefinitely.
    2. No call is made to any of the methods on the {{FutureCallback<HttpResponse>}} object provided as an argument to the httpclient's .execute() method. I was expecting a call to .failed() to occur.
    3. A call to the httpclient's .close() method does not error out, but also does not shut down all of the IOReactor "I/O Dispatcher" threads. In normal cases, these threads are shut down when .close() is called.

    I've attached a couple of Java source files which demonstrate the problem, based on the example. Just after the httpclient is started, the class repeatedly allocates bare sockets and holds onto them. This is done to force the application into the state where file descriptors have been exhausted. The httpclient.execute() call does not throw an error. The class repeatedly calls .get() on the future response with a prolonged timeout, breaking out of the loop when the client's .isRunning() state changes from true to false. Checking for a change in the .isRunning() state is a bit contrived here; ideally an API client would not have to do this. From the implementation of the {{CloseableHttpAsyncClientBase}} constructor, I noticed that the reactor thread's run() method catches an exception at request startup and sets the client status to {{STOPPED}}, and this seemed like the only way a client could detect that a problem had occurred in this situation:
{code:java}
@Override
public void run() {
    try {
        final IOEventDispatch ioEventDispatch = new InternalIODispatch(handler);
        connmgr.execute(ioEventDispatch);
    } catch (final Exception ex) {
        log.error("I/O reactor terminated abnormally", ex);
    } finally {
        status.set(Status.STOPPED);
    }
}
{code}
The {{FutureCallback}} implementation writes a message to stdout when any of its methods is invoked. In this case, no message is written to stdout even though the request has effectively failed. When this program is run from the command line as {{java abnormalioreactor.AbnormalIOReactor}}, it hangs after the word "Done" appears, which is printed after the .close() method is called on the httpclient. Running jstack from the command line, I see that various "I/O Dispatcher" threads are still running for the client even though I had expected them to have been shut down:
{noformat}
"I/O dispatcher 8" #19 prio=5 os_prio=31 tid=0x00007f978e800000 nid=0x6503 runnable [0x0000000120cf6000]
   java.lang.Thread.State: RUNNABLE
        at Method)
        - locked <0x000000076ba79bf8> (a$2)
        - locked <0x000000076ba79be8> (a java.util.Collections$UnmodifiableSet)
        - locked <0x000000076ba79ac8> (a
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$
{noformat}
I had been thinking that in this situation it would be better for the httpclient not to be implicitly stopped, so that it could be used for subsequent requests if/when the number of open file descriptors has dropped. Also, it would be good for either the httpclient.execute() call to throw an exception right away, or at least for calls to .get() on the future returned from .execute() to throw an exception and for the .failed() method on the {{FutureCallback<HttpResponse>}} object to be called. Note that I ran this application on my MacBook Pro, running OS X Yosemite.
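The .get()-with-timeout polling loop described above can be sketched with JDK stdlib types alone. This is a simplified stand-in, not HttpAsyncClient code: a never-completing {{CompletableFuture}} plays the role of the hung response future, and an {{AtomicBoolean}} plays the role of the client's .isRunning() flag, which here is flipped by a background thread the way the dying I/O reactor flips the client status to {{STOPPED}}.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

public class PollingWorkaround {
    public static void main(String[] args) throws Exception {
        // Stand-in for the Future<HttpResponse> that never completes
        // when file descriptors are exhausted.
        CompletableFuture<String> response = new CompletableFuture<>();

        // Stand-in for CloseableHttpAsyncClient.isRunning(); flipped
        // asynchronously once the "reactor" terminates abnormally.
        AtomicBoolean running = new AtomicBoolean(true);
        new Thread(() -> {
            try {
                Thread.sleep(200);
            } catch (InterruptedException ignored) {
            }
            running.set(false);
        }).start();

        // The contrived loop: block with a timeout, then re-check liveness,
        // because neither .get() nor the callback ever signals failure.
        while (true) {
            try {
                String result = response.get(100, TimeUnit.MILLISECONDS);
                System.out.println("got: " + result);
                break;
            } catch (TimeoutException e) {
                if (!running.get()) {
                    System.out.println("client stopped; treating request as failed");
                    break;
                }
            }
        }
    }
}
```

The point of the sketch is that the caller has no failure signal at all: it must impose its own timeout on every .get() and then consult a liveness flag out of band, which is exactly the API awkwardness the report complains about.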
We've seen this problem on other OSes as well, including CentOS 7. I reproduced it with a few different versions of HttpAsyncClient: 4.0.2, 4.1.1, a custom build from trunk, and a custom build from the 4.1.x branch. This isn't a situation we run into often with our applications; it mostly occurs when users haven't tuned their application settings to account for the number of file descriptors a Java process needs. It would be nice if HttpAsyncClient could gracefully fail a request when open file descriptors are temporarily exhausted under a load spike, but leave the client in a usable state for future requests.
    Reported by Jeremy Barlow.
    • The resulting exception:
{noformat}
        at java.nio.channels.spi.AbstractSelector.close(
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.doShutdown(
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(
        at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(
        at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$
Caused by: Too many open files in system
        at Method)
        at<clinit>(
        ... 8 more
{noformat}
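The "Too many open files" failure above correlates with descriptor exhaustion, which an application can observe from inside the JVM before it issues requests. The sketch below uses the real JDK MBean {{com.s.m.UnixOperatingSystemMXBean}} (HotSpot, Unix-like platforms only); the 90% threshold and the "defer requests" reaction are hypothetical choices for illustration, not anything HttpAsyncClient does.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.s.m.UnixOperatingSystemMXBean;

public class FdPressure {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            long open = unix.getOpenFileDescriptorCount();
            long max = unix.getMaxFileDescriptorCount();
            System.out.println("open=" + open + " max=" + max);
            // Hypothetical guard: back off before the I/O reactor can die
            // mid-request from descriptor exhaustion.
            if (open > max * 0.9) {
                System.out.println("descriptor pressure high; deferring new requests");
            }
        } else {
            System.out.println("file descriptor counts not available on this platform");
        }
    }
}
```

This kind of pre-flight check is only a mitigation for the tuning problem mentioned above; it does not remove the need for the client itself to fail the request gracefully.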