java.util.concurrent.RejectedExecutionException: Worker has already been shutdown

GitHub | DaveChapman | 4 months ago
  1. 0

    OOM -> LockObtainFailedException -> All data lost

    GitHub | 4 months ago | DaveChapman
    java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
  2. 0

    GitHub comment 14498#157461451

    GitHub | 1 year ago | mdiehm
    java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
  3. 0

    GitHub comment 3317#191659623

    GitHub | 9 months ago | benjaminrigaud
    java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
  4. 0

    Warning on shutdown

    GitHub | 1 year ago | schleichardt
    java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
  5. 0

    2.2.3 Noisy shutdown w/exception

    Google Groups | 3 years ago | tigerfoot
    java.util.concurrent.RejectedExecutionException: Worker has already been shutdown

    Root Cause Analysis

    1. java.util.concurrent.RejectedExecutionException

      Worker has already been shutdown

      at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask()
    2. Netty
      DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream
      1. org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
      2. org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
      3. org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
      4. org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
      5. org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
      6. org.jboss.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
      7. org.jboss.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:636)
      8. org.jboss.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
      9. org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
      10. org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
      11. org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:781)
      12. org.jboss.netty.channel.Channels.write(Channels.java:725)
      13. org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
      14. org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
      15. org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
      16. org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)
      16 frames
    3. ElasticSearch
      HttpPipeliningHandler.handleDownstream
      1. org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
      1 frame
    4. Netty
      DefaultChannelPipeline.sendDownstream
      1. org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
      2. org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
      2 frames
    5. ElasticSearch
      PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run
      1. org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:146)
      2. org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)
      3. org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)
      4. org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:89)
      5. org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85)
      6. org.elasticsearch.action.bulk.TransportBulkAction$2.finishHim(TransportBulkAction.java:356)
      7. org.elasticsearch.action.bulk.TransportBulkAction$2.onFailure(TransportBulkAction.java:351)
      8. org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:95)
      9. org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:567)
      10. org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onClusterServiceClose(TransportReplicationAction.java:552)
      11. org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:222)
      12. org.elasticsearch.cluster.service.InternalClusterService.add(InternalClusterService.java:282)
      13. org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:153)
      14. org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:98)
      15. org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:90)
      16. org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:544)
      17. org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:596)
      18. org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:465)
      19. org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
      20. org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:547)
      21. org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.postAdded(ClusterStateObserver.java:206)
      22. org.elasticsearch.cluster.service.InternalClusterService$1.run(InternalClusterService.java:296)
      23. org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
      24. org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
      24 frames
    6. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
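
    The bottom of the trace shows a response write being scheduled onto a Netty NIO worker that has already been shut down: on cluster service close, Elasticsearch's TransportBulkAction fails the request, NettyHttpChannel.sendResponse tries to write the reply, and AbstractNioSelector.registerTask rejects the task because the worker is no longer accepting work. Below is a minimal sketch of the same failure mode using only the JDK's java.util.concurrent (no Netty or Elasticsearch classes; the "send response" task is a stand-in for the late write):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionException;

    public class ShutdownRejectionSketch {

        public static void main(String[] args) {
            // A single-threaded executor stands in for the Netty NIO worker.
            ExecutorService worker = Executors.newSingleThreadExecutor();

            // Node shutdown: the worker stops accepting new tasks.
            worker.shutdown();

            try {
                // A task submitted after shutdown is rejected, analogous to
                // AbstractNioSelector.registerTask rejecting the late response write.
                worker.execute(() -> System.out.println("send response"));
            } catch (RejectedExecutionException e) {
                // Corresponds to "Worker has already been shutdown" in the trace above.
                System.err.println("Rejected after shutdown: " + e);
            }
        }
    }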