java.util.concurrent.TimeoutException

  • Two days ago we encountered an issue where all builds attempted on two of our nodes failed. Console output for each failed build was the following:
{noformat}
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: hudson.remoting.ChannelClosedException: channel is already closed
    at org.jenkinsci.plugins.envinject.service.EnvironmentVariablesNodeLoader.gatherEnvironmentVariablesNode(EnvironmentVariablesNodeLoader.java:75)
    at org.jenkinsci.plugins.envinject.EnvInjectListener.loadEnvironmentVariablesNode(EnvInjectListener.java:81)
    at org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:39)
    at hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:574)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:494)
    at hudson.model.Run.execute(Run.java:1741)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:98)
    at hudson.model.Executor.run(Executor.java:374)
Caused by: hudson.remoting.ChannelClosedException: channel is already closed
    at hudson.remoting.Channel.send(Channel.java:550)
    at hudson.remoting.Request.call(Request.java:129)
    at hudson.remoting.Channel.call(Channel.java:752)
    at hudson.FilePath.act(FilePath.java:1073)
    at org.jenkinsci.plugins.envinject.service.EnvironmentVariablesNodeLoader.gatherEnvironmentVariablesNode(EnvironmentVariablesNodeLoader.java:44)
    ... 8 more
Caused by: java.io.IOException
    at hudson.remoting.Channel.close(Channel.java:1109)
    at hudson.slaves.ChannelPinger$1.onDead(ChannelPinger.java:118)
    at hudson.remoting.PingThread.ping(PingThread.java:126)
    at hudson.remoting.PingThread.run(PingThread.java:85)
Caused by: java.util.concurrent.TimeoutException: Ping started at 1444039585493 hasn't completed by 1444039825494
    ... 2 more
{noformat}
These are Windows slaves (JNLP) which run the slave jar as a service. We attempted to restart the slave services first, then rebooted the slaves, and ultimately had to restart the Jenkins master to recover from the problem. Our after-action root-cause analysis turned up the following in the Jenkins master's log file:
{noformat}
Exception in thread "Channel reader thread: DC-JENWIN-001" java.lang.OutOfMemoryError: Java heap space
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Exception in thread "Channel reader thread: DC-JENWIN-003" java.lang.OutOfMemoryError: Java heap space
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
{noformat}
These entries are remarkable in that they have no date-time stamps; it seems they were not emitted by the normal logger. Nonetheless, by looking at the log entries immediately preceding and following them, we were able to localize them to within a minute's accuracy, and they are clearly the cause of the problem.
Moreover, after these two entries occurred, entries similar to the following appeared in the master log at regular intervals:
{noformat}
Oct 05, 2015 10:08:54 AM com.cloudbees.jenkins.support.AsyncResultCache run
INFO: Could not retrieve metrics data from DC-JENWIN-001 for caching
java.util.concurrent.TimeoutException
    at hudson.remoting.Request$1.get(Request.java:272)
    at hudson.remoting.Request$1.get(Request.java:206)
    at hudson.remoting.FutureAdapter.get(FutureAdapter.java:59)
    at com.cloudbees.jenkins.support.AsyncResultCache.run(AsyncResultCache.java:95)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{noformat}
These errors appeared for both of the affected slave nodes, and only for them: retrieving "metrics", retrieving the "java version", retrieving the "environment", and all the various other statuses.
h2. Bottom Line
Connectivity to these slaves suffered a critical and apparently unrecoverable failure. The failure originated on the master side of the connection, apparently due to a heap space issue. Yet the master did not properly recognize the nodes as offline and continued to send builds to them. Given that jobs tend to be "sticky" to a certain slave as long as it is available, this effectively rendered several critical jobs unbuildable.
h2. Recommended Resolution
Though exact reproduction of this problem may be impossible, it is hoped that some steps can be taken to enable Jenkins to correctly identify this problem and take corrective, or at least mitigating, action, such as re-initializing whatever thread/component was inoperable or marking the node as offline. (Sketches of the ping mechanism and of such offline detection follow the reports below.)
    by Kenneth Baltrinic
  • When a JNLP slave detects a ping timeout, it tries to reconnect. But if the master has not noticed the timeout yet, it rejects the new connection from the slave. The JNLP slave agent process aborts once the connection is rejected in this way. (A retry-with-backoff sketch for this race follows the reports below.) STDOUT of the JNLP process:
{noformat}
INFO: Ping failed. Terminating the channel.
java.util.concurrent.TimeoutException: Ping started on 1456918109582 hasn't completed at 1456918349582
    at hudson.remoting.PingThread.ping(PingThread.java:125)
    at hudson.remoting.PingThread.run(PingThread.java:86)
Mar 02, 2016 6:32:29 AM hudson.remoting.SynchronousCommandTransport$ReaderThread run
SEVERE: I/O error in channel channel
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.read(SocketInputStream.java:190)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:82)
    at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:72)
    at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:103)
    at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:33)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
Mar 02, 2016 6:32:29 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Terminated
Mar 02, 2016 6:32:39 AM jenkins.slaves.restarter.JnlpSlaveRestarterInstaller$2$1 onReconnect
INFO: Restarting slave via jenkins.slaves.restarter.UnixSlaveRestarter@6523ff4a
Mar 02, 2016 6:32:42 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up slave: dev127-virt2
Mar 02, 2016 6:32:42 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Mar 02, 2016 6:32:42 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins.acme.com/hudson/, http://hudson.acme.com/hudson/]
Mar 02, 2016 6:32:42 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins.acme.com:37003
Mar 02, 2016 6:32:42 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Mar 02, 2016 6:32:42 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: dev127-virt2 is already connected to this master. Rejecting this connection.
java.lang.Exception: The server rejected the connection: dev127-virt2 is already connected to this master. Rejecting this connection.
    at hudson.remoting.Engine.onConnectionRejected(Engine.java:306)
    at hudson.remoting.Engine.run(Engine.java:276)
{noformat}
Slave log on master:
{noformat}
JNLP agent connected from /10.16.180.145
<===[JENKINS REMOTING CAPACITY]===>ERROR: Connection terminated
Connection terminated
ha:AAAAWB+LCAAAAAAAAP9b85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=java.io.IOException: Connection aborted: org.jenkinsci.remoting.nio.NioChannelHub$MonoNioTransport@131dbee3[name=dev127-virt2]
    at org.jenkinsci.remoting.nio.NioChannelHub$NioTransport.abort(NioChannelHub.java:211)
    at org.jenkinsci.remoting.nio.NioChannelHub.run(NioChannelHub.java:631)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcher.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
    at sun.nio.ch.IOUtil.write(IOUtil.java:40)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:336)
    at org.jenkinsci.remoting.nio.FifoBuffer$Pointer.send(FifoBuffer.java:130)
    at org.jenkinsci.remoting.nio.FifoBuffer.send(FifoBuffer.java:254)
    at org.jenkinsci.remoting.nio.NioChannelHub.run(NioChannelHub.java:622)
    ... 7 more
Slave.jar version: 2.47
This is a Unix slave
Slave successfully connected and online
Connection terminated
{noformat}
Master log:
{noformat}
2016-03-02 06:32:38,352 WARNING [hudson.node_monitors.AbstractAsyncNodeMonitorDescriptor] (Monitoring thread for Clock Difference started on Wed Mar 02 06:32:08 EST 2016) Failed to monitor dev127-virt2 for Clock Difference
java.util.concurrent.TimeoutException
    at hudson.remoting.Request$1.get(Request.java:271)
    at hudson.remoting.Request$1.get(Request.java:206)
    at hudson.remoting.FutureAdapter.get(FutureAdapter.java:59)
    at hudson.node_monitors.AbstractAsyncNodeMonitorDescriptor.monitor(AbstractAsyncNodeMonitorDescriptor.java:97)
    at hudson.node_monitors.AbstractNodeMonitorDescriptor$Record.run(AbstractNodeMonitorDescriptor.java:280)
... (all monitors time out)
2016-03-02 06:32:42,860 INFO [hudson.TcpSlaveAgentListener] (TCP slave agent connection handler #41773 with /10.16.180.145:58248) Accepted connection #41773 from /10.16.180.145:58248
2016-03-02 06:32:42,865 WARNING [jenkins.slaves.JnlpSlaveHandshake] (TCP slave agent connection handler #41773 with /10.16.180.145:58248) TCP slave agent connection handler #41773 with /10.16.180.145:58248 is aborted: dev127-virt2 is already connected to this master. Rejecting this connection.
2016-03-02 06:32:42,866 WARNING [jenkins.slaves.JnlpSlaveHandshake] (TCP slave agent connection handler #41773 with /10.16.180.145:58248) TCP slave agent connection handler #41773 with /10.16.180.145:58248 is aborted: Unrecognized name: dev127-virt2
...
2016-03-02 06:33:43,630 INFO [hudson.slaves.ChannelPinger] (Ping thread for channel hudson.remoting.Channel@5caca20e:dev127-virt2) Ping failed. Terminating the channel.
java.util.concurrent.TimeoutException: Ping started on 1456918183629 hasn't completed at 1456918423630
    at hudson.remoting.PingThread.ping(PingThread.java:125)
    at hudson.remoting.PingThread.run(PingThread.java:86)
...
2016-03-02 06:38:26,902 WARNING [hudson.node_monitors.AbstractAsyncNodeMonitorDescriptor] (Monitoring thread for Free Temp Space started on Wed Mar 02 06:38:26 EST 2016) Failed to monitor dev127-virt2 for Free Temp Space
hudson.remoting.ChannelClosedException: channel is already closed
    at hudson.remoting.Channel.send(Channel.java:549)
    at hudson.remoting.Request.callAsync(Request.java:204)
    at hudson.remoting.Channel.callAsync(Channel.java:778)
    at hudson.node_monitors.AbstractAsyncNodeMonitorDescriptor.monitor(AbstractAsyncNodeMonitorDescriptor.java:76)
    at hudson.node_monitors.AbstractNodeMonitorDescriptor$Record.run(AbstractNodeMonitorDescriptor.java:280)
Caused by: java.io.IOException
    at hudson.remoting.Channel.close(Channel.java:1105)
    at hudson.slaves.ChannelPinger$1.onDead(ChannelPinger.java:110)
    at hudson.remoting.PingThread.ping(PingThread.java:125)
    at hudson.remoting.PingThread.run(PingThread.java:86)
Caused by: java.util.concurrent.TimeoutException: Ping started on 1456918183629 hasn't completed at 1456918423630
    ... 2 more
... (all monitors fail with "channel is already closed")
{noformat}
    by Oliver Gondža
    • java.util.concurrent.TimeoutException
        at hudson.remoting.Request$1.get(Request.java:272)
        at hudson.remoting.Request$1.get(Request.java:206)
        at hudson.remoting.FutureAdapter.get(FutureAdapter.java:59)
        at com.cloudbees.jenkins.support.AsyncResultCache.run(AsyncResultCache.java:95)
        at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
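
A note on the mechanism behind both reports: Jenkins keeps each master–slave channel alive with a ping thread that periodically sends a ping command and closes the channel if the reply does not arrive within a timeout; the two timestamps in the "Ping started ... hasn't completed ..." messages above are exactly 240 seconds apart. The sketch below illustrates that watchdog pattern in plain Java. It is a minimal sketch, not the actual hudson.remoting.PingThread source; the class name, timings, and the in-process stand-in for the channel are assumptions for illustration.
{code:java}
// Minimal sketch of a ping watchdog in the spirit of hudson.remoting.PingThread.
// NOT the actual Jenkins source; names and timings are illustrative.
import java.util.concurrent.*;

public class PingWatchdog extends Thread {

    // Stands in for the remoting channel; a real ping round-trips to the peer.
    private final ExecutorService channel = Executors.newSingleThreadExecutor();
    private final long intervalMillis;
    private final long timeoutMillis;

    public PingWatchdog(long intervalMillis, long timeoutMillis) {
        super("Ping thread");
        this.intervalMillis = intervalMillis;
        this.timeoutMillis = timeoutMillis;
        setDaemon(true);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                ping();
                Thread.sleep(intervalMillis);
            }
        } catch (InterruptedException e) {
            // Channel is shutting down; stop pinging.
        }
    }

    private void ping() throws InterruptedException {
        long started = System.currentTimeMillis();
        Future<?> pong = channel.submit(() -> { /* peer would acknowledge here */ });
        try {
            pong.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (ExecutionException | TimeoutException e) {
            // This is the state behind the log line
            // "Ping started at <t0> hasn't completed by <t1>".
            onDead(new TimeoutException("Ping started at " + started
                    + " hasn't completed by " + System.currentTimeMillis()));
        }
    }

    protected void onDead(Throwable cause) {
        // Jenkins closes the channel here (see ChannelPinger$1.onDead in the
        // traces above), after which every later use fails with
        // "hudson.remoting.ChannelClosedException: channel is already closed".
        System.err.println("Ping failed. Terminating the channel. " + cause);
    }
}
{code}
In the first report the trigger is on the master side: the channel reader thread died of OutOfMemoryError, so no ping reply could ever be read and onDead fired even though the slave itself was healthy.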
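
The second report boils down to a race on reconnect: the slave's ping watchdog fires first, the slave reconnects immediately, and the master, still believing the old channel is live, rejects it with "already connected to this master". A slave-side mitigation is to treat that rejection as retriable and back off until the master's own ping reaps the stale channel. The following is a hypothetical sketch of such a loop; the Connector interface, the timings, and the error handling are assumptions, not the hudson.remoting.Engine API.
{code:java}
// Hypothetical reconnect loop with exponential backoff. Illustrates the
// mitigation only; this is not how hudson.remoting.Engine is structured.
import java.io.IOException;

public class ReconnectLoop {

    /** Abstracts "open a JNLP connection"; throws IOException on rejection. */
    interface Connector {
        void connect() throws IOException;
    }

    static void reconnectWithBackoff(Connector connector, int maxAttempts)
            throws IOException, InterruptedException {
        long backoffMillis = 10_000; // assumption: start at 10 s
        for (int attempt = 1; ; attempt++) {
            try {
                connector.connect();
                return; // connected; the master accepted this attempt
            } catch (IOException rejected) {
                // e.g. "dev127-virt2 is already connected to this master."
                // The stale channel should be reaped once the master's own
                // ping times out, so waiting and retrying can succeed.
                if (attempt >= maxAttempts) {
                    throw rejected; // give up; let a service wrapper restart us
                }
                System.err.println("Rejected (" + rejected.getMessage()
                        + "); retrying in " + backoffMillis + " ms");
                Thread.sleep(backoffMillis);
                backoffMillis = Math.min(backoffMillis * 2, 300_000); // cap: 5 min
            }
        }
    }
}
{code}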
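
Finally, the "Recommended Resolution" in the first report asks the master to recognize a zombie channel and mark the node offline instead of continuing to schedule builds on it. One way to model that is to count consecutive timeouts on cheap health probes (the same kind of calls the node monitors and AsyncResultCache were repeatedly failing above) and take the node offline past a threshold. This is a sketch of the idea in plain Java under those assumptions; it is not the Jenkins NodeMonitor API, and every name in it is illustrative.
{code:java}
// Sketch of the suggested mitigation: after N consecutive channel-call
// timeouts, proactively mark the node offline. Models the idea only.
import java.util.concurrent.*;

public class ZombieNodeDetector {

    private final ExecutorService channel;   // stands in for the remoting channel
    private final int failureThreshold;
    private int consecutiveTimeouts = 0;

    public ZombieNodeDetector(ExecutorService channel, int failureThreshold) {
        this.channel = channel;
        this.failureThreshold = failureThreshold;
    }

    /** Runs one health probe; returns false once the node should go offline. */
    public boolean probe(long timeoutMillis) {
        Future<String> status = channel.submit(() -> "ok"); // e.g. fetch java version
        try {
            status.get(timeoutMillis, TimeUnit.MILLISECONDS);
            consecutiveTimeouts = 0;
            return true;
        } catch (TimeoutException e) {
            // The repeating AsyncResultCache entries above are exactly this
            // failure mode, recurring forever without the node going offline.
            if (++consecutiveTimeouts >= failureThreshold) {
                markOffline();
                return false;
            }
            return true;
        } catch (InterruptedException | ExecutionException e) {
            markOffline();
            return false;
        }
    }

    private void markOffline() {
        // In Jenkins this would correspond to taking the Computer offline so
        // the scheduler stops sending "sticky" jobs to a dead channel.
        System.err.println("Node unresponsive " + consecutiveTimeouts
                + " times in a row; marking offline.");
    }
}
{code}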
