java.net.SocketException: Socket closed

Jenkins JIRA | Amit Naudiyal | 9 months ago
  1.

    My only Jenkins slave dies every day at almost the same time with the error below:
    {code:java}
    Jul 29, 2016 3:02:42 AM hudson.slaves.ChannelPinger$1 onDead
    INFO: Ping failed. Terminating the channel channel.
    java.util.concurrent.TimeoutException: Ping started at 1469775521795 hasn't completed by 1469775762018
        at hudson.remoting.PingThread.ping(PingThread.java:126)
        at hudson.remoting.PingThread.run(PingThread.java:85)
    Jul 29, 2016 3:02:46 AM hudson.remoting.SynchronousCommandTransport$ReaderThread run
    SEVERE: I/O error in channel channel
    java.net.SocketException: Socket closed
        at java.net.SocketInputStream.read(SocketInputStream.java:203)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    Jul 29, 2016 3:02:47 AM hudson.remoting.jnlp.Main$CuiListener status
    INFO: Terminated
    Jul 29, 2016 3:03:00 AM jenkins.slaves.restarter.JnlpSlaveRestarterInstaller$2$1 onReconnect
    INFO: Restarting agent via jenkins.slaves.restarter.UnixSlaveRestarter@e374354
    Jul 29, 2016 3:03:39 AM hudson.remoting.jnlp.Main createEngine
    INFO: Trying protocol: JNLP2-connect
    Jul 29, 2016 3:03:41 AM hudson.remoting.jnlp.Main$CuiListener status
    INFO: Server didn't accept the handshake: ICI-Internal_1 is already connected to this master. Rejecting this connection.
    {code}
    At the same time, the Jenkins server (master) shows the following logs:
    {code:java}
    Jul 29, 2016 3:02:37 AM hudson.slaves.ChannelPinger$1 onDead
    INFO: Ping failed. Terminating the channel ICI-Internal_1.
    java.util.concurrent.TimeoutException: Ping started at 1469775517149 hasn't completed by 1469775757149
        at hudson.remoting.PingThread.ping(PingThread.java:126)
        at hudson.remoting.PingThread.run(PingThread.java:85)
    Jul 29, 2016 3:03:40 AM hudson.TcpSlaveAgentListener$ConnectionHandler run
    INFO: Accepted connection #2 from /12.170.11.58:46412
    Jul 29, 2016 3:03:41 AM org.jenkinsci.remoting.engine.JnlpServerHandshake error
    WARNING: TCP agent connection handler #2 with /12.170.11.58:46412 is aborted: ICI-Internal_1 is already connected to this master. Rejecting this connection.
    Jul 29, 2016 3:03:41 AM hudson.TcpSlaveAgentListener$ConnectionHandler run
    INFO: Accepted connection #3 from /12.170.11.58:46413
    Jul 29, 2016 3:03:41 AM org.jenkinsci.remoting.engine.JnlpServerHandshake error
    WARNING: TCP agent connection handler #3 with /12.170.11.58:46413 is aborted: ICI-Internal_1 is already connected to this master. Rejecting this connection.
    {code}
    All I can tell is that the client and server fail to reconnect to each other for some reason. After the error, the slave works for the rest of the day once it is restarted manually. There is no firewall or SELinux running on either server. -Amit

    Jenkins JIRA | 9 months ago | Amit Naudiyal
    java.net.SocketException: Socket closed
  2.

    A large Fuse fabric installation runs on a collection of virtual machines. After an outage at the VM networking level, the customer reports that the ensemble did not recover normal operation, and a complete restart of the installation was required. While I can't reproduce the customer's exact problem, I can reproduce what I believe is a very similar one. All that is needed is to create a 3-node ensemble on virtual machines, suspend one of the VMs for some time, then wake it up. If I run container-list on the machine that was suspended, it fails completely: the command does not exist. This is the expected result for a container that does not consider itself part of a fabric. However, the VM and the container are live, and there is network connectivity between the VMs. Looking at the logs for the container that gets resumed, I see a whole slew of ZooKeeper-related network exceptions. "java.lang.IllegalStateException: Client has been stopped" seems particularly relevant here. It does look as if there are some connection-related problems from which ZooKeeper simply never recovers.
    {code}
    per.server.quorum.LearnerHandler 562 | 53 - io.fabric8.fabric-zookeeper - 1.0.0.redhat-379 | Unexpected exception causing shutdown while sock still open
    java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)[:1.7.0_55]
        at java.net.SocketInputStream.read(SocketInputStream.java:152)[:1.7.0_55
    orum.QuorumCnxManager$RecvWorker 762 | 53 - io.fabric8.fabric-zookeeper - 1.0.0.redhat-379 | Connection broken for id 1, my id = 2, error =
    java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:196)[:1.7.0_55]
        at java.net.SocketInputStream.read(SocketInputStream.java:122)[:1.7.0_55]
    2014-05-15 18:56:02,521 | ERROR | ZooKeeperGroup-0 | ConnectionState | g.apache.curator.ConnectionState 194 | 53 - io.fabric8.fabric-zookeeper - 1.0.0.redhat-379 | Connection timed out for connection string (lars:2182,toot:2181,zoot:2181) and timeout (15000) / elapsed (15004)
    org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
        at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:191)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:86)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:116)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:456)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at ...
    2014-05-15 18:57:12,460 | WARN | 0:0:0:0:0:0:2181 | Learner | zookeeper.server.quorum.Follower 89 | 53 - io.fabric8.fabric-zookeeper - 1.0.0.redhat-379 | Exception when following the leader
    java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:196)[:1.7.0_55]
        at java.net.SocketInputStream.read(SocketInputStream.java:122)[:1.7.0_55]
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)[:1.7.0_55]
    2014-05-15 18:57:12,520 | INFO | 0:0:0:0:0:0:2181 | Learner | zookeeper.server.quorum.Follower 166 | 53 - io.fabric8.fabric-zookeeper - 1.0.0.redhat-379 | shutdown called
    java.lang.Exception: shutdown Follower
        at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:744)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
    2014-05-15 19:05:31,695 | WARN | pool-61-thread-1 | GitDataStore | abric8.git.internal.GitDataStore 1208 | 85 - io.fabric8.fabric-git - 1.0.0.redhat-379 | Failed to perform a pull java.lang.IllegalStateException: Client has been stopped
    java.lang.IllegalStateException: Client has been stopped
        at com.google.common.base.Preconditions.checkState(Preconditions.java:150)[83:com.google.guava:15.0.0]
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:320)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:252)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:239)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:39)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
        at io.fabric8.zookeeper.utils.ZooKeeperUtils.setData(ZooKeeperUtils.java:204)[53:io.fabric8.fabric-zookeeper:1.0.0.redhat-379]
    {code}

    JBoss Issue Tracker | 2 years ago | Kevin Boone
    java.net.SocketException: Connection reset
  3.

    Mule Flow fails on first try but works after with the LDAP connector

    Stack Overflow | 2 years ago
    java.net.SocketException: Connection reset
  4.

    HELP: Http proxy's shows different behavior in JVM and JIT!

    Google Groups | 2 decades ago | Marco Jacob
    java.net.SocketException: Connection shutdown
  5.

    Mule LDAP bug Socket Connection reset (java.net.SocketException) - MuleSoft

    mulesoft.com | 1 year ago
    java.net.SocketException: Connection reset
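Several of the results above end in java.net.SocketException: Connection reset, which a blocked read() raises when the peer aborts the TCP connection (sends an RST segment) rather than closing it cleanly. A minimal, stdlib-only sketch that provokes the same exception on a loopback connection (the class and method names are illustrative, not from any project above; SO_LINGER with timeout 0 turns close() into an abortive close):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectionResetDemo {
    /** Aborts one end of a loopback connection and returns the exception the other end's read() sees. */
    static IOException provokeReset() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {        // ephemeral port
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket accepted = server.accept();
            accepted.setSoLinger(true, 0);                       // close() now sends RST instead of FIN
            accepted.close();
            Thread.sleep(100);                                   // give the RST time to arrive
            try {
                client.getInputStream().read();                  // fails once the RST has been seen
                return null;
            } catch (IOException e) {
                return e;                                        // expected: SocketException("Connection reset")
            } finally {
                client.close();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("reader observed: " + provokeReset());
    }
}
```

In reports like the Mule/LDAP and ZooKeeper ones above, the RST typically comes from a peer, proxy, or OS that has dropped its connection state, which is why the error tends to appear on the first use of a stale pooled connection and disappear on retry.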


    Root Cause Analysis

    1. java.net.SocketException

      Socket closed

      at java.net.SocketInputStream.read()
    2. Java RT
      BufferedInputStream.fill
      1. java.net.SocketInputStream.read(SocketInputStream.java:203)
      2. java.net.SocketInputStream.read(SocketInputStream.java:141)
      3. java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
      3 frames
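The root-cause frames above show the generic shape of "Socket closed": one thread is blocked in SocketInputStream.read() (here, Jenkins remoting's reader thread) while another thread, in this case the ping-timeout handler, closes the socket underneath it. A minimal, stdlib-only sketch of that interaction (class and method names are illustrative):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketClosedDemo {
    /** Blocks one thread in read() and closes the socket from another; returns the exception the reader saw. */
    static IOException provokeAsyncClose() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {        // ephemeral port
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket accepted = server.accept();
            final IOException[] seen = new IOException[1];

            Thread reader = new Thread(() -> {
                try {
                    client.getInputStream().read();              // blocks: the peer never writes
                } catch (IOException e) {
                    seen[0] = e;                                 // expected: SocketException("Socket closed")
                }
            });
            reader.start();
            Thread.sleep(200);      // let the reader block inside read()
            client.close();         // plays the role of the ping-timeout handler tearing down the channel
            reader.join();
            accepted.close();
            return seen[0];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("reader observed: " + provokeAsyncClose());
    }
}
```

The exception itself is only the symptom: in the Jenkins logs above, the trigger is the PingThread timeout that decided to terminate the channel, so investigation belongs at whatever stalled the ping for four minutes, not at the socket that was closed as a result.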