Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Samebug tips

  1. Normally, the connection to an HTTP server is closed after each response; when the server closes such a connection, the client usually just reopens it. Check that your network connection is stable when making the request, and retry on a fresh connection if the read fails.
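The tip above, reopen the connection and retry when the remote host drops it, can be sketched as a simple retry loop with backoff. This is a minimal illustration, not the library's actual recovery logic: `sendRequest` is a hypothetical stand-in that simulates a server dropping the connection on the first two attempts rather than a real HTTP or Kafka call.

```java
import java.io.IOException;

public class RetryOnReset {
    static int attempts = 0;

    // Hypothetical request: simulates a remote host that forcibly closes
    // the connection twice before a read finally succeeds.
    static String sendRequest() throws IOException {
        if (++attempts < 3) {
            throw new IOException("An existing connection was forcibly closed by the remote host");
        }
        return "OK";
    }

    // Retry the request on IOException, reconnecting after a short backoff.
    static String sendWithRetry(int maxRetries) throws IOException {
        IOException last = null;
        for (int i = 0; i <= maxRetries; i++) {
            try {
                return sendRequest(); // in real code: reopen the socket, then resend
            } catch (IOException e) {
                last = e;
                try {
                    Thread.sleep(100L * (i + 1)); // simple linear backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted during retry backoff", ie);
                }
            }
        }
        throw last; // give up after maxRetries reconnect attempts
    }

    public static void main(String[] args) throws IOException {
        System.out.println(sendWithRetry(5));
    }
}
```

In a real consumer (such as the Kafka `SimpleConsumer` in the trace below), the catch block would close the stale channel and open a new connection before resending the request.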

Solutions on the web

via Stack Overflow by Charles.Zhu, 1 year ago
Remote host closed an existing connection
via Stack Overflow by Albert Chu, 2 years ago
An existing connection was forcibly closed by the remote host
via nabble.com by Unknown author, 2 years ago
An established connection was aborted by the software in your host machine
via j-interop by jeremystone, 2 years ago
An existing connection was forcibly closed by the remote host
via Google Groups by Manoj Santhakumaran, 1 year ago
An existing connection was forcibly closed by the remote host
via j-interop by r_markham, 2 years ago
An existing connection was forcibly closed by the remote host
java.io.IOException: 远程主机强迫关闭了一个现有的连接。(In English: Remote host forcibly closed an existing connection)
	at sun.nio.ch.SocketDispatcher.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:197)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:206)
	at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
	at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:81)
	at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
	at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
	at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:86)
	at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
	at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149)
	at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
	at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:77)
	at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:67)
	at storm.kafka.PartitionManager.<init>(PartitionManager.java:83)
	at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
	at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
	at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135)
	at backtype.storm.daemon.executor$fn__3371$fn__3386$fn__3415.invoke(executor.clj:565)
	at backtype.storm.util$async_loop$fn__460.invoke(util.clj:463)
	at clojure.lang.AFn.run(AFn.java:24)
	at java.lang.Thread.run(Thread.java:745)