
Recommended solutions based on your search

Solutions on the web

via Google Groups by LeBlanc, Jacob, 1 year ago
60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.55.30.235:50010]
via Stack Overflow by Unknown author, 2 years ago
60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.110.80.177:50010]
via Stack Overflow by Unknown author, 2 years ago
60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.100.100.6:50010]
via Stack Overflow by N.M., 10 months ago
60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/123.123.123:50010]
via Stack Overflow by sag, 1 year ago
60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/ec2-instance-private-ip:50010]
via Stack Overflow by npaluskar, 1 year ago
20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=104.239.213.7/104.239.213.7:60640]
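A pattern across these threads (private EC2 IPs, internal 10.x addresses on port 50010) is that the client either cannot reach the DataNode's advertised address or needs a longer connect window. A hedged hdfs-site.xml sketch of the two client-side settings commonly adjusted for this; the 120000 ms value is illustrative, not a recommendation:

```xml
<!-- Client-side hdfs-site.xml (sketch, assuming HDFS 2.x-era key names) -->
<configuration>
  <property>
    <!-- Socket connect/read timeout to DataNodes; default is 60000 ms,
         matching the "60000 millis timeout" in the traces above. -->
    <name>dfs.client.socket-timeout</name>
    <value>120000</value>
  </property>
  <property>
    <!-- Connect via the DataNode's hostname instead of its reported IP;
         often the fix when DataNodes advertise private/unroutable IPs. -->
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
</configuration>
```

Raising the timeout only helps if the DataNode is reachable but slow; if the address itself is unroutable (as with a private EC2 IP seen from outside the VPC), the hostname setting or a network-level fix is needed.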
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.55.30.235:50010]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
	at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
	at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:777)
	at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:694)
	at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:618)
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
	at java.io.DataInputStream.readInt(Unknown Source)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.setTrailerIfPresent(ProtobufLogReader.java:186)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:155)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:106)
	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:69)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:126)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:89)
	at org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createReader(HLogFactory.java:77)
	at org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:68)
	at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:503)
	at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:309)
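The top frame, `NetUtils.connect`, explains the exact wording of the message: Hadoop does a non-blocking `SocketChannel.connect` and then waits a bounded time for the channel to become connectable, so "connection-pending" means the TCP handshake never completed (typically a dropped/filtered route to port 50010). A minimal plain-JDK sketch of that pattern, not Hadoop's actual implementation; the class name and 2-second timeout are illustrative, and 192.0.2.1 is a reserved TEST-NET address used here so the connect stays pending:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class ConnectWithTimeout {
    // Non-blocking connect plus a bounded wait for OP_CONNECT readiness,
    // in the style of org.apache.hadoop.net.NetUtils.connect (sketch).
    static SocketChannel connect(InetSocketAddress addr, long timeoutMs) throws IOException {
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);
        if (!ch.connect(addr)) {
            try (Selector sel = Selector.open()) {
                ch.register(sel, SelectionKey.OP_CONNECT);
                // select() returning 0 means the handshake never completed
                // within the window: the channel is still "connection-pending".
                if (sel.select(timeoutMs) == 0) {
                    ch.close();
                    throw new IOException(timeoutMs
                        + " millis timeout while waiting for channel to be ready for connect. ch : "
                        + ch);
                }
            }
            ch.finishConnect();
        }
        return ch;
    }

    public static void main(String[] args) {
        // Unroutable TEST-NET-1 address: packets are dropped, so the
        // connect attempt stays pending until our timeout fires.
        InetSocketAddress dataNode = new InetSocketAddress("192.0.2.1", 50010);
        try {
            connect(dataNode, 2000);
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Run against an unreachable address, the sketch reproduces the same failure shape as the trace above: no response ever arrives, the selector wait expires, and the pending channel is closed with a timeout exception.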