java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]

There are no Samebug tips for this exception yet. Do you have an idea how to solve this issue? A short tip would help other users who hit it.
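  • One tip, offered as a suggestion rather than a confirmed fix: this error means a datanode gave up waiting for data on the write pipeline, so raising the client and datanode socket timeouts above the default can help slow or bursty writers. A minimal Java sketch, assuming a Hadoop 2.x client; the 120000 ms value is only illustrative, not a recommended setting:
    {code}
    // Hedged sketch: raise HDFS socket timeouts programmatically before opening the FileSystem.
    // The 120000 ms values are illustrative; tune them for your cluster.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class TimeoutTip {
        public static FileSystem openWithLongerTimeouts() throws Exception {
            Configuration conf = new Configuration();
            // Client-side socket read timeout (the deprecated dfs.socket.timeout key maps to this in Hadoop 2.x)
            conf.setInt("dfs.client.socket-timeout", 120000);
            // Datanode write timeout used in the write pipeline
            conf.setInt("dfs.datanode.socket.write.timeout", 120000);
            return FileSystem.get(conf);
        }
    }
    {code}
    The same settings can also go in hdfs-site.xml, as the reports below do (there with deliberately low values to reproduce the failure).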

  • Configure hdfs-site.xml with the following properties:
    {code}
    <property>
      <name>dfs.socket.timeout</name>
      <value>20000</value>
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>20000</value>
    </property>
    {code}
    Then write to a file with an operation that takes longer than 20000 ms and let it time out. You should see something like this in the datanode logs:
    {code}
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782 received exception java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
    2014-10-13 14:49:33,324 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: carbon:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.0.110:56526 dest: /192.168.0.111:50010
    java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:701)
    {code}
    On the client side we should see something like this:
    {code}
    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
    14:49:59,813 ERROR taskExecutor-1 output.TextFileWriter - error closing
    java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1008)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
    {code}
    via Thomas Risberg
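    To reproduce the stall described above without Spring XD, a minimal client sketch (assuming the hdfs-site.xml timeouts above are in effect; the /tmp/timeout-test path and the 30 s pause are hypothetical, chosen only to exceed the 20000 ms limit):
    {code}
    // Hedged repro sketch: open a write pipeline, then stall longer than the 20000 ms
    // datanode timeout configured above. Path and sleep duration are illustrative.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class StalledWriteRepro {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (FSDataOutputStream out = fs.create(new Path("/tmp/timeout-test"))) {
                out.write("first packet\n".getBytes("UTF-8"));
                out.hflush();             // push the first packet down the pipeline
                Thread.sleep(30000L);     // stall past dfs.socket.timeout (20000 ms)
                out.write("late packet\n".getBytes("UTF-8"));
            }                             // close() may now fail with "All datanodes ... are bad"
        }
    }
    {code}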
  • WRITE_BLOCK Error in HDFS logs - Hortonworks
    via Unknown author
  • Environment:
    - Hadoop installation: PHD Service for PCF (PHD 1.1, based on Apache Hadoop 2.0.5: 2.0.5-alpha-gphd-2.1.0.0) running on vCHS
    - Spring XD running in singlenode mode (version 1.0.0.RC1) on a vCHS VM
    Steps to reproduce:
    1. Set up a stream in the Spring XD shell: "http --port=9000 | hdfs --rollover=10M --idleTimeout=60000" --deploy
    2. Hit port 9000 every second with 1-10KB of JSON data (a feeder sketch is shown after this item)
    3. Observe the temp file being created in HDFS under /xd/<stream name>
    4. Run `hadoop fs tail <file> --follow` to see that data is being written to HDFS
    Expected result:
    - The HDFS sink continues to operate and eventually rolls over at 10MB.
    Actual:
    - After about 2 minutes of successful HDFS writes, the HDFS sink crashes and starts throwing exceptions (see full log attached): "java.io.IOException: All datanodes 192.168.109.61:50010 are bad. Aborting..."
    - The temp file is never closed, even after the stream is undeployed or destroyed.
    Here are some details of our investigation that may be useful:
    - I start both the shell and the singlenode runner with --hadoopDistro phd1; I also configured the Hadoop fs namenode correctly in the XD shell.
    - "http <options> | file <options>" works as expected; so does "http <options> | log".
    - "time | hdfs" does not show the same crash. So far only the http source combined with the hdfs sink shows this problem.
    - Putting a 4-10MB file into HDFS via `hadoop fs put` from Spring XD worked fine, so it is not a disk limitation.
    - This could be related to the PHD service running on vCHS, since supporting this configuration is fairly new. But it is only reproducible (consistently) with Spring XD's "http | hdfs" stream.
    via Sina Sojoodi
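    A minimal sketch of step 2 above, the feeder that hits the http source; localhost, port 9000 and the payload size are assumptions taken from the stream definition, not part of the original report:
    {code}
    // Hedged sketch: POST a small JSON payload to the XD http source once per second.
    // Stop with Ctrl-C; values are illustrative.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class JsonFeeder {
        public static void main(String[] args) throws Exception {
            byte[] payload = buildJson(2048); // roughly 2 KB, within the 1-10 KB range described
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:9000").openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(payload);
                }
                conn.getResponseCode(); // drain the response so the request completes
                conn.disconnect();
                Thread.sleep(1000L);
            }
        }

        // Build a filler JSON document of roughly the requested size.
        private static byte[] buildJson(int approxBytes) {
            StringBuilder sb = new StringBuilder("{\"data\":\"");
            while (sb.length() < approxBytes) {
                sb.append('x');
            }
            return sb.append("\"}").toString().getBytes();
        }
    }
    {code}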
    • java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
          at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
          at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
          at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
          at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
          at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
          at java.io.DataInputStream.read(DataInputStream.java:149)
          at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
          at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
          at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
          at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
          at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
          at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
          at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
          at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
          at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
          at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
          at java.lang.Thread.run(Thread.java:701)
