java.io.IOException

There are no available Samebug tips for this exception. Do you have an idea how to solve this issue? A short tip would help users who saw this issue last week.

  • Configure hdfs-site.xml with the following properties:
    {code}
    <property>
      <name>dfs.socket.timeout</name>
      <value>20000</value>
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>20000</value>
    </property>
    {code}
    then write to a file, letting the write stall for longer than 20000 ms so it times out. You should see something like this in the datanode logs:
    {code}
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782 received exception java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
    2014-10-13 14:49:33,324 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: carbon:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.0.110:56526 dest: /192.168.0.111:50010
    java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:701)
    {code}
    On the client side we should see something like this:
    {code}
    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
    14:49:59,813 ERROR taskExecutor-1 output.TextFileWriter - error closing
    java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1008)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
    {code}
    (A minimal client-side sketch that reproduces this scenario follows this list.)
    by Thomas Risberg
  • Map Reduce Error
    by Unknown author
  • Hadoop bad connect ack exception
    via Stack Overflow by Istvan
  • Spark 1.2 cannot connect to HDFS on HDP 2.2
    via Stack Overflow by John
    • java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...
          at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1008)
          at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
          at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
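The first tip above gives the reproduction only as configuration plus logs. The following is a minimal client-side sketch of the same scenario, assuming a reachable cluster whose datanodes run with the 20000 ms timeouts from the hdfs-site.xml snippet; the hdfs://namenode:8020 URI, the file path, and the 30-second pause are illustrative placeholders, not part of the original report:

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DatanodeTimeoutRepro {
    public static void main(String[] args) throws Exception {
        // Placeholder namenode URI; the datanodes of this cluster are assumed
        // to use the 20000 ms timeouts from the hdfs-site.xml snippet above.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        FSDataOutputStream out = fs.create(new Path("/tmp/timeout-repro.txt"));
        out.writeBytes("first packet\n");
        out.hflush(); // ship a packet so the datanode write pipeline is live

        // Hold the pipeline idle past dfs.socket.timeout (20000 ms): the
        // datanode waiting for the next packet should then hit the
        // SocketTimeoutException shown in the datanode log above.
        Thread.sleep(30_000);

        try {
            out.writeBytes("written after the timeout\n");
            out.close();
        } catch (IOException e) {
            // With no healthy datanode left to rebuild the pipeline, this is
            // where "All datanodes ... are bad. Aborting..." surfaces.
            e.printStackTrace();
        }
    }
}
{code}

On a cluster with several datanodes the client would typically recover by rebuilding the pipeline with the remaining nodes; the abort in the client log above corresponds to the case where no usable datanode remains.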
