java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...

Spring JIRA | Thomas Risberg | 3 years ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1.

    Configure hdfs-site.xml with the following properties:
    {code}
    <property>
      <name>dfs.socket.timeout</name>
      <value>20000</value>
    </property>
    <property>
      <name>dfs.datanode.socket.write.timeout</name>
      <value>20000</value>
    </property>
    {code}
    Then write to a file, stalling the writer for longer than 20000 ms so the pipeline times out; you should see something like this in the datanode logs (a client-side sketch that reproduces this scenario follows the solution list below):
    {code}
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
    2014-10-13 14:49:33,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1412236653-127.0.0.1-1404394045731:blk_1073751606_10782 received exception java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
    2014-10-13 14:49:33,324 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: carbon:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.0.110:56526 dest: /192.168.0.111:50010
    java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.0.111:50010 remote=/192.168.0.110:56526]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:435)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:693)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:569)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:701)
    {code}
    On the client side you should see something like this:
    {code}
    java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
    14:49:59,813 ERROR taskExecutor-1 output.TextFileWriter - error closing
    java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1008)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
    {code}

    Spring JIRA | 3 years ago | Thomas Risberg
    java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...
  2.

    Map Reduce Error

    apache.org | 1 year ago
    java.io.IOException: All datanodes 10.1.21.12:50010 are bad. Aborting...
  3.

    Error while copying a file from local to hdfs in cloudlab

    simplilearn.com | 8 months ago
    java.io.IOException: Got error, status message , ack with firstBadLink as 139.162.22.151:50010
  4.

    Hadoop bad connect ack exception

    Stack Overflow | 2 years ago | Istvan
    java.io.IOException: Bad connect ack with firstBadLink as 10.90.80.44:50010
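
    A minimal client-side sketch of the reproduction scenario from solution 1 above: it stalls an open HDFS write stream past the configured 20-second timeouts and then resumes, which should surface "All datanodes ... are bad. Aborting..." when the stream is closed. This is an illustration under stated assumptions, not code from the original report: the file path is hypothetical, and a running cluster reachable through the default Configuration is assumed.
    {code}
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DatanodeTimeoutRepro {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Mirrors the hdfs-site.xml settings from solution 1; setting them
            // there as described is the reference setup, this client-side copy
            // is an assumption for self-containedness.
            conf.set("dfs.socket.timeout", "20000");
            conf.set("dfs.datanode.socket.write.timeout", "20000");

            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/tmp/timeout-repro.txt"); // hypothetical path

            try (FSDataOutputStream out = fs.create(path, true)) {
                out.writeBytes("first packet\n");
                out.hsync(); // push the first packet through the write pipeline

                // Stall past dfs.datanode.socket.write.timeout so the datanode
                // gives up on the idle pipeline connection.
                Thread.sleep(30_000L);

                out.writeBytes("second packet\n");
            } // close() attempts pipeline recovery and aborts with the IOException
        }
    }
    {code}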

    Root Cause Analysis

    java.io.IOException: All datanodes 192.168.0.111:50010 are bad. Aborting...

      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1008)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
    (3 frames, all in Apache Hadoop HDFS)
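
    The trace shows the failure surfacing from DFSOutputStream's DataStreamer during pipeline recovery, which typically runs when the stream is flushed or closed. A minimal defensive sketch, assuming the caller can simply re-create and rewrite the file on failure (the class and method names below are hypothetical, not part of any Hadoop API):
    {code}
    import java.io.IOException;

    import org.apache.hadoop.fs.FSDataOutputStream;

    public final class SafeHdfsClose {

        // Closes an HDFS output stream, reporting failure instead of throwing.
        // Returns false when pipeline recovery fails, e.g. with
        // "All datanodes ... are bad. Aborting...".
        public static boolean tryClose(FSDataOutputStream out) {
            try {
                out.close();
                return true; // pipeline acknowledged all packets
            } catch (IOException e) {
                // The DataStreamer could not rebuild the pipeline; the last
                // unacknowledged packets may not be durable, so callers should
                // re-create and rewrite the file rather than trust partial data.
                return false;
            }
        }
    }
    {code}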