java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]], original=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Server Fault | Robert Fraser | 7 months ago
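
The usual trigger is running with fewer live datanodes than the write pipeline expects (here a single datanode), so the DEFAULT replacement policy has nothing to substitute when a pipeline node fails during an append or a pipeline recovery. A minimal client-side sketch, assuming fs.defaultFS already points at the cluster and /tmp/example.log is a placeholder path: set the policy to NEVER so the client keeps writing with the remaining datanode instead of aborting. The same property can equally be set in the client's hdfs-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendWithSingleDatanode {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // On a cluster smaller than the replication factor, the DEFAULT
            // policy cannot find a replacement datanode and appends fail with
            // the IOException above. NEVER skips the replacement attempt and
            // continues on the surviving pipeline.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                     "NEVER");

            FileSystem fs = FileSystem.get(conf);
            // Placeholder path; any append on a 1-datanode cluster reproduces
            // the failure under the DEFAULT policy.
            try (FSDataOutputStream out = fs.append(new Path("/tmp/example.log"))) {
                out.writeBytes("appended line\n");
            }
        }
    }

Note that NEVER is only advisable on very small clusters (roughly three datanodes or fewer); on larger clusters it silently gives up write-pipeline redundancy after a failure.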
This exception is not yet in the Samebug knowledge base; the best matches found on the web are listed below.
  1. Running HDFS with only 1 data node - appending fails

     Server Fault | 7 months ago | Robert Fraser
     java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]], original=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

  2. Error while copying a file from local to hdfs in cloudlab

     simplilearn.com | 8 months ago
     java.io.IOException: Got error, status message , ack with firstBadLink as 139.162.22.151:50010

  3. Hadoop bad connect ack exception

     Stack Overflow | 2 years ago | Istvan
     java.io.IOException: Bad connect ack with firstBadLink as 10.90.80.44:50010
  4. HDFS some datanodes of cluster are suddenly disconnected while reducers are running

     Stack Overflow | 5 years ago | user1429825
     java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010

  5. Exception in createBlockOutputStream when copying data into HDFS

     Stack Overflow | 3 years ago | Naveen R
     java.io.IOException: Bad connect ack with firstBadLink as 192.168.226.136:50010

    Root Cause Analysis

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]], original=[DatanodeInfoWithStorage[172.18.0.2:50010,DS-f97943bf-2cad-45e5-ae40-9ba947e54404,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:929)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
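
The trace shows where the append dies: DataStreamer.findNewDatanode() throws once addDatanode2ExistingPipeline() asks for a replacement and no other live datanode exists. Newer Hadoop 2.x clients offer a middle ground between DEFAULT and NEVER: best-effort replacement, which still tries to add a new datanode but continues on the shrunken pipeline if that attempt fails. A sketch under the same assumptions as above (fs.defaultFS configured, placeholder path):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BestEffortAppend {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Keep the DEFAULT policy, but if no replacement datanode can be
            // found, carry on writing with the remaining pipeline instead of
            // throwing from findNewDatanode().
            conf.setBoolean(
                "dfs.client.block.write.replace-datanode-on-failure.best-effort",
                true);

            FileSystem fs = FileSystem.get(conf);
            // Placeholder path, as in the sketch above.
            try (FSDataOutputStream out = fs.append(new Path("/logs/app.log"))) {
                out.writeBytes("another appended line\n");
            }
        }
    }

For the "Bad connect ack with firstBadLink" variants listed above, the reported address is the first pipeline datanode that failed to acknowledge the connection; checking firewall rules, the datanode's data-transfer port (50010 by default), and disk health on that node is usually more productive than changing the client policy.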