java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])

Apache's JIRA Issue Tracker | Zhanwei Wang | 5 years ago
  1.

    Create a single-datanode cluster, disable permissions, enable webhdfs, start HDFS, and run the test script. Expected result: a file named "test" is created and its content is "testtest". Actual result: HDFS throws an exception on the second append operation.

    {code}
    ./test.sh
    {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]"}}
    {code}

    Log in the datanode:

    {code}
    2012-04-02 14:34:21,058 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
    java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
    2012-04-02 14:34:21,059 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /test
    java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:50010], original=[127.0.0.1:50010]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:834)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:930)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
    {code}

    test.sh:

    {code}
    #!/bin/sh
    echo "test" > test.txt
    curl -L -X PUT "http://localhost:50070/webhdfs/v1/test?op=CREATE"
    curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
    curl -L -X POST -T test.txt "http://localhost:50070/webhdfs/v1/test?op=APPEND"
    {code}

    Apache's JIRA Issue Tracker | 5 years ago | Zhanwei Wang
    java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])
  2.

    [HDFS-3179] Improve the error message: DataStreamer throw an exception, "nodes.length != original.length + 1" on single datanode cluster - ASF JIRA

    apache.org | 1 year ago
    java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current= , original= )
  3.

    Hadoop bad connect ack exception

    Stack Overflow | 2 years ago | Istvan
    java.io.IOException: Bad connect ack with firstBadLink as 10.90.80.32:50010
  4.

    Error while copying a file from local to hdfs in cloudlab | Simplilearn - Discussions on Certifications

    simplilearn.com | 9 months ago
    java.io.IOException: Got error, status message , ack with firstBadLink as 139.162.22.151:50010
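All of these reports reduce to the same condition: pipeline recovery on a cluster with fewer datanodes than the replication factor, where the client can never find a replacement node. The exception message itself names the setting that controls this behavior. Below is a minimal hdfs-site.xml sketch of the usual client-side workaround for small test clusters; the policy property is quoted from the exception message, and dfs.client.block.write.replace-datanode-on-failure.enable is assumed here as its standard companion switch.

{code}
<!-- hdfs-site.xml (client side): sketch for clusters with fewer
     datanodes than the replication factor. Policy NEVER tells the
     client to skip datanode replacement during pipeline recovery. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
{code}

On a production cluster with three or more datanodes, the DEFAULT policy should stay in place: NEVER trades write durability for availability and can leave blocks under-replicated after a failure.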

    Root Cause Analysis

    1. java.io.IOException

      Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])

      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode()
    2. Apache Hadoop HDFS
      DFSOutputStream$DataStreamer.run
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
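The same policy can also be set programmatically on the client before appending. A hedged Java sketch, assuming a single-datanode HDFS at a default localhost URI; the fs.defaultFS value and the /test path are illustrative, not taken from the reports above.

{code}
// Sketch: append to a file on a single-datanode cluster without
// triggering the datanode-replacement step of pipeline recovery.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendOnSmallCluster {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:8020"); // assumed URI
        // With a single datanode there is no replacement to find;
        // NEVER makes the client skip the replacement attempt.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                 "NEVER");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.append(new Path("/test"))) {
            out.writeBytes("test\n"); // the second append is what failed above
        }
    }
}
{code}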