java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])

Solutions on the web

via apache.org by Unknown author, 1 year ago
Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])
via Apache's JIRA Issue Tracker by Zhanwei Wang, 1 year ago
Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])
via rssing.com by Unknown author, 1 year ago
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[127.0.0.1:50010], original=[127.0.0.1:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
via apache.org by Unknown author, 2 years ago
…replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
via wordpress.com by Unknown author, 2 years ago
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.22.7.81:50010], original=[10.22.7.81:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[127.0.0.1:51791], original=[127.0.0.1:51791])
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
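
The fix the message hints at is an HDFS client setting. During write-pipeline recovery (the append path shown in the trace above), the client tries to replace a failed datanode with a fresh one; on a single-node or pseudo-distributed cluster (note the 127.0.0.1 addresses) there is no spare datanode, so recovery fails with this IOException. Below is a minimal sketch of how a client might relax the policy before appending, assuming fs.defaultFS already points at the cluster; the path /tmp/example.log is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendWithoutDatanodeReplacement {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Never try to swap a new datanode into a failing pipeline;
            // appropriate for clusters with three or fewer datanodes.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                     "NEVER");

            // Or turn the replacement feature off entirely:
            // conf.setBoolean(
            //     "dfs.client.block.write.replace-datanode-on-failure.enable",
            //     false);

            FileSystem fs = FileSystem.get(conf);
            // Placeholder path, for illustration only.
            try (FSDataOutputStream out = fs.append(new Path("/tmp/example.log"))) {
                out.writeBytes("append succeeds without a replacement datanode\n");
            }
        }
    }

The same two keys can also be set cluster-wide in hdfs-site.xml. NEVER (or disabling the feature) is the usual choice for dev and single-node setups; on a production cluster with many datanodes, leaving the DEFAULT policy in place preserves durability during pipeline recovery.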
