java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.6.25.184:50010,DS-a67e1753-7160-4c87-8a87-330732f6ac30,DISK], DatanodeInfoWithStorage[10.6.25.189:50010,DS-f42cdbb3-7981-4630-8b98-0ac04bdf92a2,DISK]], original=[DatanodeInfoWithStorage[10.6.25.184:50010,DS-a67e1753-7160-4c87-8a87-330732f6ac30,DISK], DatanodeInfoWithStorage[10.6.25.189:50010,DS-f42cdbb3-7981-4630-8b98-0ac04bdf92a2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1162)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1228)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1375)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1119)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:622)
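
As the message itself suggests, the datanode replacement behavior is controlled by the client-side property `dfs.client.block.write.replace-datanode-on-failure.policy`. A minimal, hedged `hdfs-site.xml` sketch on the client side (values are illustrative, not a recommendation for every deployment; `NEVER` disables replacement entirely and is generally only reasonable on very small clusters, e.g. with 2–3 datanodes, where no spare replacement node exists):

```xml
<configuration>
  <!-- Keep pipeline recovery enabled, but change what happens
       when a datanode in the write pipeline fails. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
  </property>
  <!-- DEFAULT tries to find a replacement datanode (the source of
       this IOException when none is available); NEVER continues the
       write with the remaining datanodes instead. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>
</configuration>
```

Note that continuing with fewer datanodes trades availability for reduced durability of the block being written, so on larger clusters it is usually better to investigate why no replacement datanode was available (dead or decommissioned nodes, network partitions, full disks) than to relax the policy.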