Solutions on the web

via Google Groups by Shannon Ma, 1 year ago
Error closing hdfs://VM03cent7:8020/tmp/TRAFFIC_DATA_P/0/log
via GitHub by sboettcher, 8 months ago
Error closing hdfs://hdfs-namenode:8020/logs/application_uptime/0/log
via GitHub by sboettcher, 8 months ago
Error closing hdfs://hdfs-namenode:8020/logs/android_empatica_e4_acceleration/0/log
via GitHub by lakeofsand
, 1 year ago
Error closing hdfs://192.168.101.55:8020/logs/*****/1/log
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.200.5.162:50010], original=[10.200.5.162:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1040)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1106)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1253)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:594)
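The exception message itself names the knob that controls this behavior: with the DEFAULT policy, the HDFS client tries to find a replacement datanode when one fails mid-pipeline, and the write aborts if no other datanode is available (typical on clusters with three or fewer DataNodes). A minimal client-side hdfs-site.xml sketch, assuming the standard dfs.client.block.write.replace-datanode-on-failure.* properties from Hadoop's documentation (only the .policy property appears in the trace above; the .enable property is an assumption):

```xml
<!-- hdfs-site.xml (client side) — a sketch, not a definitive fix -->
<configuration>
  <!-- Named in the exception message. DEFAULT attempts to replace the
       failed datanode; NEVER keeps writing on the surviving pipeline
       instead of aborting. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>
  <!-- Assumed companion switch (default true). Setting it to false
       disables the replacement feature entirely, in which case the
       policy above is ignored. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
  </property>
</configuration>
```

NEVER trades durability for availability: the block finishes with fewer replicas than requested, so it is generally advised only for small clusters where no replacement datanode can exist anyway.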