java.lang.RuntimeException

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK], DatanodeInfoWithStorage[10.88.131.235:50010,DS-b5dea108-94a8-4232-a849-eba697a4a3ab,DISK]], original=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK], DatanodeInfoWithStorage[10.88.131.235:50010,DS-b5dea108-94a8-4232-a849-eba697a4a3ab,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

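The message points at the HDFS client setting 'dfs.client.block.write.replace-datanode-on-failure.policy'. Below is a minimal, hypothetical sketch of how that property could be set programmatically on a Hadoop client; the property keys and the NEVER/DEFAULT/ALWAYS values are standard Hadoop client settings, but the namenode URI, file path, and class name are placeholders, and whether NEVER is appropriate depends on the cluster (it is mainly a workaround for clusters with only two or three datanodes). In this particular trace the HDFS client is the Alluxio master writing its journal, so in practice the property would have to reach the configuration Alluxio loads for its under store (for example via the hdfs-site.xml on the Alluxio master's classpath) rather than application code.

    import java.io.OutputStream;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplaceDatanodeOnFailureExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Keep pipeline-failure handling enabled, but never try to add a
            // replacement datanode: the client keeps writing with the surviving
            // nodes instead of aborting. Accepted policy values are DEFAULT,
            // NEVER and ALWAYS.
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

            // Placeholder namenode URI and path -- replace with real cluster values.
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
                 OutputStream out = fs.create(new Path("/tmp/replace-policy-test"))) {
                out.write("pipeline recovery test".getBytes(StandardCharsets.UTF_8));
            }
        }
    }

The same two keys can equally be set in the client-side hdfs-site.xml; the programmatic form is shown only because it makes explicit which Configuration object the write path actually uses.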
Solutions on the web

  • via Google Groups by Antonio Si, 2 months ago
    java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK
  • via gmane.org by Unknown author, 1 year ago
    java.io.IOException: All datanodes DatanodeInfoWithStorage[10.240.187.182:50010,DS-8c63ac70-2f98-4084-91ee-a847b4f48ce2,DISK] are bad. Aborting...
  • via Stack Overflow by Robin Singh, 1 year ago
    org.terracotta.toolkit.ToolkitRuntimeException: net.sf.ehcache.config.InvalidConfigurationException: The disk path for this cache manager is the default path. You must define a specific unique disk path for this manager in order to use restartable caches.
  • Stack trace

    java.lang.RuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK], DatanodeInfoWithStorage[10.88.131.235:50010,DS-b5dea108-94a8-4232-a849-eba697a4a3ab,DISK]], original=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK], DatanodeInfoWithStorage[10.88.131.235:50010,DS-b5dea108-94a8-4232-a849-eba697a4a3ab,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at alluxio.master.AbstractMaster.waitForJournalFlush(AbstractMaster.java:248)
        at alluxio.master.file.FileSystemMaster.createDirectory(FileSystemMaster.java:1247)
        at alluxio.master.file.FileSystemMasterClientServiceHandler$2.call(FileSystemMasterClientServiceHandler.java:89)
        at alluxio.master.file.FileSystemMasterClientServiceHandler$2.call(FileSystemMasterClientServiceHandler.java:86)
        at alluxio.RpcUtils.call(RpcUtils.java:61)
        at alluxio.master.file.FileSystemMasterClientServiceHandler.createDirectory(FileSystemMasterClientServiceHandler.java:86)
        at alluxio.thrift.FileSystemMasterClientService$Processor$createDirectory.getResult(FileSystemMasterClientService.java:1360)
        at alluxio.thrift.FileSystemMasterClientService$Processor$createDirectory.getResult(FileSystemMasterClientService.java:1344)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK], DatanodeInfoWithStorage[10.88.131.235:50010,DS-b5dea108-94a8-4232-a849-eba697a4a3ab,DISK]], original=[DatanodeInfoWithStorage[10.88.131.233:50010,DS-5080e110-5907-4e31-84f2-d7308e722562,DISK], DatanodeInfoWithStorage[10.88.131.235:50010,DS-b5dea108-94a8-4232-a849-eba697a4a3ab,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:929)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:984)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1131)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
