java.io.IOException: All datanodes 192.168.109.61:50010 are bad. Aborting...

Spring JIRA | Sina Sojoodi | 3 years ago
Your exception is missing from the Samebug knowledge base. Here are the best solutions we found on the Internet.
  1.

    Environment:
    - Hadoop installation: PHD Service for PCF (PHD1.1, based on Apache Hadoop 2.0.5: 2.0.5-alpha-gphd-2.1.0.0) running on vCHS
    - Spring XD running in singlenode mode (version 1.0.0.RC1) on a vCHS VM

    Steps to reproduce:
    1. Set up a stream in the Spring XD shell: "http --port=9000 | hdfs --rollover=10M --idleTimeout=60000" --deploy
    2. Hit port 9000 every second with 1-10KB of JSON data (see the load-generator sketch at the end of this item)
    3. Observe the temp file being created in HDFS under /xd/<stream name>
    4. Run `hadoop fs -tail -f <file>` to confirm that data is being written to HDFS

    Expected result:
    - The HDFS sink continues to operate and eventually rolls over at 10MB

    Actual result:
    - After about 2 minutes of successful HDFS writes, the HDFS sink crashes and starts throwing exceptions (see the full log attached): "java.io.IOException: All datanodes 192.168.109.61:50010 are bad. Aborting..."
    - The temp file is never closed, even after the stream is undeployed or destroyed.

    Details from our investigation that may be useful:
    - I start both the shell and the singlenode runner with --hadoopDistro phd1; I also configured the Hadoop namenode correctly in the XD shell.
    - "http <options> | file <options>" works as expected; so does "http <options> | log".
    - "time | hdfs" does not show the same crash; so far, only the http source combined with the hdfs sink exhibits the problem.
    - Putting a 4-10MB file into HDFS via `hadoop fs -put` from Spring XD worked fine, so it is not a disk limitation.
    - This could be related to the PHD service running on vCHS, since support for that configuration is fairly new, but it is only reproducible (consistently) with Spring XD's "http | hdfs" stream.

    Spring JIRA | 3 years ago | Sina Sojoodi
    java.io.IOException: All datanodes 192.168.109.61:50010 are bad. Aborting...
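
    Step 2 above is easy to script. Below is a minimal load-generator sketch in Java, assuming the stream from step 1 is deployed and the XD http source is listening on localhost:9000; the host, payload size, and request count are illustrative, while the port and the one-request-per-second rate come from the report.

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class XdHttpLoad {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://localhost:9000");    // Spring XD http source from step 1
                byte[] payload = buildJson(4 * 1024);          // ~4 KB, within the report's 1-10 KB range
                for (int i = 0; i < 300; i++) {                // ~5 minutes; the crash appeared after ~2
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setRequestMethod("POST");
                    conn.setRequestProperty("Content-Type", "application/json");
                    conn.setDoOutput(true);
                    try (OutputStream out = conn.getOutputStream()) {
                        out.write(payload);
                    }
                    conn.getResponseCode();                    // drain the response before the next request
                    conn.disconnect();
                    Thread.sleep(1000L);                       // one request per second, as in the report
                }
            }

            // Builds a throwaway JSON document of roughly the requested size.
            private static byte[] buildJson(int approxBytes) {
                StringBuilder sb = new StringBuilder("{\"data\":\"");
                while (sb.length() < approxBytes) sb.append('x');
                sb.append("\"}");
                return sb.toString().getBytes(StandardCharsets.UTF_8);
            }
        }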
  2. Hadoop bad connect ack exception

    Stack Overflow | 2 years ago | Istvan
    java.io.IOException: Bad connect ack with firstBadLink as 10.90.80.32:50010
  3. Error while copying a file from local to hdfs in cloudlab | Simplilearn - Discussions on Certifications

    simplilearn.com | 9 months ago
    java.io.IOException: Got error, status message , ack with firstBadLink as 139.162.22.151:50010
  4. HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    Stack Overflow | 5 years ago | user1429825
    java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
  5. Exception in createBlockOutputStream when copying data into HDFS

    Stack Overflow | 3 years ago | Naveen R
    java.io.IOException: Bad connect ack with firstBadLink as 192.168.226.136:50010


    Root Cause Analysis

    java.io.IOException: All datanodes 192.168.109.61:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:941)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:756)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:425)
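
    The trace shows the HDFS client's DataStreamer aborting in setupPipelineForAppendOrRecovery: after a datanode error it tries to rebuild the write pipeline, finds no healthy datanode left to write to, and gives up. On clusters with only one or two datanodes, a commonly suggested mitigation for this error is to relax the client's replace-datanode-on-failure policy, since there is no spare node to swap into the pipeline. Below is a minimal sketch against the Hadoop 2.x client API; the two property names are standard HDFS client settings, while the namenode URI is a placeholder.

        import java.net.URI;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;

        public class HdfsClientConfig {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // NEVER: on datanode failure, keep writing through the remaining
                // pipeline nodes instead of failing because no replacement exists.
                conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
                conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
                // Placeholder namenode URI; substitute the cluster's real fs.defaultFS.
                try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
                    System.out.println("Connected to " + fs.getUri());
                }
            }
        }

    Note that NEVER trades durability for availability: if the remaining pipeline node also fails, the write is lost, so treat it as a workaround for one- or two-node clusters rather than a production default.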