All datanodes are bad. Aborting...

Spring JIRA | Sina Sojoodi | 2 years ago
  1.

    Environment:
    - Hadoop installation: PHD Service for PCF (PHD 1.1, based on Apache Hadoop 2.0.5: 2.0.5-alpha-gphd- ), running on vCHS
    - Spring XD running in singlenode mode (version 1.0.0.RC1) on a vCHS VM

    Steps to reproduce:
    1. Set up a stream in the Spring XD shell: "http --port=9000 | hdfs --rollover=10M --idleTimeout=60000" --deploy
    2. Hit port 9000 every second with 1-10 KB of JSON data (see the load-generator sketch after the list of related reports below)
    3. Observe the temp file being created in HDFS under /xd/<stream name>
    4. Run `hadoop fs tail <file> --follow` to see that data is being written to HDFS

    Expected result:
    - The HDFS sink continues to operate and eventually rolls over at 10 MB.

    Actual result:
    - After about 2 minutes of successful HDFS writes, the HDFS sink crashes and starts throwing exceptions (see full log attached): "All datanodes are bad. Aborting..."
    - The temp file is never closed, even after the stream is undeployed or destroyed.

    Details from our investigation that may be useful:
    - I start both the shell and the singlenode runner with --hadoopDistro phd1; the Hadoop namenode is also configured correctly in the XD shell.
    - "http <options> | file <options>" works as expected; so does "http <options> | log".
    - "time | hdfs" does not show the same crash. So far only the http source combined with the hdfs sink exhibits the problem.
    - Putting a 4-10 MB file into HDFS via the `hadoop fs put` command in Spring XD worked fine, so it is not a disk limitation.
    - This could be related to the PHD service running on vCHS, since support for that configuration is fairly new. But it is only reproducible (consistently) with Spring XD's "http | hdfs" stream.

    Spring JIRA | 2 years ago | Sina Sojoodi | All datanodes are bad. Aborting...
  2.

    Hadoop bad connect ack exception

    Stack Overflow | 2 years ago | Istvan | Bad connect ack with firstBadLink as
  3.

    HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    Stack Overflow | 4 years ago | user1429825 | Bad connect ack with firstBadLink as ***.***.***.148:20010
  4.

    Exception in createBlockOutputStream when copying data into HDFS

    Stack Overflow | 3 years ago | Naveen R | Bad connect ack with firstBadLink as
  5.

    Spark 1.2 cannot connect to HDFS on HDP 2.2

    Stack Overflow | 2 years ago | John | Unable to create new block.
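
    The load in step 2 of the reproduction above can be driven by any HTTP client. The following Java sketch is one way to post roughly 1 KB of JSON to the http source once per second; the localhost target, payload shape, and class name are illustrative assumptions, not part of the original report.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    /** Illustrative load generator: posts ~1 KB of JSON to the Spring XD http source every second. */
    public class HttpSourceLoadGenerator {
        public static void main(String[] args) throws Exception {
            // Assumed target: the XD singlenode host and the port from the stream definition.
            URL url = new URL("http://localhost:9000");

            // Build a ~1 KB JSON payload (the report used 1-10 KB messages).
            StringBuilder json = new StringBuilder("{\"payload\":\"");
            for (int i = 0; i < 1000; i++) json.append('x');
            json.append("\"}");
            byte[] body = json.toString().getBytes(StandardCharsets.UTF_8);

            // The crash was observed after roughly two minutes of writes, so run for five.
            for (int i = 0; i < 300; i++) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body);
                }
                conn.getResponseCode();   // drain the response so the connection is released
                conn.disconnect();
                Thread.sleep(1000L);      // one request per second
            }
        }
    }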

    Root Cause Analysis


    1. java.io.IOException

      All datanodes are bad. Aborting...

      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
    2. Apache Hadoop HDFS
      1. org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
      2. org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError()
      3. org.apache.hadoop.hdfs.DFSOutputStream$
      3 frames
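
    The trace above shows the client-side DataStreamer giving up: processDatanodeError calls setupPipelineForAppendOrRecovery, and when no usable datanode is left in the write pipeline the stream aborts with "All datanodes are bad." A setting that often comes up for small clusters in this situation is the client's datanode-replacement policy. The sketch below only illustrates where those standard HDFS 2.x properties would be set; the namenode URI, class name, and chosen values are assumptions, not a confirmed fix for this report.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Sketch of the HDFS client properties that govern datanode replacement during pipeline recovery. */
    public class HdfsPipelineSettingsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Never try to swap in a replacement datanode when one fails mid-write; the
            // write continues on the remaining pipeline nodes. Commonly discussed for
            // clusters with only a few datanodes (the related on/off switch is
            // dfs.client.block.write.replace-datanode-on-failure.enable).
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

            // Datanode-side capacity (dfs.datanode.max.transfer.threads in hdfs-site.xml)
            // is another usual suspect when every pipeline node is marked bad at once.

            // Assumed namenode URI; substitute the PHD namenode address.
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
            System.out.println("/xd exists: " + fs.exists(new Path("/xd")));
            fs.close();
        }
    }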