java.io.IOException: Error Recovery for block blk_-3348459061636680999_2713 failed because recovery from primary datanode 192.168.1.88:50010 failed 6 times. Pipeline was 192.168.1.88:50010. Aborting...

SpringSource Issue Tracker | Binbin Zhao | 5 years ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1.

    In a 12-worker-node cluster deployed by Serengeti, I tried to run teragen, but it failed to complete. The command was:

        hadoop jar /usr/lib/hadoop/hadoop-examples-1.0.1.jar teragen -Dmapred.map.tasks=240 10000000000 /user/joe/terasort-input

    It produced many errors like:

        12/08/16 03:29:28 INFO mapred.JobClient: Task Id : attempt_201208150845_0006_m_000191_0, Status : FAILED
        java.io.IOException: Error Recovery for block blk_-3348459061636680999_2713 failed because recovery from primary datanode 192.168.1.88:50010 failed 6 times. Pipeline was 192.168.1.88:50010. Aborting...
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3154)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2586)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2790)
        attempt_201208150845_0006_m_000191_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).

    Then the job failed:

        12/08/16 03:36:12 INFO mapred.JobClient: Virtual memory (bytes) snapshot=160994107392
        12/08/16 03:36:12 INFO mapred.JobClient: Map input bytes=6874999890
        12/08/16 03:36:12 INFO mapred.JobClient: Map output records=6874999890
        12/08/16 03:36:12 INFO mapred.JobClient: SPLIT_RAW_BYTES=14437
        12/08/16 03:36:12 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201208150845_0006_m_000022
        java.io.IOException: Job failed!
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
            at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:352)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:357)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
            at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
            at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

    randomtextwriter has the same issue. Binglin helped take a look, and we suspect the per-process open-file limit may be the root cause. I am raising the limit and re-running the test; any further suggestions are appreciated.

    SpringSource Issue Tracker | 5 years ago | Binbin Zhao
    java.io.IOException: Error Recovery for block blk_-3348459061636680999_2713 failed because recovery from primary datanode 192.168.1.88:50010 failed 6 times. Pipeline was 192.168.1.88:50010. Aborting...
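
    If the open-file limit is indeed the bottleneck, one quick check is to compare the JVM's open and maximum file descriptor counts on an affected node. A minimal sketch, assuming a HotSpot JDK on a Unix-like OS (the com.sun.management API is JDK-specific, and the class name FdLimitCheck is ours):

        import java.lang.management.ManagementFactory;
        import java.lang.management.OperatingSystemMXBean;

        // Minimal sketch: report current vs. maximum file descriptors for this
        // JVM process. On a HotSpot JDK on a Unix-like OS, the platform MXBean
        // implements com.sun.management.UnixOperatingSystemMXBean.
        public class FdLimitCheck {
            public static void main(String[] args) {
                OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
                if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                    com.sun.management.UnixOperatingSystemMXBean unix =
                            (com.sun.management.UnixOperatingSystemMXBean) os;
                    System.out.println("Open file descriptors: " + unix.getOpenFileDescriptorCount());
                    System.out.println("Max file descriptors:  " + unix.getMaxFileDescriptorCount());
                } else {
                    System.out.println("File descriptor counts not available on this platform.");
                }
            }
        }

    If the maximum turns out to be the common 1024 default, raising the nofile limit for the users running the datanode and tasktracker (and restarting the daemons) is the usual remedy; Hadoop 1.x clusters under heavy write load are also commonly advised to raise dfs.datanode.max.xcievers in hdfs-site.xml.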
  2.

    Hadoop: All datanodes 127.0.0.1:50010 are bad. Aborting

    Stack Overflow | 2 years ago | nostromo
    java.io.IOException: All datanodes 127.0.0.1:50010 are bad. Aborting...
  3.

    HBase: RegionServer shuts down when a MapReduce job runs

    Stack Overflow | 2 years ago | Yongjoo Lim
    java.io.IOException: Error Recovery for block blk_-6821037447429065566_798145 failed because recovery from primary datanode 125.209.204.59:50010 failed 6 times. Pipeline was 125.209.204.59:50010. Aborting...
  4.

    Error while copying a file from the local Linux filesystem to HDFS

    Stack Overflow | 3 years ago | Prashant
    java.io.IOException: Could not get block locations. Source file "/tmp/hadoop-user1/mapred/system/jobtracker.info" - Aborting...
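
    When a plain copy into HDFS fails this way, it can help to reproduce it with the FileSystem API, the programmatic counterpart of hadoop fs -put, to separate client configuration problems from cluster problems. A minimal sketch against the Hadoop 1.x API; both paths are placeholders:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        // Minimal sketch: copy a local file into HDFS. Configuration() picks up
        // core-site.xml/hdfs-site.xml from the classpath; paths are placeholders.
        public class CopyToHdfs {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);
                fs.copyFromLocalFile(new Path("/tmp/local-file.txt"),
                                     new Path("/user/joe/local-file.txt"));
                fs.close();
            }
        }

    If this also fails with "Could not get block locations", the client is reaching the namenode but the namenode cannot place the block on any datanode, which points at datanode health, disk space, or network reachability rather than the copy command itself.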


    Root Cause Analysis

    java.io.IOException: Error Recovery for block blk_-3348459061636680999_2713 failed because recovery from primary datanode 192.168.1.88:50010 failed 6 times. Pipeline was 192.168.1.88:50010. Aborting...

        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3154)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2586)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2790)

    The exception originates in Apache Hadoop HDFS, in the client-side write path: the DataStreamer thread (DFSClient$DFSOutputStream$DataStreamer.run) aborts after processDatanodeError exhausts its recovery attempts against the primary datanode.
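
    All of the reports above end in the same place: the client-side DataStreamer gives up after pipeline recovery against a lone datanode fails repeatedly. A pipeline with a single entry leaves the client no node to fail over to, so one mitigation, under the assumption that the file was written with replication factor 1, is to write with more replicas. A minimal sketch (path and values are placeholders):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        // Minimal sketch: write a file with an explicit replication factor so the
        // write pipeline contains several datanodes and can survive the loss of one.
        public class ReplicatedWrite {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.setInt("dfs.replication", 3); // a one-node pipeline suggests replication 1
                FileSystem fs = FileSystem.get(conf);
                FSDataOutputStream out = fs.create(new Path("/user/joe/replicated-file.txt"));
                out.writeBytes("hello hdfs\n");
                out.close();
                fs.close();
            }
        }

    This only gives the client room to fail over; the underlying causes reported above (exhausted file descriptors, unhealthy datanodes) still need to be fixed on the nodes themselves.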