java.io.IOException: Max block location exceeded for split: InputFormatClass: org.apache.hadoop.mapred.TextInputFormat splitsize: 21 maxsize: 10

cloudera.com | 4 months ago
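
The "maxsize: 10" in this message appears to correspond to the Hadoop setting mapreduce.job.max.split.locations, which caps how many block locations a single input split may carry and defaults to 10 on releases that still throw here. Below is a minimal, hypothetical sketch of raising that cap on the job configuration before submission; the class name and the value 30 are illustrative choices (anything at or above the reported splitsize of 21 would do), not a confirmed fix for any particular cluster. In a Hive session the analogous step would be "SET mapreduce.job.max.split.locations=30;" before running the query.

    // Hypothetical sketch: raise the split-location cap before submitting a job.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class RaiseSplitLocationLimit {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // The trace reports splitsize: 21 against maxsize: 10, so the cap must be
            // at least 21 for this split to be written; 30 leaves some headroom.
            conf.setInt("mapreduce.job.max.split.locations", 30);

            Job job = Job.getInstance(conf, "example-job");
            // ... configure input/output formats, paths, mapper and reducer as usual ...
        }
    }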
  1. Issues Fixed in CDH 5.0.x

    cloudera.com | 4 months ago
    java.io.IOException: Max block location exceeded for split: InputFormatClass: org.apache.hadoop.mapred.TextInputFormat splitsize: 21 maxsize: 10
  2. Datameer Job Failure in Kerberos Environment - java.io.IOException: Failed to run job : Failed to renew token – Support

    zendesk.com | 5 months ago
    java.lang.RuntimeException: java.lang.RuntimeException: Failed to run cluster job for 'File upload job (123): FileUploadCSV#import(Identity)'
  3. Reading a file using map reduce in hadoop

    Stack Overflow | 6 months ago | Fazlur
    java.io.IOException: Not a file: hdfs://localhost:54310/TcTest/NewTest
  4. Pig script in Hue - START_RETRY status - Cloudera Community

    cloudera.com | 6 months ago
    org.apache.oozie.action.ActionExecutorException: JA009: HTTP status [403], message [Forbidden]

Root Cause Analysis

  1. java.io.IOException

    Max block location exceeded for split: InputFormatClass: org.apache.hadoop.mapred.TextInputFormat splitsize: 21 maxsize: 10

    at org.apache.hadoop.mapreduce.split.JobSplitWriter.writeOldSplits()
  2. Hadoop
    Job$10.run
    1. org.apache.hadoop.mapreduce.split.JobSplitWriter.writeOldSplits(JobSplitWriter.java:162)
    2. org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:87)
    3. org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:540)
    4. org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
    5. org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
    6. org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    7. org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    7 frames
  3. Java RT
    Subject.doAs
    1. java.security.AccessController.doPrivileged(Native Method)
    2. javax.security.auth.Subject.doAs(Subject.java:415)
    2 frames
  4. Hadoop
    UserGroupInformation.doAs
    1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    1 frame
  5. Hadoop
    JobClient$1.run
    1. org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    2. org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    3. org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    3 frames
  6. Java RT
    Subject.doAs
    1. java.security.AccessController.doPrivileged(Native Method)
    2. javax.security.auth.Subject.doAs(Subject.java:415)
    2 frames
  7. Hadoop
    UserGroupInformation.doAs
    1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    1 frame
  8. Hadoop
    JobClient.submitJob
    1. org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    2. org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    2 frames
  9. Hive Query Language
    TaskRunner.runSequential
    1. org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
    2. org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
    3. org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    4. org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    4 frames
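
For orientation, a simplified sketch (not the actual Hadoop source) of the guard implied by frame 1 above: JobSplitWriter compares a split's block-location count against the configured maximum and aborts the submission when it is exceeded, which is where the splitsize/maxsize pair in the message comes from. The class and method names below are illustrative only.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.InputSplit;

    class SplitLocationCheckSketch {
        // Approximation of the check that produces the exception in the trace.
        static void checkLocations(Configuration conf, InputSplit split) throws IOException {
            int maxBlockLocations = conf.getInt("mapreduce.job.max.split.locations", 10);
            String[] locations = split.getLocations();
            if (locations.length > maxBlockLocations) {
                throw new IOException("Max block location exceeded for split: " + split
                    + " splitsize: " + locations.length
                    + " maxsize: " + maxBlockLocations);
            }
        }
    }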