org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message

SpringSource Issue Tracker | Yifeng Xiao | 4 years ago

  1. Reproduce steps:

    1. Create a cluster which is a combination of HBase and MapReduce.
{code}
cluster name: HBaseCustomizedf5, distro: hw, status: RUNNING

GROUP NAME             ROLES                                                   INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
master                 [hadoop_namenode, hadoop_jobtracker]                    1         1    1024     SHARED  10

NODE NAME                   HOST                                    IP            STATUS
HBaseCustomizedf5-master-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.83  Service Ready

GROUP NAME             ROLES                                                   INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
hbaseMaster            [hbase_master]                                          2         1    1024     SHARED  10

NODE NAME                        HOST                                    IP             STATUS
HBaseCustomizedf5-hbaseMaster-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.148  Service Ready
HBaseCustomizedf5-hbaseMaster-1  sin2-pekaurora-bdcqe003.eng.vmware.com  10.111.57.34   Service Ready

GROUP NAME             ROLES                                                   INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
zookeeper              [zookeeper]                                             3         1    1024     SHARED  10

NODE NAME                      HOST                                    IP            STATUS
HBaseCustomizedf5-zookeeper-1  sin2-pekaurora-bdcqe003.eng.vmware.com  10.111.57.80  Service Ready
HBaseCustomizedf5-zookeeper-2  sin2-pekaurora-bdcqe006.eng.vmware.com  10.111.57.75  Service Ready
HBaseCustomizedf5-zookeeper-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.28  Service Ready

GROUP NAME             ROLES                                                   INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
HRegionServerWithHDFS  [hadoop_datanode, hbase_regionserver]                   3         1    1024     LOCAL   10

NODE NAME                                  HOST                                    IP             STATUS
HBaseCustomizedf5-HRegionServerWithHDFS-2  sin2-pekaurora-bdcqe006.eng.vmware.com  10.111.57.64   Service Ready
HBaseCustomizedf5-HRegionServerWithHDFS-1  sin2-pekaurora-bdcqe005.eng.vmware.com  10.111.57.42   Service Ready
HBaseCustomizedf5-HRegionServerWithHDFS-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.101  Service Ready

GROUP NAME             ROLES                                                   INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
compute                [hadoop_tasktracker]                                    3         1    1024     LOCAL   10

NODE NAME                    HOST                                    IP             STATUS
HBaseCustomizedf5-compute-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.193  Service Ready
HBaseCustomizedf5-compute-1  sin2-pekaurora-bdcqe005.eng.vmware.com  10.111.57.153  Service Ready
HBaseCustomizedf5-compute-2  sin2-pekaurora-bdcqe006.eng.vmware.com  10.111.57.99   Service Ready

GROUP NAME             ROLES                                                   INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
hbaseClient            [hadoop_client, pig, hive, hive_server, hbase_client]   1         1    1024     SHARED  10

NODE NAME                        HOST                                    IP             STATUS
HBaseCustomizedf5-hbaseClient-0  sin2-pekaurora-bdcqe003.eng.vmware.com  10.111.57.185  Service Ready
{code}
    2. Run the pig script in the client VM.
{code}
[serengeti@localhost 1]$ ssh joe@10.111.57.185
Warning: Permanently added '10.111.57.185' (RSA) to the list of known hosts.
joe@10.111.57.185's password:
Permission denied, please try again.
joe@10.111.57.185's password:
Last login: Fri Feb 8 09:29:40 2013 from pek2-auro-office-dhcp1.eng.vmware.com
[joe@10 ~]$ ll
total 16
-rw-r--r-- 1 joe joe   97 Feb 8 08:08 create_test_9934ee.hbase
drwxr-xr-x 2 joe joe 4096 Feb 8 09:31 pig
-rw-rw-r-- 1 joe joe 1029 Feb 8 08:11 pig_1360311079941.log
-rw-rw-r-- 1 joe joe 1029 Feb 8 09:30 pig_1360315825437.log
[joe@10 ~]$ pig -f pig/script.pig
2013-02-08 09:41:37,054 [main] INFO org.apache.pig.Main - Logging error messages to: /var/lib/joe/pig_1360316497051.log
2013-02-08 09:41:37,301 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://10.111.57.83:8020
2013-02-08 09:41:37,754 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: 10.111.57.83:8021
2013-02-08 09:41:38,224 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2013-02-08 09:41:38,340 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-02-08 09:41:38,363 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2013-02-08 09:41:38,363 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2013-02-08 09:41:38,620 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-02-08 09:41:38,649 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-02-08 09:41:38,652 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job4840688475063295283.jar
2013-02-08 09:41:42,560 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job4840688475063295283.jar created
2013-02-08 09:41:42,591 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2013-02-08 09:41:42,646 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
****hdfs://10.111.57.83:8020/user/joe/password/pwInput
2013-02-08 09:41:42,953 [Thread-5] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-02-08 09:41:42,953 [Thread-5] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-02-08 09:41:42,962 [Thread-5] INFO org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
2013-02-08 09:41:42,962 [Thread-5] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library not loaded
2013-02-08 09:41:42,964 [Thread-5] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-02-08 09:41:43,152 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-02-08 09:41:43,807 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201302080803_0007
2013-02-08 09:41:43,807 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://10.111.57.83:50030/jobdetails.jsp?jobid=job_201302080803_0007
2013-02-08 09:41:58,416 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201302080803_0007 has failed! Stop running all dependent jobs
2013-02-08 09:41:58,416 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-02-08 09:41:58,420 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-02-08 09:41:58,422 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion  PigVersion  UserId  StartedAt            FinishedAt           Features
1.0.2          0.9.2       joe     2013-02-08 09:41:38  2013-02-08 09:41:58  UNKNOWN

Failed!

Failed Jobs:
JobId                  Alias  Feature   Message  Outputs
job_201302080803_0007  A,B    MAP_ONLY  Message: Job failed! Error - JobCleanup Task Failure, Task: task_201302080803_0007_m_000001  /user/joe/password/pwOutput,

Input(s):
Failed to read data from "/user/joe/password/pwInput"

Output(s):
Failed to produce result in "/user/joe/password/pwOutput"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_201302080803_0007

2013-02-08 09:41:58,422 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2013-02-08 09:41:58,441 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
Details at logfile: /var/lib/joe/pig_1360316497051.log
{code}
    3. See the pig log:
{code}
[joe@10 ~]$ cat /var/lib/joe/pig_1360316497051.log
Pig Stack Trace
---------------
ERROR 2244: Job failed, hadoop does not return any error message

org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
    at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:139)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:192)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:164)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
    at org.apache.pig.Main.run(Main.java:435)
    at org.apache.pig.Main.main(Main.java:111)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
================================================================================
{code}
    4. See the MR job log:
{code}
Error initializing attempt_201302080803_0004_m_000002_1:
java.io.FileNotFoundException: /mnt/sde1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/data/TupleRawComparator.class (No space left on device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
    at java.lang.Thread.run(Thread.java:662)

Error initializing attempt_201302080803_0004_m_000002_2:
java.io.FileNotFoundException: /mnt/sdf1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/impl/builtin/FindQuantiles.class (No space left on device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
    at java.lang.Thread.run(Thread.java:662)

Error initializing attempt_201302080803_0004_m_000002_3:
java.io.FileNotFoundException: /mnt/sdf1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/impl/builtin/FindQuantiles.class (No space left on device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
    at java.lang.Thread.run(Thread.java:662)
{code}
    There is still plenty of free space in HDFS, though; see the attached picture.
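
    Note: despite the generic ERROR 2244 on the Pig client, the MR job log above names the real cause: the TaskTracker could not unpack the job jar because the local volumes behind mapred.local.dir (/mnt/sde1, /mnt/sdf1, ...) were full, and that space is independent of HDFS capacity. A minimal diagnostic sketch along those lines, assuming the Hadoop 1.x layout shown in the log (mount points, paths and property values here are illustrative and may differ on other clusters):
{code}
# On the client: HDFS capacity itself looks healthy, matching the attached picture.
hadoop dfsadmin -report | head -n 20

# On each tasktracker node (the "compute" group above): check the local disks that
# back mapred.local.dir -- these hold the unpacked job jar and intermediate task data.
df -h /mnt/sd*1
du -sh /mnt/sd*1/hadoop/mapred/local/taskTracker/*

# Optionally have the TaskTracker stop accepting tasks before a local volume fills up
# completely (mapred-site.xml, value in bytes; property exists in Hadoop 1.x):
#   <property>
#     <name>mapred.local.dir.minspacestart</name>
#     <value>1073741824</value>
#   </property>
{code}
    In the cluster listing above, the tasktrackers (the compute group) have their own 10 GB LOCAL disks, separate from the datanode disks, which would explain why HDFS reports plenty of free space while the task-local volumes run out.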

    SpringSource Issue Tracker | 4 years ago | Yifeng Xiao
    org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
  2. Hadoop Failed to set permissions of path: \tmp\

    Stack Overflow | 3 years ago | Anton Belev
    org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
  3. Unable to run Pig Script in Psudo-distributed mode

    Stack Overflow | 4 years ago | thegreenogre
    org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
  4. RANK operating failing

    Stack Overflow | 2 years ago | Michael
    org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message

    Root Cause Analysis

    1. org.apache.pig.backend.executionengine.ExecException

      ERROR 2244: Job failed, hadoop does not return any error message

      at org.apache.pig.tools.grunt.GruntParser.executeBatch()
    2. org.apache.pig
      Main.main
      1. org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:139)
      2. org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:192)
      3. org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:164)
      4. org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
      5. org.apache.pig.Main.run(Main.java:435)
      6. org.apache.pig.Main.main(Main.java:111)
      6 frames
    3. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      4. java.lang.reflect.Method.invoke(Method.java:597)
      4 frames
    4. Hadoop
      RunJar.main
      1. org.apache.hadoop.util.RunJar.main(RunJar.java:156)
      1 frame
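
    ERROR 2244 only means that Pig could not extract a failure reason from Hadoop, so the trace above shows the Pig client unwinding rather than the root cause; the real error has to be pulled from the cluster side, as the MR job log in the report demonstrates. A hedged sketch of how to do that on a Hadoop 1.x / Pig 0.9 setup like the one above (the job id and JobTracker address are taken from the report; daemon log locations vary by install):
{code}
# Ask the JobTracker about the job Pig submitted.
hadoop job -status job_201302080803_0007

# The URL Pig printed lists the failed task attempts and their diagnostics:
#   http://10.111.57.83:50030/jobdetails.jsp?jobid=job_201302080803_0007

# Failures during job localization (as in this report) typically never reach the task's
# own userlogs, because no task JVM was started; look in the TaskTracker daemon log on
# the compute node instead.
grep -A 20 "Error initializing attempt_" /var/log/hadoop/hadoop-*-tasktracker-*.log
{code}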