java.io.FileNotFoundException: /mnt/sde1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/data/TupleRawComparator.class (No space left on device)

SpringSource Issue Tracker | Yifeng Xiao | 4 years ago
Reproduce steps:

1. Create a cluster that combines HBase and MapReduce.
{code}
cluster name: HBaseCustomizedf5, distro: hw, status: RUNNING

GROUP NAME  ROLES                                 INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
master      [hadoop_namenode, hadoop_jobtracker]  1         1    1024     SHARED  10

NODE NAME                   HOST                                    IP            STATUS
HBaseCustomizedf5-master-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.83  Service Ready

GROUP NAME   ROLES           INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
hbaseMaster  [hbase_master]  2         1    1024     SHARED  10

NODE NAME                        HOST                                    IP             STATUS
HBaseCustomizedf5-hbaseMaster-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.148  Service Ready
HBaseCustomizedf5-hbaseMaster-1  sin2-pekaurora-bdcqe003.eng.vmware.com  10.111.57.34   Service Ready

GROUP NAME  ROLES        INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
zookeeper   [zookeeper]  3         1    1024     SHARED  10

NODE NAME                      HOST                                    IP            STATUS
HBaseCustomizedf5-zookeeper-1  sin2-pekaurora-bdcqe003.eng.vmware.com  10.111.57.80  Service Ready
HBaseCustomizedf5-zookeeper-2  sin2-pekaurora-bdcqe006.eng.vmware.com  10.111.57.75  Service Ready
HBaseCustomizedf5-zookeeper-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.28  Service Ready

GROUP NAME             ROLES                                  INSTANCE  CPU  MEM(MB)  TYPE   SIZE(GB)
HRegionServerWithHDFS  [hadoop_datanode, hbase_regionserver]  3         1    1024     LOCAL  10

NODE NAME                                  HOST                                    IP             STATUS
HBaseCustomizedf5-HRegionServerWithHDFS-2  sin2-pekaurora-bdcqe006.eng.vmware.com  10.111.57.64   Service Ready
HBaseCustomizedf5-HRegionServerWithHDFS-1  sin2-pekaurora-bdcqe005.eng.vmware.com  10.111.57.42   Service Ready
HBaseCustomizedf5-HRegionServerWithHDFS-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.101  Service Ready

GROUP NAME  ROLES                 INSTANCE  CPU  MEM(MB)  TYPE   SIZE(GB)
compute     [hadoop_tasktracker]  3         1    1024     LOCAL  10

NODE NAME                    HOST                                    IP             STATUS
HBaseCustomizedf5-compute-0  sin2-pekaurora-bdcqe004.eng.vmware.com  10.111.57.193  Service Ready
HBaseCustomizedf5-compute-1  sin2-pekaurora-bdcqe005.eng.vmware.com  10.111.57.153  Service Ready
HBaseCustomizedf5-compute-2  sin2-pekaurora-bdcqe006.eng.vmware.com  10.111.57.99   Service Ready

GROUP NAME   ROLES                                                  INSTANCE  CPU  MEM(MB)  TYPE    SIZE(GB)
hbaseClient  [hadoop_client, pig, hive, hive_server, hbase_client]  1         1    1024     SHARED  10

NODE NAME                        HOST                                    IP             STATUS
HBaseCustomizedf5-hbaseClient-0  sin2-pekaurora-bdcqe003.eng.vmware.com  10.111.57.185  Service Ready
{code}

2. Run the pig script in the client VM.
{code}
[serengeti@localhost 1]$ ssh joe@10.111.57.185
Warning: Permanently added '10.111.57.185' (RSA) to the list of known hosts.
joe@10.111.57.185's password: Permission denied, please try again.
joe@10.111.57.185's password:
Last login: Fri Feb 8 09:29:40 2013 from pek2-auro-office-dhcp1.eng.vmware.com
[joe@10 ~]$ ll
total 16
-rw-r--r-- 1 joe joe   97 Feb 8 08:08 create_test_9934ee.hbase
drwxr-xr-x 2 joe joe 4096 Feb 8 09:31 pig
-rw-rw-r-- 1 joe joe 1029 Feb 8 08:11 pig_1360311079941.log
-rw-rw-r-- 1 joe joe 1029 Feb 8 09:30 pig_1360315825437.log
[joe@10 ~]$ pig -f pig/script.pig
2013-02-08 09:41:37,054 [main] INFO org.apache.pig.Main - Logging error messages to: /var/lib/joe/pig_1360316497051.log
2013-02-08 09:41:37,301 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://10.111.57.83:8020
2013-02-08 09:41:37,754 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: 10.111.57.83:8021
2013-02-08 09:41:38,224 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2013-02-08 09:41:38,340 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-02-08 09:41:38,363 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2013-02-08 09:41:38,363 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2013-02-08 09:41:38,620 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-02-08 09:41:38,649 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-02-08 09:41:38,652 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job4840688475063295283.jar
2013-02-08 09:41:42,560 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job4840688475063295283.jar created
2013-02-08 09:41:42,591 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2013-02-08 09:41:42,646 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
****hdfs://10.111.57.83:8020/user/joe/password/pwInput
2013-02-08 09:41:42,953 [Thread-5] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-02-08 09:41:42,953 [Thread-5] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-02-08 09:41:42,962 [Thread-5] INFO org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
2013-02-08 09:41:42,962 [Thread-5] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library not loaded
2013-02-08 09:41:42,964 [Thread-5] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-02-08 09:41:43,152 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-02-08 09:41:43,807 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201302080803_0007
2013-02-08 09:41:43,807 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://10.111.57.83:50030/jobdetails.jsp?jobid=job_201302080803_0007
2013-02-08 09:41:58,416 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201302080803_0007 has failed! Stop running all dependent jobs
2013-02-08 09:41:58,416 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-02-08 09:41:58,420 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-02-08 09:41:58,422 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion  PigVersion  UserId  StartedAt            FinishedAt           Features
1.0.2          0.9.2       joe     2013-02-08 09:41:38  2013-02-08 09:41:58  UNKNOWN

Failed!

Failed Jobs:
JobId                  Alias  Feature   Message                                                                                                  Outputs
job_201302080803_0007  A,B    MAP_ONLY  Message: Job failed! Error - JobCleanup Task Failure, Task: task_201302080803_0007_m_000001  /user/joe/password/pwOutput,

Input(s):
Failed to read data from "/user/joe/password/pwInput"

Output(s):
Failed to produce result in "/user/joe/password/pwOutput"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_201302080803_0007

2013-02-08 09:41:58,422 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2013-02-08 09:41:58,441 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
Details at logfile: /var/lib/joe/pig_1360316497051.log
{code}

3. See the pig log:
{code}
[joe@10 ~]$ cat /var/lib/joe/pig_1360316497051.log
Pig Stack Trace
---------------
ERROR 2244: Job failed, hadoop does not return any error message

org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:139)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:192)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:164)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
        at org.apache.pig.Main.run(Main.java:435)
        at org.apache.pig.Main.main(Main.java:111)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
================================================================================
{code}

4. See the MR job log:
{code}
Error initializing attempt_201302080803_0004_m_000002_1:
java.io.FileNotFoundException: /mnt/sde1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/data/TupleRawComparator.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:662)

Error initializing attempt_201302080803_0004_m_000002_2:
java.io.FileNotFoundException: /mnt/sdf1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/impl/builtin/FindQuantiles.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:662)

Error initializing attempt_201302080803_0004_m_000002_3:
java.io.FileNotFoundException: /mnt/sdf1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/impl/builtin/FindQuantiles.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:662)
{code}

However, there is still plenty of free space in HDFS; see the attached picture.
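Note that the "No space left on device" errors above come from the TaskTracker's *local* disks (/mnt/sde1, /mnt/sdf1), which the job localizer writes to when unpacking the job jar, not from HDFS, so free DFS capacity does not help. One quick cross-check is to inspect usable space on each configured local dir. A minimal sketch; the directory list here is an illustrative assumption, and in practice it should be the `mapred.local.dir` entries from mapred-site.xml:

```java
import java.io.File;

public class LocalDirCheck {
    public static void main(String[] args) {
        // Hypothetical defaults; pass the real mapred.local.dir entries
        // (e.g. /mnt/sde1/hadoop/mapred/local) as command-line arguments.
        String[] localDirs = args.length > 0 ? args : new String[] { "/tmp" };
        for (String d : localDirs) {
            File dir = new File(d);
            // getUsableSpace() returns 0 for a path that does not exist,
            // so this also flags a misconfigured local dir.
            long freeMb = dir.getUsableSpace() / (1024 * 1024);
            System.out.println(d + ": " + freeMb + " MB usable");
        }
    }
}
```

Run on each compute node; any dir reporting near-zero usable space explains the localization failure regardless of what `hadoop dfsadmin -report` shows.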



Root Cause Analysis

  1. java.io.FileNotFoundException

    /mnt/sde1/hadoop/mapred/local/taskTracker/joe/jobcache/job_201302080803_0004/jars/org/apache/pig/data/TupleRawComparator.class (No space left on device)

    at java.io.FileOutputStream.open()
  2. Java RT
    FileOutputStream.<init>
    1. java.io.FileOutputStream.open(Native Method)
    2. java.io.FileOutputStream.<init>(FileOutputStream.java:194)
    3. java.io.FileOutputStream.<init>(FileOutputStream.java:145)
    3 frames
  3. Hadoop
    RunJar.unJar
    1. org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
    1 frame
  4. Hadoop
    TaskTracker$4.run
    1. org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
    2. org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
    3. org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
    4. org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
    5. org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
    5 frames
  5. Java RT
    Subject.doAs
    1. java.security.AccessController.doPrivileged(Native Method)
    2. javax.security.auth.Subject.doAs(Subject.java:396)
    2 frames
  6. Hadoop
    UserGroupInformation.doAs
    1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    1 frame
  7. Hadoop
    TaskTracker$5.run
    1. org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
    2. org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
    3. org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
    3 frames
  8. Java RT
    Thread.run
    1. java.lang.Thread.run(Thread.java:662)
    1 frame
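A side note on why a full disk surfaces under the name FileNotFoundException: the native `FileOutputStream.open` in frame 1 reports any OS-level open failure (ENOSPC here, but also ENOENT or EACCES) by throwing FileNotFoundException with the OS error text appended in parentheses. A minimal sketch of that wrapping, triggered with a nonexistent directory since a genuinely full disk is hard to reproduce; the path is illustrative:

```java
import java.io.FileNotFoundException;
import java.io.FileOutputStream;

public class OpenFailureDemo {
    public static void main(String[] args) {
        try {
            // Fails in the native open() because the parent dir is missing;
            // a full disk (ENOSPC) fails at the same point and is wrapped
            // the same way, with the OS message in parentheses.
            new FileOutputStream("/no/such/dir/demo.txt");
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the exception class alone is misleading; the parenthesized message ("No space left on device") is the part that identifies the real cause.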