java.lang.RuntimeException

tip

Check for syntax problems: if you're using the XML entities &lt; and &gt;, try changing them to literal < and >, since there's no need to escape these special characters outside of XML.

  • Failed to submit a job:

    [root@192 share]# hadoop jar /usr/lib/hadoop/hadoop-examples-1.0.1.jar terasort -Dmapred.reduce.tasks=96 /user/root/terasort-input /user/root/terasort-output
    12/08/24 02:11:11 INFO terasort.TeraSort: starting
    12/08/24 02:11:12 INFO mapred.FileInputFormat: Total input paths to process : 240
    12/08/24 02:11:14 INFO util.NativeCodeLoader: Loaded the native-hadoop library
    12/08/24 02:11:14 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
    12/08/24 02:11:14 INFO compress.CodecPool: Got brand-new compressor
    Making 96 from 100000 records
    Step size is 1041.6666
    12/08/24 02:11:15 INFO mapred.FileInputFormat: Total input paths to process : 240
    12/08/24 02:11:17 INFO mapred.JobClient: Running job: job_201208231334_0004
    12/08/24 02:11:18 INFO mapred.JobClient: map 0% reduce 0%
    12/08/24 02:11:18 INFO mapred.JobClient: Job complete: job_201208231334_0004
    12/08/24 02:11:18 INFO mapred.JobClient: Counters: 0
    12/08/24 02:11:18 INFO mapred.JobClient: Job Failed: Job initialization failed:
    java.lang.RuntimeException: javax.xml.transform.TransformerException: java.io.IOException: No space left on device
        at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1313)
        at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1757)
        at org.apache.hadoop.mapred.JobInProgress$3.run(JobInProgress.java:681)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:678)
        at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4207)
        at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
    Caused by: javax.xml.transform.TransformerException: java.io.IOException: No space left on device
        at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:719)

    Checking the master node shows that /mnt/sdc1 is full:

    [root@192 hadoop]# df -m
    Filesystem  1M-blocks  Used  Available  Use%  Mounted on
    /dev/sda3        2728  1756        832   68%  /
    /dev/sda1         122    12        104   11%  /boot
    tmpfs            1927     0       1927    0%  /dev/shm
    /dev/sdc1        6298  6298          0  100%  /mnt/sdc1
    /dev/sdd1        6298    17       5962    1%  /mnt/sdd1
    /dev/sde1        6298    22       5957    1%  /mnt/sde1
    /dev/sdf1        6298    22       5957    1%  /mnt/sdf1
    /dev/sdg1        6298    22       5957    1%  /mnt/sdg1
    /dev/sdh1        6298    22       5957    1%  /mnt/sdh1
    /dev/sdi1        6298    22       5957    1%  /mnt/sdi1

    /mnt/sdc1 is Hadoop's log disk:

    [root@192 log]# ls -la /var/log/hadoop
    lrwxrwxrwx 1 root root 14 Aug 21 02:13 /var/log/hadoop -> /mnt/sdc1/logs
    [root@192 hadoop]# du -sm /var/log/hadoop/*
    0     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.log
    265   /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.log.2012-08-21
    1482  /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.log.2012-08-22
    811   /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.log.2012-08-23
    0     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.out
    0     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.out.1
    1     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.out.2
    0     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.out.3
    0     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.out.4
    0     /var/log/hadoop/hadoop-hdfs-jobtracker-192.168.1.99.out.5
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.log
    1120  /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.log.2012-08-21
    1395  /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.log.2012-08-22
    610   /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.log.2012-08-23
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.out
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.out.1
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.out.2
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.out.3
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.out.4
    0     /var/log/hadoop/hadoop-hdfs-namenode-192.168.1.99.out.5
    596   /var/log/hadoop/history
    1     /var/log/hadoop/job_201208210219_0001_conf.xml
    1     /var/log/hadoop/job_201208210219_0002_conf.xml
    1     /var/log/hadoop/job_201208210357_0001_conf.xml
    1     /var/log/hadoop/job_201208210357_0002_conf.xml
    1     /var/log/hadoop/job_201208210357_0005_conf.xml
    1     /var/log/hadoop/job_201208210357_0006_conf.xml
    1     /var/log/hadoop/job_201208210357_0007_conf.xml
    1     /var/log/hadoop/job_201208210357_0008_conf.xml
    1     /var/log/hadoop/job_201208210357_0009_conf.xml
    1     /var/log/hadoop/job_201208210357_0010_conf.xml
    1     /var/log/hadoop/job_201208210357_0011_conf.xml
    1     /var/log/hadoop/job_201208211653_0002_conf.xml
    1     /var/log/hadoop/job_201208211653_0003_conf.xml
    1     /var/log/hadoop/job_201208211653_0004_conf.xml
    1     /var/log/hadoop/job_201208211653_0005_conf.xml
    1     /var/log/hadoop/job_201208211653_0006_conf.xml
    1     /var/log/hadoop/job_201208211653_0007_conf.xml
    1     /var/log/hadoop/job_201208211653_0008_conf.xml
    1     /var/log/hadoop/job_201208211653_0009_conf.xml
    1     /var/log/hadoop/job_201208220259_0007_conf.xml
    1     /var/log/hadoop/job_201208220259_0008_conf.xml
    1     /var/log/hadoop/job_201208220259_0009_conf.xml
    1     /var/log/hadoop/job_201208220259_0010_conf.xml
    1     /var/log/hadoop/job_201208220259_0011_conf.xml
    1     /var/log/hadoop/job_201208231023_0001_conf.xml
    0     /var/log/hadoop/job_201208231334_0001_conf.xml
    0     /var/log/hadoop/job_201208231334_0002_conf.xml
    0     /var/log/hadoop/job_201208231334_0004_conf.xml

    The log files are huge; three of them exceed 1 GB. We need a better way to manage them.
    via Binbin Zhao
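Given the du listing above, pruning the rotated daily logs before /mnt/sdc1 fills would avoid this failure. A minimal shell sketch; the function name `prune_hadoop_logs` and the 3-day retention are illustrative assumptions, not from the original report:

```shell
# Sketch: free space on the Hadoop log disk by pruning rotated daily logs.
# prune_hadoop_logs and the 3-day default retention are assumptions;
# adjust the path and name pattern to your cluster's layout.
prune_hadoop_logs() {
    local log_dir="$1" days="${2:-3}"
    # Show the largest entries in MB, mirroring the du -sm check above.
    du -sm "$log_dir"/* 2>/dev/null | sort -rn | head -5
    # Delete rotated daily logs (*.log.YYYY-MM-DD style) older than $days days.
    find "$log_dir" -maxdepth 1 -type f -name '*.log.*' -mtime +"$days" -print -delete
}

# Example, using the path from the report above:
# prune_hadoop_logs /var/log/hadoop 3
```

Note that log4j's DailyRollingFileAppender, which produces these dated files, never deletes old ones itself, so some external cleanup (cron, logrotate, or a script like this) is needed either way.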
  • Reference is not allowed in prolog
    via Stack Overflow by sunleo
  • not generating pdf from http website url
    via Unknown author
    • java.lang.RuntimeException: javax.xml.transform.TransformerException: java.io.IOException: No space left on device
          at org.apache.hadoop.conf.Configuration.writeXml(Configuration.java:1313)
          at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1757)
          at org.apache.hadoop.mapred.JobInProgress$3.run(JobInProgress.java:681)
          at java.security.AccessController.doPrivileged(Native Method)
          at javax.security.auth.Subject.doAs(Subject.java:396)
          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
          at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:678)
          at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4207)
          at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)
      Caused by: javax.xml.transform.TransformerException: java.io.IOException: No space left on device
          at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:719)
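The trace shows the failure happens while JobHistory writes the job's conf XML into the log directory at submit time, so a pre-flight check of free space on the log partition can fail fast instead of dying mid-initialization. A hedged sketch; `check_free_mb` and the 100 MB default threshold are assumptions, and `df --output` requires GNU coreutils:

```shell
# Sketch: refuse to proceed when the log partition is nearly full.
# check_free_mb and the 100 MB default are illustrative assumptions.
check_free_mb() {
    local mount="$1" need_mb="${2:-100}"
    local avail
    # Available space in MB on the filesystem holding $mount (GNU df).
    avail=$(df -m --output=avail "$mount" | tail -1 | tr -d ' ')
    if [ "$avail" -lt "$need_mb" ]; then
        echo "only ${avail}MB free on $mount; job submission will likely fail" >&2
        return 1
    fi
}

# Example: check_free_mb /mnt/sdc1 100 && hadoop jar ...
```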

    Users with the same issue: Tahir, four unknown visitors, and 6 more bugmates.