Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace, including the exception message.

Recommended solutions based on your search

Samebug tips

  1. ,
    Expert tip

    There is probably a problem with your dependencies: check whether you included "hadoop-yarn-common", and verify that your Hadoop version matches the version of MapReduce you are using.

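Before digging into dependency versions, it can help to rule out the environment itself. The sketch below roughly approximates the validation that `org.apache.hadoop.util.Shell.checkHadoopHome()` performs (the property falls back to the `HADOOP_HOME` environment variable, and the value must be an absolute path to an existing directory); the class and method names here are illustrative, not part of Hadoop.

```java
import java.io.File;

public class HadoopHomeCheck {
    // Roughly mirrors what Shell.checkHadoopHome() validates:
    // the value must be an absolute path to an existing directory.
    static String checkHome(String home) {
        if (home == null) {
            return "HADOOP_HOME or hadoop.home.dir are not set.";
        }
        File dir = new File(home);
        if (dir.isAbsolute() && dir.isDirectory()) {
            return "looks valid: " + home;
        }
        return "not an absolute path to an existing directory: " + home;
    }

    public static void main(String[] args) {
        // hadoop.home.dir takes precedence; $HADOOP_HOME is the fallback.
        String home = System.getProperty("hadoop.home.dir",
                                         System.getenv("HADOOP_HOME"));
        System.out.println(checkHome(home));
    }
}
```

Running this in the same JVM configuration as your failing job quickly shows whether the path Hadoop will see is missing, relative, or pointing at a non-directory.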
Solutions on the web

via Google Groups by Robin Moffatt, 1 year ago
Hadoop home directory does not exist, is not a directory, or is not an absolute path.
via Google Groups by Fredrik, 10 months ago
HADOOP_HOME or hadoop.home.dir are not set.
via apache.org by Unknown author, 2 years ago
via GitHub by sboettcher, 7 months ago
HADOOP_HOME or hadoop.home.dir are not set.
via Google Groups by joanne, 1 year ago
HADOOP_HOME or hadoop.home.dir are not set.
java.io.IOException: Hadoop home directory  does not exist, is not a directory, or is not an absolute path.
    at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:312)
    at org.apache.hadoop.util.Shell.(Shell.java:327)
    at org.apache.hadoop.util.StringUtils.(StringUtils.java:79)
    at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
    at org.apache.hadoop.security.Groups.(Groups.java:86)
    at org.apache.hadoop.security.Groups.(Groups.java:66)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2753)
    at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2617)
    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:417)
    at io.confluent.connect.hdfs.storage.HdfsStorage.(HdfsStorage.java:39)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at io.confluent.connect.hdfs.storage.StorageFactory.createStorage(StorageFactory.java:29)
    at io.confluent.connect.hdfs.DataWriter.(DataWriter.java:168)
    at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:64)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:207)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:139)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
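The trace shows the failure happening inside `Shell.checkHadoopHome` during class initialization, before the Confluent HDFS sink task can start. A minimal sketch of the usual workaround, assuming a Hadoop distribution unpacked at `/opt/hadoop` (a placeholder path, not from this page):

```java
public class SetHadoopHome {
    public static void main(String[] args) {
        // Set hadoop.home.dir before the first org.apache.hadoop class loads;
        // Shell reads it in a static initializer, so setting it later has no effect.
        // "/opt/hadoop" is a placeholder -- substitute your absolute install path.
        System.setProperty("hadoop.home.dir", "/opt/hadoop");
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

When running Kafka Connect from the shell instead, exporting `HADOOP_HOME` in the worker's environment before startup achieves the same thing, since the exception message names either the environment variable or the system property as acceptable.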