
There's probably a problem with your dependencies: check that you've included "hadoop-yarn-common", and that your Hadoop version matches the version of MapReduce you're using.
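If you build with Maven, the dependency check above might look like the sketch below. The version shown is only an example; align it with the Hadoop release your cluster actually runs, and keep all Hadoop artifacts on the same version.

```xml
<!-- Example only: replace 2.7.3 with your cluster's Hadoop release.
     Mixing versions across hadoop-* artifacts is a common cause of
     startup failures like the one in this trace. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-common</artifactId>
  <version>2.7.3</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.7.3</version>
</dependency>
```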


  • HBase User - Remote java client hang
  • Apache Spark User List - spark multi-node cluster
    • Hadoop home directory does not exist, is not a directory, or is not an absolute path.
        at org.apache.hadoop.util.Shell.checkHadoopHome()
        at org.apache.hadoop.util.Shell.<clinit>()
        at org.apache.hadoop.util.StringUtils.<clinit>()
        ...
        at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>()
        at org.apache.hadoop.fs.FileSystem$Cache.getUnique()
        at org.apache.hadoop.fs.FileSystem.newInstance()
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance()
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance()
        at java.lang.reflect.Constructor.newInstance()
        at io.confluent.connect.hdfs.DataWriter.<init>()
        at io.confluent.connect.hdfs.HdfsSinkTask.start()
        at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart()
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute()
        at org.apache.kafka.connect.runtime.WorkerTask.doRun()
        ...
        at java.util.concurrent.ThreadPoolExecutor.runWorker()
        ...
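The top frame, Shell.checkHadoopHome, throws this message when neither the hadoop.home.dir system property nor the HADOOP_HOME environment variable points to an existing absolute path. A minimal sketch of the fix, assuming a typical installation path (/usr/local/hadoop is an example; use your real Hadoop directory):

```shell
# Example path only -- point this at your actual Hadoop installation.
export HADOOP_HOME=/usr/local/hadoop

# checkHadoopHome rejects relative paths and missing directories, so
# sanity-check the value before starting Kafka Connect:
case "$HADOOP_HOME" in
  /*) echo "HADOOP_HOME is absolute: $HADOOP_HOME" ;;
  *)  echo "error: HADOOP_HOME must be an absolute path" >&2 ;;
esac
[ -d "$HADOOP_HOME" ] || echo "warning: $HADOOP_HOME is not a directory" >&2
```

Alternatively, the same path can be passed to the JVM as `-Dhadoop.home.dir=...`, which Shell checks before falling back to HADOOP_HOME.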
