java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos

Stack Overflow | ktaube | 9 months ago
Here are the best solutions we found on the Internet.

  1. When reading text file from file system, Spark still tries to connect to HDFS (see the sketch after this list)
     Stack Overflow | 9 months ago | ktaube
     java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos

  2. GitHub comment 613#239890055
     GitHub | 7 months ago | martinstuder
     java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos

  3. Do I need to install hdfs to run my Spark jobs
     GitHub | 1 year ago | yogeshnath
     java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos

  4. GitHub comment 250#219496747
     GitHub | 10 months ago | radek1st
     java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos

  5. Hbase - hadoop handling DNS blips
     Google Groups | 3 years ago | Arun Mishra
     java.io.IOException: cannot get log writer
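
The first match describes the usual cause: a path without a scheme is resolved against fs.defaultFS, which on this cluster names namenode1.hdfs.mesos, so even a read of a local file first tries to reach HDFS. Below is a minimal sketch of that workaround, assuming the file exists at the same local path on every node; the object name and path are hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    object LocalFileCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("LocalFileCount"))

        // A bare path such as "/tmp/data.txt" would be resolved against
        // fs.defaultFS and trigger the HDFS connection that fails above;
        // the explicit file:// scheme keeps the read on the local filesystem.
        val lines = sc.textFile("file:///tmp/data.txt")
        println(s"line count: ${lines.count()}")

        sc.stop()
      }
    }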

Root Cause Analysis

java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
    at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:124)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:74)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:65)
    at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:152)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:427)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
    at SimpleApp$.main(SimpleApp.scala:15)
    at SimpleApp.main(SimpleApp.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:786)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:123)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)