  1. How to fix the UnknownHostException that appears after upgrading Apache Spark to the 1.5 line - Qiita

     qiita.com | 4 months ago
     scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, spark003.example.com): java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
  2. UnknownHostException error when Spark 1.6.0 running on Mesos can't get the Hadoop cluster name

     Stack Overflow | 10 months ago | Fann Wu
     scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hadoopnn1.nogle.com): java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoopcluster1
  3. In Presto I keep getting java.net.UnknownHostException: nameservice1

     Stack Overflow | 4 months ago | Anup Ash
     java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
  4. GitHub comment 974#237806062

     GitHub | 4 months ago | anupash147
     java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
  5. Namenode HA (UnknownHostException: nameservice1)

     Stack Overflow | 2 years ago | roy
     java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
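
    The common thread in these reports: the client treats the logical HDFS nameservice (`nameservice1`, `hadoopcluster1`) as a literal hostname and fails to resolve it, which happens when the HA settings mapping the nameservice to its NameNodes are missing from the client-side configuration. A minimal sketch of the relevant `hdfs-site.xml` entries, assuming a nameservice called `nameservice1` with two NameNodes (hostnames and ports are illustrative placeholders):

    ```xml
    <!-- Hypothetical HA client configuration; hostnames/ports are placeholders. -->
    <property>
      <name>dfs.nameservices</name>
      <value>nameservice1</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.nameservice1</name>
      <value>namenode1,namenode2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.nameservice1.namenode1</name>
      <value>nn1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.nameservice1.namenode2</name>
      <value>nn2.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.nameservice1</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    ```

    Without the failover proxy provider entry, the HDFS client has no way to recognize `nameservice1` as a logical name and falls back to treating it as a host.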


    Root Cause Analysis

    1. java.net.UnknownHostException

      nameservice1

      at org.apache.hadoop.security.SecurityUtil.buildTokenService()
    2. Hadoop
      SecurityUtil.buildTokenService
      1. org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
      1 frame
    3. Apache Hadoop HDFS
      DistributedFileSystem.initialize
      1. org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
      2. org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
      3. org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
      4. org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
      5. org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
      5 frames
    4. Hadoop
      FileSystem.get
      1. org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
      2. org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
      3. org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
      4. org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
      5. org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
      6. org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
      6 frames
    5. Hadoop
      FileInputFormat.setInputPaths
      1. org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:656)
      2. org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:436)
      3. org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:409)
      3 frames
    6. Spark
      HadoopRDD$$anonfun$getJobConf$6.apply
      1. org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:1016)
      2. org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:1016)
      3. org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
      4. org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
      4 frames
    7. Scala
      Option.map
      1. scala.Option.map(Option.scala:145)
      1 frame
    8. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
      2. org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:220)
      3. org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
      4. org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
      5. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
      6. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      7. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      8. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
      9. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      10. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      11. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
      12. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      13. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      14. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
      15. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
      16. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      17. org.apache.spark.scheduler.Task.run(Task.scala:88)
      18. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      18 frames
    9. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
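
    The frames above tell the story: `NameNodeProxies.createNonHAProxy` shows the client fell back to the non-HA code path, so `SecurityUtil.buildTokenService` tried to resolve `nameservice1` as a real host and threw `UnknownHostException`. For Spark jobs, the usual remedy is to make the cluster's client configuration (with the HA properties) visible to the driver and executors; one way, sketched with assumed paths and an assumed application jar:

    ```shell
    # Hypothetical paths: point Spark at the directory holding the cluster's
    # core-site.xml and hdfs-site.xml so executors can resolve nameservice1.
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    spark-submit \
      --master yarn \
      --files /etc/hadoop/conf/hdfs-site.xml,/etc/hadoop/conf/core-site.xml \
      your-app.jar hdfs://nameservice1/input
    ```

    The `--files` option ships the config files to each executor's working directory, which covers setups (such as Spark on Mesos, as in report 2) where executors do not share the driver's `HADOOP_CONF_DIR`.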