org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens when accessing a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'

elastic.co | 9 months ago
  1. Cannot detect ES version - typically this happens when accessing a WAN/Cloud instance without the proper setting 'es.nodes.wan.only' on Docker Deployment - Hadoop and Elasticsearch - Discuss the Elastic Stack

    elastic.co | 9 months ago
    org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens when accessing a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
  2. Read failure when index/alias spread among 32 or more nodes.

    GitHub | 1 year ago | hronik1
    org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Too many elements to create a power set 54
  3. [Spark]Is there a way to make elasticsearch-hadoop stick to the load-balancer(client), instead of going trying to ping all the data nodes?

    GitHub | 2 years ago | wingchen
    org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot find node with id [gy_92kljRluUFT9N022KDw] (is HTTP enabled?) from shard ...

Root Cause Analysis

  1. org.elasticsearch.hadoop.EsHadoopIllegalArgumentException

    Cannot detect ES version - typically this happens when accessing a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'

    at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion()
  2. Elasticsearch Hadoop
    RestService.findPartitions
    1. org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:190)
    2. org.elasticsearch.hadoop.rest.RestService.findPartitions(RestService.java:231)
    2 frames
  3. Elasticsearch Spark
    AbstractEsRDD.getPartitions
    1. org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions$lzycompute(AbstractEsRDD.scala:61)
    2. org.elasticsearch.spark.rdd.AbstractEsRDD.esPartitions(AbstractEsRDD.scala:60)
    3. org.elasticsearch.spark.rdd.AbstractEsRDD.getPartitions(AbstractEsRDD.scala:27)
    3 frames
  4. Spark
    RDD$$anonfun$partitions$2.apply
    1. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    2. org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    2 frames
  5. Scala
    Option.getOrElse
    1. scala.Option.getOrElse(Option.scala:120)
    1 frame
  6. Spark
    RDD.collect
    1. org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    2. org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    3. org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
    4. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    5. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    6. org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    7. org.apache.spark.rdd.RDD.collect(RDD.scala:926)
    7 frames
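The error message itself names the fix: when the connector cannot reach the cluster's data nodes directly (Docker, a cloud-hosted cluster, or a load balancer in front of it), restrict it to the declared nodes with `es.nodes.wan.only`. A minimal configuration sketch in Scala, assuming a hypothetical endpoint (`my-cluster.example.com:9243`) and the `elasticsearch-spark` RDD API shown in the trace above:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._  // adds esRDD to SparkContext

// Hypothetical host/port - replace with your cluster's endpoint.
val conf = new SparkConf()
  .setAppName("es-read")
  .set("es.nodes", "my-cluster.example.com")
  .set("es.port", "9243")
  // The setting from the error: talk only to the declared nodes
  // instead of discovering and pinging data nodes directly.
  .set("es.nodes.wan.only", "true")

val sc = new SparkContext(conf)
val rdd = sc.esRDD("my-index/my-type")
```

With `es.nodes.wan.only` set to `true` the connector routes every request through the configured nodes and skips node discovery, which also covers the load-balancer scenario raised in the GitHub question above (at the cost of losing data-locality optimizations).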