org.elasticsearch.hadoop.rest.EsHadoopParsingException: Cannot parse value [1] for field [boolField]

GitHub | mathieu-rossignol | 5 months ago
    Reading boolean 0/1 values from ES into Spark does not work (although false/true is ok)
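The failure comes straight from the Scala standard library: `ScalaValueReader.parseBoolean` delegates to `StringOps.toBoolean` (see the Root Cause Analysis below), which only accepts the literal strings "true"/"false". A minimal sketch of that behaviour:

```scala
// StringLike.parseBoolean accepts only "true"/"false" (case-insensitive);
// anything else, including the "1"/"0" stored in the ES field, throws.
object ToBooleanDemo extends App {
  println("true".toBoolean)  // true
  println("FALSE".toBoolean) // false (matching is case-insensitive)
  try {
    "1".toBoolean            // throws java.lang.IllegalArgumentException
  } catch {
    case e: IllegalArgumentException =>
      println(e.getMessage)  // For input string: "1"
  }
}
```

This is why documents indexed with boolean values serialized as 0/1 fail on read, while documents holding false/true deserialize fine.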
Root Cause Analysis

  1. java.lang.IllegalArgumentException

    For input string: "1"

    at scala.collection.immutable.StringLike$class.parseBoolean()
  2. Scala
    StringOps.toBoolean
    1. scala.collection.immutable.StringLike$class.parseBoolean(StringLike.scala:238)
    2. scala.collection.immutable.StringLike$class.toBoolean(StringLike.scala:226)
    3. scala.collection.immutable.StringOps.toBoolean(StringOps.scala:31)
    3 frames
  3. Elasticsearch Spark
    ScalaRowValueReader.readValue
    1. org.elasticsearch.spark.serialization.ScalaValueReader.parseBoolean(ScalaValueReader.scala:112)
    2. org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$booleanValue$1.apply(ScalaValueReader.scala:111)
    3. org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$booleanValue$1.apply(ScalaValueReader.scala:111)
    4. org.elasticsearch.spark.serialization.ScalaValueReader.checkNull(ScalaValueReader.scala:81)
    5. org.elasticsearch.spark.serialization.ScalaValueReader.booleanValue(ScalaValueReader.scala:111)
    6. org.elasticsearch.spark.serialization.ScalaValueReader.readValue(ScalaValueReader.scala:67)
    7. org.elasticsearch.spark.sql.ScalaRowValueReader.readValue(ScalaEsRowValueReader.scala:28)
    7 frames
  4. Elasticsearch Hadoop
    ScrollQuery.hasNext
    1. org.elasticsearch.hadoop.serialization.ScrollReader.parseValue(ScrollReader.java:726)
    2. org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:711)
    3. org.elasticsearch.hadoop.serialization.ScrollReader.map(ScrollReader.java:806)
    4. org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:704)
    5. org.elasticsearch.hadoop.serialization.ScrollReader.readHitAsMap(ScrollReader.java:458)
    6. org.elasticsearch.hadoop.serialization.ScrollReader.readHit(ScrollReader.java:383)
    7. org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:278)
    8. org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:251)
    9. org.elasticsearch.hadoop.rest.RestRepository.scroll(RestRepository.java:456)
    10. org.elasticsearch.hadoop.rest.ScrollQuery.hasNext(ScrollQuery.java:86)
    10 frames
  5. Elasticsearch Spark
    AbstractEsRDDIterator.hasNext
    1. org.elasticsearch.spark.rdd.AbstractEsRDDIterator.hasNext(AbstractEsRDDIterator.scala:43)
    1 frame
  6. Scala
    AbstractIterator.toArray
    1. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    2. scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
    3. scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    4. scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
    5. scala.collection.Iterator$class.foreach(Iterator.scala:727)
    6. scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    7. scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    8. scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    9. scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    10. scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    11. scala.collection.AbstractIterator.to(Iterator.scala:1157)
    12. scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    13. scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    14. scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    15. scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    15 frames
  7. Spark Project SQL
    SparkPlan$$anonfun$5.apply
    1. org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    2. org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:212)
    2 frames
  8. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    2. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    3. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    4. org.apache.spark.scheduler.Task.run(Task.scala:89)
    5. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    5 frames
  9. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
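One possible workaround is to plug in a custom value reader through es-hadoop's serialization option `es.ser.reader.value.class` and accept 0/1 there. The sketch below is untested against any particular es-hadoop version: the `LenientBooleanValueReader` name is hypothetical, and it assumes `ScalaValueReader.parseBoolean` is overridable in the version you run.

```scala
import org.elasticsearch.spark.serialization.ScalaValueReader

// Sketch only: class name is ours; assumes parseBoolean is overridable
// in your es-hadoop version.
class LenientBooleanValueReader extends ScalaValueReader {
  override def parseBoolean(value: String) = value match {
    case "1"   => true
    case "0"   => false
    case other => other.toBoolean // fall back to the default behaviour
  }
}

// Registered via the serialization option, e.g.:
//   sparkConf.set("es.ser.reader.value.class",
//                 classOf[LenientBooleanValueReader].getName)
```

Note that the trace above goes through `ScalaRowValueReader` (the Spark SQL path), so for DataFrame reads the custom class would need to extend that reader instead. Alternatively, remap the field as an integer in the index, or reindex the data with proper true/false values.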