java.lang.ArrayIndexOutOfBoundsException: 1

    Apache Spark User List - distinct on huge dataset

    nabble.com
    java.lang.ArrayIndexOutOfBoundsException: 1

    Root Cause Analysis

    1. java.lang.ArrayIndexOutOfBoundsException: 1

      at $line59.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply()
    2. $line59
      $read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply
      1. $line59.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:18)
      2. $line59.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:16)
      2 frames
    3. Scala
      Iterator$$anon$11.next
      1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      2. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      2 frames
    4. Spark
      SparkHadoopUtil$$anon$1.run
      1. org.apache.spark.Aggregator.combineValuesByKey(Aggregator.scala:58)
      2. org.apache.spark.rdd.PairRDDFunctions$$anonfun$1.apply(PairRDDFunctions.scala:95)
      3. org.apache.spark.rdd.PairRDDFunctions$$anonfun$1.apply(PairRDDFunctions.scala:94)
      4. org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:471)
      5. org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:471)
      6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
      7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
      8. org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
      9. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
      10. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
      11. org.apache.spark.scheduler.Task.run(Task.scala:53)
      12. org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
      13. org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:46)
      14. org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:45)
      14 frames
    5. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:396)
      2 frames
    6. Hadoop
      UserGroupInformation.doAs
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
      1 frame
    7. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:45)
      2. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
      2 frames
    8. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      3. java.lang.Thread.run(Thread.java:662)
      3 frames
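
    Root cause note: the frames above place the exception inside an anonymous function compiled from the Spark shell ($line59...$anonfun$1.apply(<console>:18)), not inside Spark itself; org.apache.spark.Aggregator.combineValuesByKey is simply invoking that user closure while combining values for the shuffle behind distinct. The thread does not include the poster's code, but a typical way to hit ArrayIndexOutOfBoundsException: 1 in such a closure is indexing field 1 of a split() result on a record that has fewer fields than expected. The Scala sketch below is a hypothetical reconstruction of that pattern plus a defensive rewrite; the SparkContext setup, the tab delimiter, and the sample records are assumptions, not details from the thread.

        import org.apache.spark.{SparkConf, SparkContext}

        object DistinctAIOOBESketch {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(
              new SparkConf().setAppName("distinct-aioobe-sketch").setMaster("local[*]"))

            // Hypothetical input: tab-separated records expected to carry at least two fields.
            val lines = sc.parallelize(Seq("a\t1", "b\t2", "malformed-line"))

            // Pattern matching the trace: split("\t")(1) throws
            // java.lang.ArrayIndexOutOfBoundsException: 1 inside the REPL closure
            // (the $anonfun$1.apply frames) whenever a record has only one field.
            // val keys = lines.map { line => line.split("\t")(1) }.distinct()

            // Defensive variant: drop records that lack the field before the shuffle,
            // so combineValuesByKey never sees the exception.
            val keys = lines
              .map(_.split("\t"))
              .filter(_.length >= 2)
              .map(_(1))
              .distinct()

            keys.collect().foreach(println)
            sc.stop()
          }
        }

    If silently dropping malformed records is not acceptable, a flatMap over an Option (or an accumulator counting rejects) surfaces the bad lines instead of discarding them.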