java.lang.NegativeArraySizeException

Stack Overflow | user1330526 | 7 months ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1.

    Spark SQL group_concat UDAF error

    Stack Overflow | 7 months ago | user1330526
    java.lang.NegativeArraySizeException (a hedged sketch of a typical group_concat UDAF follows the trace below)

    Root Cause Analysis

    1. java.lang.NegativeArraySizeException

      No message provided

      at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply()
    2. Spark Project Catalyst
      GeneratedClass$SpecificSafeProjection.apply
      1. org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
      1 frame
    3. Scala
      Iterator$$anon$11.next
      1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      1 frame
    4. org.apache.spark
      SortBasedAggregationIterator.next
      1. org.apache.spark.sql.execution.aggregate.AggregationIterator$$anon$2.next(AggregationIterator.scala:466)
      2. org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.processCurrentSortedGroup(SortBasedAggregationIterator.scala:115)
      3. org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:139)
      4. org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:30)
      4 frames
    5. Scala
      AbstractIterator.foreach
      1. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      2. scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      3. scala.collection.Iterator$class.foreach(Iterator.scala:727)
      4. scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      4 frames
    6. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:890)
      2. org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:890)
      3. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
      4. org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
      5. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      6. org.apache.spark.scheduler.Task.run(Task.scala:88)
      7. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      7 frames
    7. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
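
What the trace shows: java.lang.NegativeArraySizeException means an array was requested with a negative length, and here it is thrown from Spark's generated GeneratedClass$SpecificSafeProjection while rows are materialized out of the sort-based aggregation, which makes the UDAF's buffer and result schemas a reasonable place to start looking.

The question's own code is not shown on this page, so the following is only a minimal, hypothetical sketch of what a group_concat-style UDAF on the Spark 1.x UserDefinedAggregateFunction API (the API behind the SortBasedAggregationIterator frames above) typically looks like. The class name GroupConcat, the separator argument, and the input column name "value" are assumptions for illustration, not details taken from the question.

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Hypothetical group_concat-style UDAF; a minimal sketch, not the code from the linked question.
class GroupConcat(separator: String = ",") extends UserDefinedAggregateFunction {

  // One string input column per row.
  override def inputSchema: StructType = StructType(StructField("value", StringType) :: Nil)

  // The aggregation buffer accumulates the strings seen so far for a group.
  override def bufferSchema: StructType = StructType(StructField("items", ArrayType(StringType)) :: Nil)

  // The final result is a single concatenated string.
  override def dataType: DataType = StringType

  override def deterministic: Boolean = true

  // Start each group with an empty list.
  override def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer(0) = Seq.empty[String]

  // Append each non-null input value to the buffer.
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0))
      buffer(0) = buffer.getSeq[String](0) :+ input.getString(0)

  // Combine two partial buffers.
  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1(0) = buffer1.getSeq[String](0) ++ buffer2.getSeq[String](0)

  // Join the accumulated values with the separator.
  override def evaluate(buffer: Row): String =
    buffer.getSeq[String](0).mkString(separator)
}

Registration and use would look roughly like this (again only an illustration; the table and column names are made up):

sqlContext.udf.register("group_concat", new GroupConcat(","))
sqlContext.sql("SELECT key, group_concat(value) FROM t GROUP BY key").show()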