java.lang.OutOfMemoryError: Java heap space

Stack Overflow | K F | 6 months ago

Root Cause Analysis

  1. java.lang.OutOfMemoryError

    Java heap space

    at org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.expandPointerArray()
  2. org.apache.spark
    UnsafeInMemorySorter.insertRecord
    1. org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.expandPointerArray(UnsafeInMemorySorter.java:115)
    2. org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.insertRecord(UnsafeInMemorySorter.java:128)
    2 frames
  3. Spark Project SQL
    UnsafeFixedWidthAggregationMap.destructAndCreateExternalSorter
    1. org.apache.spark.sql.execution.UnsafeKVExternalSorter.<init>(UnsafeKVExternalSorter.java:113)
    2. org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.destructAndCreateExternalSorter(UnsafeFixedWidthAggregationMap.java:257)
    2 frames
  4. org.apache.spark
    TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply
    1. org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.switchToSortBasedAggregation(TungstenAggregationIterator.scala:435)
    2. org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:379)
    3. org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.start(TungstenAggregationIterator.scala:622)
    4. org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.org$apache$spark$sql$execution$aggregate$TungstenAggregate$$anonfun$$executePartition$1(TungstenAggregate.scala:110)
    5. org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:119)
    6. org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:119)
    6 frames
  5. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.rdd.MapPartitionsWithPreparationRDD.compute(MapPartitionsWithPreparationRDD.scala:64)
    2. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    3. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    4. org.apache.spark.rdd.MapPartitionsWithPreparationRDD.compute(MapPartitionsWithPreparationRDD.scala:63)
    5. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    6. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    7. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    8. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    9. org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    10. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    11. org.apache.spark.scheduler.Task.run(Task.scala:88)
    12. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    12 frames
  6. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
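Reading the trace bottom-up: a `TungstenAggregationIterator` ran out of room in its hash map, switched to sort-based aggregation (`switchToSortBasedAggregation`), and the heap was then exhausted inside `UnsafeInMemorySorter.expandPointerArray`, i.e. while growing the sorter's pointer array. A minimal sketch of that capacity-doubling growth pattern (hypothetical class and method names, not Spark's actual implementation):

```java
import java.util.Arrays;

// Hypothetical sketch of the capacity-doubling pattern behind a call
// like UnsafeInMemorySorter.expandPointerArray: when the pointer array
// is full, an array of twice the size is allocated and the old contents
// are copied over. On a nearly full heap, that doubling allocation is
// exactly the kind of call site that throws
// java.lang.OutOfMemoryError: Java heap space.
public class PointerArraySketch {
    private long[] pointers;
    private int count;

    public PointerArraySketch(int initialCapacity) {
        this.pointers = new long[initialCapacity];
        this.count = 0;
    }

    // Record one pointer, growing the backing array first if it is full.
    public void insertRecord(long pointer) {
        if (count == pointers.length) {
            // The doubling allocation: the OOM site in the trace above
            // corresponds to a grow step like this one.
            pointers = Arrays.copyOf(pointers, pointers.length * 2);
        }
        pointers[count++] = pointer;
    }

    public int capacity() {
        return pointers.length;
    }

    public int size() {
        return count;
    }
}
```

Because the allocation scales with the number of records a single task must sort, the usual mitigations are to give each executor more heap (e.g. raising `spark.executor.memory`) or to repartition the data so each task handles fewer rows; which one applies depends on your cluster and data skew.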