scheduler.TaskSetManager: Lost task 170.0 in stage 153.0 (TID 9280, hw-node5): java.lang.OutOfMemoryError: Unable to acquire 1073741824 bytes of memory, got 1060110796
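Before comparing this with the similar reports below, it helps to decode the two byte counts in the failure line. A quick sketch of the arithmetic (illustrative only):

```python
# Decode the byte counts from the OutOfMemoryError above.
requested = 1073741824   # bytes the sorter tried to acquire
granted = 1060110796     # bytes the memory manager could actually provide

GIB = 1 << 30            # 2**30 bytes
MIB = 1 << 20            # 2**20 bytes

print(requested == GIB)                         # True: the request is exactly 1 GiB
print(round((requested - granted) / MIB))       # 13: the shortfall is about 13 MiB
```

So the task died asking for exactly 1 GiB in a single allocation and falling roughly 13 MiB short, which points at a large single buffer request rather than gradual heap exhaustion.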

  1. Apache Spark Developers List - java.lang.OutOfMemoryError: Unable to acquire bytes of memory

     nabble.com | 5 months ago
     scheduler.TaskSetManager: Lost task 170.0 in stage 153.0 (TID 9280, hw-node5): java.lang.OutOfMemoryError: Unable to acquire 1073741824 bytes of memory, got 1060110796
  2. [SPARK-10474] TungstenAggregation cannot acquire memory for pointer array after switching to sort-based - ASF JIRA

     apache.org | 1 year ago
     scheduler.TaskSetManager: Lost task 115.0 in stage 152.0 (TID 1736, bb-node2): java.io.IOException: Unable to acquire 16777216 bytes of memory
  3. Spark executor lost because of GC overhead limit exceeded even though using 20 executors using 25GB each

     Stack Overflow | 1 year ago | user449355
     scheduler.TaskSetManager: Lost task 7.0 in stage 363.0 (TID 3373, myhost.com): java.lang.OutOfMemoryError: GC overhead limit exceeded
  4. Spark Job failed on YARN

     Stack Overflow | 12 months ago | Shankar
     scheduler.TaskSetManager: Lost task 2.0 in stage 5.0 (TID 117, lpdn0185.com): java.lang.OutOfMemoryError: GC overhead limit exceeded


    Root Cause Analysis

    1. scheduler.TaskSetManager

      Lost task 170.0 in stage 153.0 (TID 9280, hw-node5): java.lang.OutOfMemoryError: Unable to acquire 1073741824 bytes of memory, got 1060110796

      at org.apache.spark.memory.MemoryConsumer.allocateArray()
    2. org.apache.spark
      UnsafeExternalSorter.insertRecord
      1. org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:91)
      2. org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.growPointerArrayIfNecessary(UnsafeExternalSorter.java:295)
      3. org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:330)
      3 frames
    3. Spark Project SQL
      Sort$$anonfun$1.apply
      1. org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:91)
      2. org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:168)
      3. org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:90)
      4. org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64)
      4 frames
    4. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728)
      2. org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728)
      3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      6. org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:88)
      7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      8. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      9. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      11. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      12. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      13. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      14. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      15. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      16. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      17. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      18. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
      19. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
      20. org.apache.spark.scheduler.Task.run(Task.scala:89)
      21. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      21 frames
    5. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
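The trace shows the allocation failing inside UnsafeExternalSorter.growPointerArrayIfNecessary during a SQL Sort, the same code path discussed in SPARK-10474: the sorter's pointer array outgrows the execution memory the task can claim. A minimal mitigation sketch follows; the flag values are illustrative starting points to tune for a specific cluster, not fixes confirmed by this trace, and `your_job.jar` is a placeholder:

```shell
# Sketch: give each task more execution-memory headroom and make each
# sorted partition smaller, so the pointer array stays well under 1 GiB.
# All values are assumptions to tune, not confirmed fixes.
spark-submit \
  --conf spark.executor.memory=8g \
  --conf spark.memory.fraction=0.6 \
  --conf spark.sql.shuffle.partitions=400 \
  your_job.jar
```

Raising `spark.sql.shuffle.partitions` above the default of 200 is usually the cheaper lever here, since it directly shrinks the per-task sort and therefore the pointer array the sorter must grow.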