java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 220032

Apache's JIRA Issue Tracker | Josh Rosen | 1 year ago

    Related issue:

    [jira] [Updated] (SPARK-14363) Executor OOM while trying to acquire new page from the memory manager
    spark-issues | 8 months ago | Sital Kedia (JIRA)
    java.lang.OutOfMemoryError: Unable to acquire 76 bytes of memory, got 0

    Root Cause Analysis

    java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 220032
        at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:91)
        at org.apache.spark.unsafe.map.BytesToBytesMap.allocate(BytesToBytesMap.java:735)
        at org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:197)
        at org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:212)
        at org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.<init>(UnsafeFixedWidthAggregationMap.java:103)
        at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:483)
        at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
        at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
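
    For context (not part of the original report): the allocation that fails here is the backing array that MemoryConsumer.allocateArray requests for the BytesToBytesMap used by Tungsten hash aggregation; per the message, the task memory manager could grant only 220032 of the 262144 bytes asked for. A common way to relieve this kind of pressure, sketched below under assumed settings (the app name and values are illustrative, not taken from this report), is to give executors more heap and raise shuffle parallelism so each task's hash map stays smaller.

        // Minimal sketch in Scala, assuming a Spark 1.6-era job; names and values
        // are illustrative assumptions, not settings from the report above.
        import org.apache.spark.{SparkConf, SparkContext}

        val conf = new SparkConf()
          .setAppName("aggregation-job")               // hypothetical app name
          .set("spark.executor.memory", "8g")          // larger executor heap -> larger Tungsten execution pool
          .set("spark.memory.fraction", "0.6")         // share of usable heap split between execution and storage
          .set("spark.sql.shuffle.partitions", "400")  // more partitions -> smaller per-task aggregation maps

        val sc = new SparkContext(conf)

    Tuning of this sort only reduces the memory pressure; the SPARK-14363 report referenced above tracks the underlying executor OOM when acquiring a new page from the memory manager.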