Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via Apache's JIRA Issue Tracker by Josh Rosen, 1 year ago
via spark-dev by Nezih Yigitbasi, 2 years ago
Unable to acquire 1073741824 bytes of memory, got 1060110796
via nabble.com by Unknown author, 1 year ago
via spark-dev by james, 2 years ago
Unable to acquire 1073741824 bytes of memory, got 1060110796
java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 220032
	at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:91)
	at org.apache.spark.unsafe.map.BytesToBytesMap.allocate(BytesToBytesMap.java:735)
	at org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:197)
	at org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:212)
	at org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.<init>(UnsafeFixedWidthAggregationMap.java:103)
	at org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:483)
	at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
	at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
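
The frames above sit in Spark's Tungsten execution memory: a task could not reserve enough execution memory for the BytesToBytesMap backing its aggregation (TungstenAggregate). A commonly suggested mitigation in the linked threads is to spread the aggregation across more, smaller tasks and/or give the unified memory manager a larger share of the executor heap. The Scala sketch below is only illustrative of that idea, not a guaranteed fix: the config keys (spark.sql.shuffle.partitions, spark.memory.fraction) are real Spark settings, but the chosen values and the application/object names are assumptions to be tuned for the actual workload and executor heap size.

	import org.apache.spark.{SparkConf, SparkContext}

	// Minimal sketch of the commonly suggested tuning; values are illustrative only.
	object MemoryTuningSketch {
	  def main(args: Array[String]): Unit = {
	    val conf = new SparkConf()
	      .setAppName("memory-tuning-sketch")
	      // More shuffle partitions -> smaller per-task aggregation hash maps,
	      // so each task has to allocate a smaller BytesToBytesMap.
	      .set("spark.sql.shuffle.partitions", "400")
	      // Larger share of the JVM heap for Spark's unified execution + storage memory.
	      .set("spark.memory.fraction", "0.8")

	    val sc = new SparkContext(conf)
	    // ... build the SQL context and re-run the aggregation that previously failed ...
	    sc.stop()
	  }
	}

The same keys can be passed on the command line (e.g. spark-submit --conf spark.sql.shuffle.partitions=400) instead of being hard-coded; if the error persists, increasing executor memory is the other lever the threads above discuss.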