
Solutions on the web

via Stack Overflow by fingerspitzen, 1 year ago
Task failed while writing rows.
via DataStax JIRA by Dmytro Popovych, 1 year ago
via Stack Overflow by Newbie, 1 year ago
via Apache's JIRA Issue Tracker by Naden Franciscus, 1 year ago
via DataStax JIRA by Dmytro Popovych, 2 years ago
via Stack Overflow by Hello lad, 2 years ago
java.io.IOException: Unable to acquire 16777216 bytes of memory
	at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPage(UnsafeExternalSorter.java:351)
	at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.<init>(UnsafeExternalSorter.java:138)
	at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.create(UnsafeExternalSorter.java:106)
	at org.apache.spark.sql.execution.UnsafeExternalRowSorter.<init>(UnsafeExternalRowSorter.java:68)
	at org.apache.spark.sql.execution.TungstenSort.org$apache$spark$sql$execution$TungstenSort$$preparePartition$1(sort.scala:146)
	at org.apache.spark.sql.execution.TungstenSort$$anonfun$doExecute$3.apply(sort.scala:169)
	at org.apache.spark.sql.execution.TungstenSort$$anonfun$doExecute$3.apply(sort.scala:169)
	at org.apache.spark.rdd.MapPartitionsWithPreparationRDD.prepare(MapPartitionsWithPreparationRDD.scala:50)
	at org.apache.spark.rdd.ZippedPartitionsBaseRDD$$anonfun$tryPrepareParents$1.applyOrElse(ZippedPartitionsRDD.scala:83)
	at org.apache.spark.rdd.ZippedPartitionsBaseRDD$$anonfun$tryPrepareParents$1.applyOrElse(ZippedPartitionsRDD.scala:82)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
	at scala.collection.TraversableLike$$anonfun$collect$1.apply(TraversableLike.scala:278)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at scala.collection.TraversableLike$class.collect(TraversableLike.scala:278)
	at scala.collection.AbstractTraversable.collect(Traversable.scala:105)
	at org.apache.spark.rdd.ZippedPartitionsBaseRDD.tryPrepareParents(ZippedPartitionsRDD.scala:82)
	at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:97)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
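The failing frame is the Tungsten sort path: each UnsafeExternalSorter tries to reserve a 16 MB memory page (16777216 bytes) per task, and the request fails when concurrent tasks have already claimed the executor's execution memory. The sketch below is not taken from any of the linked threads; it is a minimal Scala example, assuming Spark 1.5.x (where this TungstenSort code path exists), of the configuration-level workarounds commonly suggested for this error. The specific keys (spark.buffer.pageSize, spark.shuffle.memoryFraction, spark.executor.cores) and the values chosen are assumptions to verify against your Spark release.

import org.apache.spark.{SparkConf, SparkContext}

object SorterMemoryWorkaround {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("tungsten-sort-memory-workaround")
      // Smaller Tungsten page size, so each sorter asks for less contiguous memory
      // than the default 16 MB page it failed to acquire (assumed key for Spark 1.5.x).
      .set("spark.buffer.pageSize", "2m")
      // Larger share of executor memory for shuffle/sort under the pre-1.6 static
      // memory model (this key was removed in later versions).
      .set("spark.shuffle.memoryFraction", "0.5")
      // Fewer concurrent tasks per executor, so fewer sorters compete for pages.
      .set("spark.executor.cores", "2")

    val sc = new SparkContext(conf)
    // ... run the job that previously failed inside TungstenSort / UnsafeExternalSorter ...
    sc.stop()
  }
}

Raising spark.executor.memory, or repartitioning so each task sorts less data, attacks the same shortage from the other direction; which lever helps depends on the job, so treat the values above as starting points rather than a definitive fix.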