org.apache.spark.SparkException: Task failed while writing rows

Stack Overflow | ML_Passion | 8 months ago
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.
  1. Re: java.lang.OutOfMemoryError: Unable to acquire bytes of memory
     spark-dev | 1 year ago | james
     java.lang.OutOfMemoryError: Unable to acquire 1073741824 bytes of memory, got 1060110796
  2. Re: java.lang.OutOfMemoryError: Unable to acquire bytes of memory
     spark-dev | 1 year ago | Nezih Yigitbasi
     java.lang.OutOfMemoryError: Unable to acquire 1073741824 bytes of memory, got 1060110796
  3. [jira] [Created] (SPARK-14363) Executor OOM while trying to acquire new page from the memory manager
     spark-issues | 1 year ago | Sital Kedia (JIRA)
     java.lang.OutOfMemoryError: Unable to acquire 76 bytes of memory, got 0
  4. [jira] [Updated] (SPARK-14363) Executor OOM while trying to acquire new page from the memory manager
     spark-issues | 1 year ago | Sital Kedia (JIRA)
     java.lang.OutOfMemoryError: Unable to acquire 76 bytes of memory, got 0
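
All four reports describe the same failure mode as this trace: a Tungsten MemoryConsumer cannot acquire an execution-memory page, either because the executor heap is too small or because the work is concentrated in too few, too large partitions. A first-line mitigation often suggested in these threads is to give execution memory more headroom and increase parallelism. A minimal sketch in Scala; the application name, memory sizes, and partition count are illustrative assumptions, not values taken from the reports:

    import org.apache.spark.sql.SparkSession

    // A sketch only: the values below are illustrative and must be tuned
    // to your cluster; the configuration keys themselves are standard Spark options.
    val spark = SparkSession.builder()
      .appName("oom-mitigation-sketch")
      .config("spark.executor.memory", "8g")          // more heap per executor
      .config("spark.memory.fraction", "0.8")         // larger unified execution/storage pool
      .config("spark.sql.shuffle.partitions", "400")  // more, smaller partitions per sort
      .getOrCreate()

Note that SPARK-14363 was ultimately tracked as a memory-accounting bug, so upgrading past the affected release is the real fix; the settings above only reduce the pressure that triggers it.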
Root Cause Analysis

  1. java.lang.OutOfMemoryError
     Unable to acquire 100 bytes of memory, got 0
     at org.apache.spark.memory.MemoryConsumer.allocatePage()
  2. org.apache.spark
    UnsafeExternalSorter.insertRecord
    1. org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:129)
    2. org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:374)
    3. org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:396)
    3 frames
  3. Spark Project SQL
    UnsafeExternalRowSorter.insertRow
    1. org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:94)
    1 frame
  4. Spark Project Catalyst
    GeneratedClass$GeneratedIterator.processNext
    1. org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.sort_addToSorter$(Unknown Source)
    2. org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    2 frames
  5. Spark Project SQL
    WindowExec$$anonfun$15.apply
    1. org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    2. org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    3. org.apache.spark.sql.execution.WindowExec$$anonfun$15$$anon$1.fetchNextRow(WindowExec.scala:300)
    4. org.apache.spark.sql.execution.WindowExec$$anonfun$15$$anon$1.<init>(WindowExec.scala:309)
    5. org.apache.spark.sql.execution.WindowExec$$anonfun$15.apply(WindowExec.scala:289)
    6. org.apache.spark.sql.execution.WindowExec$$anonfun$15.apply(WindowExec.scala:288)
    6 frames
  6. Spark
    CoalescedRDD$$anonfun$compute$1.apply
    1. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
    2. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
    3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    5. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    8. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    9. org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89)
    10. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    11. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    12. org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89)
    13. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    14. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    15. org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:89)
    16. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    17. org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    18. org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:96)
    19. org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:95)
    19 frames
  7. Scala
    Iterator$$anon$12.hasNext
    1. scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
    2. scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    2 frames
  8. org.apache.spark
    DefaultWriterContainer$$anonfun$writeRows$1.apply
    1. org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
    2. org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
    3. org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
    3 frames
  9. Spark
    Utils$.tryWithSafeFinallyAndFailureCallbacks
    1. org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1325)
    1 frame
  10. org.apache.spark
    InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply
    1. org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
    2. org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
    3. org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
    3 frames
  11. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    2. org.apache.spark.scheduler.Task.run(Task.scala:85)
    3. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    3 frames
  12. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
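
Reading the frames bottom-up: InsertIntoHadoopFsRelationCommand is writing rows, the partition being written comes through a CoalescedRDD, and that partition is produced by WindowExec, whose UnsafeExternalSorter is the consumer that fails to acquire memory. This shape of trace commonly results from combining a window function with a coalesce before the write: coalesce does not shuffle, so it also collapses the upstream window sort into the few (often one) output tasks. A hedged sketch of the usual workaround; the input path, column names, and output path are made up for illustration:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number

    val spark = SparkSession.builder().appName("window-write-sketch").getOrCreate()

    // Hypothetical input; replace with your own source.
    val df = spark.read.parquet("/path/to/input")

    // Partition the window so no single task has to sort the entire dataset.
    // "key" and "ts" are illustrative column names: a window with no
    // partitionBy forces all rows into one task regardless of other settings.
    val w = Window.partitionBy("key").orderBy("ts")
    val ranked = df.withColumn("rn", row_number().over(w))

    // If one output file is required, prefer repartition(1) over coalesce(1):
    // repartition inserts a shuffle, so the window still runs at full
    // parallelism and only the final write stage is single-task.
    ranked.repartition(1).write.parquet("/path/to/output")

Whether a single output file is worth a single-task write stage depends on output size; writing at full parallelism and merging files downstream avoids the bottleneck entirely.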