org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 2.0 failed 1 times, most recent failure: Lost task 4.0 in stage 2.0 (TID 52, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded

GitHub | car2008 | 3 months ago
  1. Avocado-submit on Spark locally throws an exception

    GitHub | 3 months ago | car2008
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 2.0 failed 1 times, most recent failure: Lost task 4.0 in stage 2.0 (TID 52, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded
  2. GitHub comment 173#243313005

    GitHub | 3 months ago | car2008
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 2.0 failed 1 times, most recent failure: Lost task 9.0 in stage 2.0 (TID 201, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded
  3. GitHub comment 190#243338857

    GitHub | 3 months ago | car2008
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 2.0 failed 1 times, most recent failure: Lost task 9.0 in stage 2.0 (TID 201, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded
  4. SparkException caused by GC overhead limit exceeded - Hortonworks

    hortonworks.com | 2 months ago
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 40, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded
  5. ADAM: how to improve performance?

    Google Groups | 2 years ago | Unknown author
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 4, localhost): java.util.NoSuchElementException: None.get
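
    The common thread in these reports is the JVM's "GC overhead limit exceeded" guard: the collector is spending over roughly 98% of CPU time while reclaiming under roughly 2% of the heap, which in a local Spark run almost always means the single JVM has too little memory for the FASTA conversion. Below is a minimal sketch of the memory settings these threads revolve around; the 8g value, the app name, and the master string are illustrative assumptions, not taken from the original report.

        import org.apache.spark.{SparkConf, SparkContext}

        // In local mode the "executor" runs inside the driver JVM, so driver
        // memory is the knob that matters. spark.driver.memory only takes
        // effect if set before the JVM starts, so in practice pass it to
        // spark-submit rather than relying on SparkConf in client mode:
        //
        //   spark-submit --driver-memory 8g \
        //     --conf "spark.driver.extraJavaOptions=-XX:-UseGCOverheadLimit" ...
        //
        val conf = new SparkConf()
          .setAppName("avocado-local")      // illustrative name
          .setMaster("local[4]")
          .set("spark.driver.memory", "8g") // effective only for a fresh JVM

        val sc = new SparkContext(conf)

    Note that disabling the overhead-limit check with -XX:-UseGCOverheadLimit only trades this early failure for a later, slower OutOfMemoryError; a larger heap or smaller input is the actual fix.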

    Root Cause Analysis

    1. org.apache.spark.SparkException

      Job aborted due to stage failure: Task 4 in stage 2.0 failed 1 times, most recent failure: Lost task 4.0 in stage 2.0 (TID 52, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded

      at java.util.Arrays.copyOfRange()
    2. Java RT
      StringBuilder.toString
      1. java.util.Arrays.copyOfRange(Arrays.java:3664)
      2. java.lang.String.<init>(String.java:207)
      3. java.lang.StringBuilder.toString(StringBuilder.java:407)
      3 frames
    3. Scala
      StringBuilder.drop
      1. scala.collection.mutable.StringBuilder.toString(StringBuilder.scala:427)
      2. scala.collection.immutable.StringLike$class.slice(StringLike.scala:64)
      3. scala.collection.mutable.StringBuilder.slice(StringBuilder.scala:28)
      4. scala.collection.IndexedSeqOptimized$class.drop(IndexedSeqOptimized.scala:135)
      5. scala.collection.mutable.StringBuilder.drop(StringBuilder.scala:28)
      5 frames
    4. org.bdgenomics.adam
      FastaConverter$$anonfun$mapFragments$1.apply
      1. org.bdgenomics.adam.converters.FastaConverter.org$bdgenomics$adam$converters$FastaConverter$$addFragment$1(FastaConverter.scala:158)
      2. org.bdgenomics.adam.converters.FastaConverter$$anonfun$mapFragments$1.apply(FastaConverter.scala:163)
      3. org.bdgenomics.adam.converters.FastaConverter$$anonfun$mapFragments$1.apply(FastaConverter.scala:163)
      3 frames
    5. Scala
      List.foreach
      1. scala.collection.immutable.List.foreach(List.scala:318)
      1 frame
    6. org.bdgenomics.adam
      FastaConverter$$anonfun$apply$1.apply
      1. org.bdgenomics.adam.converters.FastaConverter.mapFragments(FastaConverter.scala:163)
      2. org.bdgenomics.adam.converters.FastaConverter.convert(FastaConverter.scala:194)
      3. org.bdgenomics.adam.converters.FastaConverter$$anonfun$apply$1.apply(FastaConverter.scala:99)
      4. org.bdgenomics.adam.converters.FastaConverter$$anonfun$apply$1.apply(FastaConverter.scala:92)
      4 frames
    7. Scala
      AbstractIterator.foldLeft
      1. scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      2. scala.collection.Iterator$class.foreach(Iterator.scala:727)
      3. scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      4. scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
      5. scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
      5 frames
    8. org.bdgenomics.adam
      ADAMSequenceDictionaryRDDAggregator$$anonfun$3.apply
      1. org.bdgenomics.adam.rdd.ADAMSequenceDictionaryRDDAggregator.org$bdgenomics$adam$rdd$ADAMSequenceDictionaryRDDAggregator$$foldIterator$1(ADAMRDDFunctions.scala:120)
      2. org.bdgenomics.adam.rdd.ADAMSequenceDictionaryRDDAggregator$$anonfun$3.apply(ADAMRDDFunctions.scala:126)
      3. org.bdgenomics.adam.rdd.ADAMSequenceDictionaryRDDAggregator$$anonfun$3.apply(ADAMRDDFunctions.scala:126)
      3 frames
    9. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
      2. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
      3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      5. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      6. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      7. org.apache.spark.scheduler.Task.run(Task.scala:89)
      8. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      8 frames
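
    Reading the trace bottom-up: the Spark task runs ADAM's FastaConverter, whose addFragment helper slices fragments off a mutable StringBuilder. Each StringBuilder.drop routes through StringLike.slice, which calls toString and then Arrays.copyOfRange, so every fragment taken copies the entire remaining sequence. The sketch below reproduces that allocation shape (it is not ADAM's actual code, and the function names are hypothetical): chunking a sequence of length n into fragments of length k this way copies on the order of n^2/k characters, which is exactly the kind of churn that trips the GC overhead limit on a large contig.

        // Hypothetical sketch of the copy-per-drop pattern seen in the trace.
        def fragmentByCopy(sequence: String, fragmentLength: Int): List[String] = {
          var remaining = sequence
          val fragments = List.newBuilder[String]
          while (remaining.nonEmpty) {
            fragments += remaining.take(fragmentLength) // copies the fragment
            remaining = remaining.drop(fragmentLength)  // copies the whole tail
          }
          fragments.result()
        }

        // An index-based traversal copies each character only once:
        def fragmentByIndex(sequence: String, fragmentLength: Int): List[String] =
          sequence.grouped(fragmentLength).toList

    With fragmentByCopy the intermediate copies, not the fragments themselves, dominate the heap, so raising memory (as sketched above) or shrinking each input partition works around the failure without touching the conversion code.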