java.lang.OutOfMemoryError: GC overhead limit exceeded

GitHub | car2008 | 3 months ago
  1. Avocado-submit on Spark locally throws exception

    GitHub | 3 months ago | car2008
    java.lang.OutOfMemoryError: GC overhead limit exceeded
  2. GWT 2.8.0 compatibility

    GitHub | 3 weeks ago | chris-becker
    java.lang.OutOfMemoryError: GC overhead limit exceeded
  3. Crack pack crashes during log-in

    GitHub | 3 years ago | jkkid
    java.lang.OutOfMemoryError: GC overhead limit exceeded
  4. Grails 2.3.0 memory error with webflow

    Stack Overflow | 3 years ago | ludo_rj
    java.lang.OutOfMemoryError: GC overhead limit exceeded
  5. Execution failed for task ':app:dexBusinessApp1Debug'

    Stack Overflow | 2 years ago
    java.lang.OutOfMemoryError: GC overhead limit exceeded
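
All of the reports above end in the same condition: with the default collector settings, HotSpot throws "GC overhead limit exceeded" when the JVM is spending roughly 98% of its time in garbage collection while recovering under 2% of the heap, i.e. the process is about to stall rather than simply run out of memory in one allocation. The sketch below is a minimal, hypothetical way to provoke the error outside of Spark; the class name and heap size are made up for illustration.

    // Hypothetical reproducer (not taken from any project above): keep every
    // string reachable so the collector can never free anything. Run with a
    // small heap, e.g. scala -J-Xmx64m GcOverheadDemo.scala, and the JVM will
    // eventually throw java.lang.OutOfMemoryError: GC overhead limit exceeded
    // (or plain "Java heap space", depending on timing).
    object GcOverheadDemo {
      def main(args: Array[String]): Unit = {
        val retained = scala.collection.mutable.ListBuffer.empty[String]
        var i = 0L
        while (true) {
          retained += ("fragment-" + i) * 16   // every element stays reachable
          i += 1
        }
      }
    }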


Root Cause Analysis

  1. java.lang.OutOfMemoryError

    GC overhead limit exceeded

    at java.util.Arrays.copyOfRange()
  2. Java RT
    StringBuilder.toString
    1. java.util.Arrays.copyOfRange(Arrays.java:3664)
    2. java.lang.String.<init>(String.java:207)
    3. java.lang.StringBuilder.toString(StringBuilder.java:407)
    3 frames
  3. Scala
    StringBuilder.drop
    1. scala.collection.mutable.StringBuilder.toString(StringBuilder.scala:427)
    2. scala.collection.immutable.StringLike$class.slice(StringLike.scala:64)
    3. scala.collection.mutable.StringBuilder.slice(StringBuilder.scala:28)
    4. scala.collection.IndexedSeqOptimized$class.drop(IndexedSeqOptimized.scala:135)
    5. scala.collection.mutable.StringBuilder.drop(StringBuilder.scala:28)
    5 frames
  4. org.bdgenomics.adam
    FastaConverter$$anonfun$mapFragments$1.apply
    1. org.bdgenomics.adam.converters.FastaConverter.org$bdgenomics$adam$converters$FastaConverter$$addFragment$1(FastaConverter.scala:158)
    2. org.bdgenomics.adam.converters.FastaConverter$$anonfun$mapFragments$1.apply(FastaConverter.scala:163)
    3. org.bdgenomics.adam.converters.FastaConverter$$anonfun$mapFragments$1.apply(FastaConverter.scala:163)
    3 frames
  5. Scala
    List.foreach
    1. scala.collection.immutable.List.foreach(List.scala:318)
    1 frame
  6. org.bdgenomics.adam
    FastaConverter$$anonfun$apply$1.apply
    1. org.bdgenomics.adam.converters.FastaConverter.mapFragments(FastaConverter.scala:163)
    2. org.bdgenomics.adam.converters.FastaConverter.convert(FastaConverter.scala:194)
    3. org.bdgenomics.adam.converters.FastaConverter$$anonfun$apply$1.apply(FastaConverter.scala:99)
    4. org.bdgenomics.adam.converters.FastaConverter$$anonfun$apply$1.apply(FastaConverter.scala:92)
    4 frames
  7. Scala
    AbstractIterator.foldLeft
    1. scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    2. scala.collection.Iterator$class.foreach(Iterator.scala:727)
    3. scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    4. scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
    5. scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
    5 frames
  8. org.bdgenomics.adam
    ADAMSequenceDictionaryRDDAggregator$$anonfun$3.apply
    1. org.bdgenomics.adam.rdd.ADAMSequenceDictionaryRDDAggregator.org$bdgenomics$adam$rdd$ADAMSequenceDictionaryRDDAggregator$$foldIterator$1(ADAMRDDFunctions.scala:120)
    2. org.bdgenomics.adam.rdd.ADAMSequenceDictionaryRDDAggregator$$anonfun$3.apply(ADAMRDDFunctions.scala:126)
    3. org.bdgenomics.adam.rdd.ADAMSequenceDictionaryRDDAggregator$$anonfun$3.apply(ADAMRDDFunctions.scala:126)
    3 frames
  9. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
    2. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
    3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    5. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    6. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    7. org.apache.spark.scheduler.Task.run(Task.scala:89)
    8. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    8 frames
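
The frames above show ADAM's FastaConverter assembling contig fragments with a StringBuilder inside a Spark mapPartitions, so the heap that fills up is the one the task runs in; for a local avocado-submit run that is the driver JVM itself. Below is a sketch of the kind of memory settings one might try; the application name, the 8g figure, and the choice to disable the GC-overhead check are assumptions for illustration, not settings from the original report.

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical driver, not part of ADAM or avocado. In local mode the
    // executors live inside the driver JVM, so its heap has to be set when
    // that JVM starts (e.g. spark-submit --driver-memory 8g); the SparkConf
    // entries below only take effect for separate executor JVMs on a cluster.
    object Fasta2AdamWithMoreHeap {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("fasta2adam-with-more-heap")     // assumed app name
          .set("spark.executor.memory", "8g")          // assumed per-executor heap
          .set("spark.executor.extraJavaOptions",      // optional: trade the early
               "-XX:-UseGCOverheadLimit")              // failure for a slower run
        val sc = new SparkContext(conf)
        // ... run the FASTA conversion / avocado pipeline against sc here ...
        sc.stop()
      }
    }

Raising the number of input partitions so each task touches less data at once is sometimes worth trying as well, though whether it helps here depends on how the FASTA records are split across fragments.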