java.lang.OutOfMemoryError: GC overhead limit exceeded

Stack Overflow | Ronaldinho | 2 months ago
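
    The JVM raises "GC overhead limit exceeded" when it spends more than
    roughly 98% of total time in garbage collection while recovering less
    than 2% of the heap. It almost always means the live working set no
    longer fits in the configured heap (or something is leaking), so the
    collector thrashes instead of making progress. The usual first step is
    to raise the maximum heap; disabling the limit check only masks the
    symptom. Illustrative JVM options (the 4g value is an assumption, not
    taken from this report):

        -Xmx4g                     # raise the maximum heap
        -XX:-UseGCOverheadLimit    # last resort: disable the overhead check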
  1. OutOfMemoryError in sqlContext.table()

     Stack Overflow | 2 months ago | Ronaldinho
     java.lang.OutOfMemoryError: GC overhead limit exceeded
  2. OutOfMemoryError exception when reading Avro files on GCS

     Stack Overflow | 2 years ago | Jason Chou
     java.lang.OutOfMemoryError: GC overhead limit exceeded
  3. [platform] Unhandled event loop exception

     Eclipse Bugzilla | 2 years ago | error-reports-inbox
     java.lang.OutOfMemoryError: GC overhead limit exceeded
  4. Java Out of Memory Problems | Wowza Support

     wowza.com | 1 year ago
     java.lang.OutOfMemoryError: GC overhead limit exceeded
  5. JDBC MySQL import GC Overhead Limit Exceeded

     Stack Overflow | 3 years ago | BradStevenson
     java.lang.OutOfMemoryError: GC overhead limit exceeded

    Root Cause Analysis

    1. java.lang.OutOfMemoryError: GC overhead limit exceeded

      at org.apache.hadoop.fs.Path.initialize()
    2. Hadoop
      Path.<init>
      1. org.apache.hadoop.fs.Path.initialize(Path.java:203)
      2. org.apache.hadoop.fs.Path.<init>(Path.java:172)
      2 frames
    3. Spark Project SQL
      HadoopFsRelation$$anonfun$21.apply
      1. org.apache.spark.sql.sources.HadoopFsRelation$$anonfun$21.apply(interfaces.scala:908)
      2. org.apache.spark.sql.sources.HadoopFsRelation$$anonfun$21.apply(interfaces.scala:906)
      2 frames
    4. Scala
      ArrayOps$ofRef.map
      1. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
      2. scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
      3. scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
      4. scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
      5. scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
      6. scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
      6 frames
    5. Spark Project SQL
      HadoopFsRelation.cachedLeafStatuses
      1. org.apache.spark.sql.sources.HadoopFsRelation$.listLeafFilesInParallel(interfaces.scala:906)
      2. org.apache.spark.sql.sources.HadoopFsRelation$FileStatusCache.listLeafFiles(interfaces.scala:445)
      3. org.apache.spark.sql.sources.HadoopFsRelation$FileStatusCache.refresh(interfaces.scala:477)
      4. org.apache.spark.sql.sources.HadoopFsRelation.org$apache$spark$sql$sources$HadoopFsRelation$$fileStatusCache$lzycompute(interfaces.scala:489)
      5. org.apache.spark.sql.sources.HadoopFsRelation.org$apache$spark$sql$sources$HadoopFsRelation$$fileStatusCache(interfaces.scala:487)
      6. org.apache.spark.sql.sources.HadoopFsRelation.cachedLeafStatuses(interfaces.scala:494)
      6 frames
    6. org.apache.spark
      ParquetRelation$$anonfun$6.apply
      1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$MetadataCache.refresh(ParquetRelation.scala:398)
      2. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$$metadataCache$lzycompute(ParquetRelation.scala:145)
      3. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.org$apache$spark$sql$execution$datasources$parquet$ParquetRelation$$metadataCache(ParquetRelation.scala:143)
      4. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$6.apply(ParquetRelation.scala:202)
      5. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$6.apply(ParquetRelation.scala:202)
      5 frames
    7. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:120)
      1 frame
    8. org.apache.spark
      ParquetRelation.dataSchema
      1. org.apache.spark.sql.execution.datasources.parquet.ParquetRelation.dataSchema(ParquetRelation.scala:202)
      1 frame
    9. Spark Project SQL
      HadoopFsRelation.schema
      1. org.apache.spark.sql.sources.HadoopFsRelation.schema$lzycompute(interfaces.scala:636)
      2. org.apache.spark.sql.sources.HadoopFsRelation.schema(interfaces.scala:635)
      2 frames
    10. org.apache.spark
      LogicalRelation.<init>
      1. org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
      1 frame
    11. Spark Project Hive
      HiveMetastoreCatalog$$anonfun$12.apply
      1. org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$12.apply(HiveMetastoreCatalog.scala:481)
      2. org.apache.spark.sql.hive.HiveMetastoreCatalog$$anonfun$12.apply(HiveMetastoreCatalog.scala:480)
      2 frames
    12. Scala
      Option.getOrElse
      1. scala.Option.getOrElse(Option.scala:120)
      1 frame
    13. Spark Project Hive
      HiveMetastoreCatalog$ParquetConversions$$anonfun$apply$1.applyOrElse
      1. org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$convertToParquetRelation(HiveMetastoreCatalog.scala:480)
      2. org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$apply$1.applyOrElse(HiveMetastoreCatalog.scala:542)
      3. org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$apply$1.applyOrElse(HiveMetastoreCatalog.scala:522)
      3 frames