java.lang.OutOfMemoryError: Java heap space

Stack Overflow | melody | 4 months ago
  1. Java Out-of-Memory Error in pyspark

    Stack Overflow | 4 months ago | melody
    java.lang.OutOfMemoryError: Java heap space
  2. Spark fails after 6000s because of akka

    spark-dev | 12 months ago | Alexander Pivovarov
    java.lang.OutOfMemoryError: Java heap space
  3. Java Heap not full but still getting error java.lang.OutOfMemoryError: Java heap space

    Stack Overflow | 6 months ago | nikhilgupta86
    java.lang.OutOfMemoryError: Java heap space

    Root Cause Analysis

    1. java.lang.OutOfMemoryError

      Java heap space

      at java.util.Arrays.copyOf()
    2. Java RT
      ObjectOutputStream.writeObject
      1. java.util.Arrays.copyOf(Arrays.java:2271)
      2. java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
      3. java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
      4. java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
      5. java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
      6. java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
      7. java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
      8. java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
      8 frames
    3. Spark
      RDD.mapPartitions
      1. org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
      2. org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
      3. org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
      4. org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
      5. org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
      6. org.apache.spark.SparkContext.clean(SparkContext.scala:2032)
      7. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:703)
      8. org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:702)
      9. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
      10. org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
      11. org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
      12. org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:702)
      12 frames
    4. Spark Project ML Library
      PythonMLLibAPI.trainRandomForestModel
      1. org.apache.spark.mllib.tree.DecisionTree$.findBestSplits(DecisionTree.scala:625)
      2. org.apache.spark.mllib.tree.RandomForest.run(RandomForest.scala:235)
      3. org.apache.spark.mllib.tree.RandomForest$.trainRegressor(RandomForest.scala:380)
      4. org.apache.spark.mllib.api.python.PythonMLLibAPI.trainRandomForestModel(PythonMLLibAPI.scala:744)
      4 frames
    5. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:606)
      4 frames
    6. Py4J
      AbstractCommand.invokeMethod
      1. py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
      2. py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
      3. py4j.Gateway.invoke(Gateway.java:259)
      4. py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
      4 frames