Searched Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace together with the exception message.

Recommended solutions based on your search

Samebug tips

  1. Via oracle.com by Unknown author

    An easy way to address an OutOfMemoryError in Java is to increase the maximum heap size with the JVM option -Xmx512M; this resolves the error whenever the application's real peak memory demand fits within the new limit (see the sketch after these tips).

  2. Via Stack Overflow by Eugene Yokota

    In Eclipse: go to Run --> Run Configurations, select the project under Maven Build, open the "JRE" tab, and enter -Xmx1024m.

    This should increase the maximum heap for builds/projects run through that configuration. The size above is 1 GB.
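To see what the flag does outside Eclipse, here is a minimal sketch that deliberately exhausts the heap; the class name and allocation size are illustrative, not taken from the solutions above.

    // OomDemo.java -- allocates and retains memory until the heap is exhausted,
    // so the effect of different -Xmx settings is easy to observe.
    import java.util.ArrayList;
    import java.util.List;

    public class OomDemo {
        public static void main(String[] args) {
            List<byte[]> hog = new ArrayList<>();
            while (true) {
                hog.add(new byte[1024 * 1024]); // retain one more 1 MB block each iteration
            }
        }
    }

Compile and run it with different heap limits:

    javac OomDemo.java
    java -Xmx64m OomDemo     # fails almost immediately with java.lang.OutOfMemoryError: Java heap space
    java -Xmx1024m OomDemo   # survives roughly 16x longer before the same error

Note that raising -Xmx only helps when the application's true peak demand fits under the new limit; a program that retains memory without bound, like this demo, will fail at any setting.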

Solutions on the web

via spark-user by leosandylh@gmail.com, 2 years ago
Java heap space
via apache.org by Unknown author, 2 years ago
via Google Groups by Kim Trang Le, 1 year ago
via Google Groups by Kim Trang Le, 2 years ago
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2271)
	at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
	at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
	at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
	at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
	at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
	at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
	at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
	at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
	at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
	at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
	at org.apache.spark.SparkContext.clean(SparkContext.scala:2032)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:703)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:702)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
	at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:702)
	at org.apache.spark.mllib.tree.DecisionTree$.findBestSplits(DecisionTree.scala:625)
	at org.apache.spark.mllib.tree.RandomForest.run(RandomForest.scala:235)
	at org.apache.spark.mllib.tree.RandomForest$.trainRegressor(RandomForest.scala:380)
	at org.apache.spark.mllib.api.python.PythonMLLibAPI.trainRandomForestModel(PythonMLLibAPI.scala:744)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
	at py4j.Gateway.invoke(Gateway.java:259)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
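This particular trace comes from Spark: the driver runs out of heap while serializing a task closure during RandomForest training, invoked from PySpark through py4j. In that setting the heap to raise is usually the driver's, set at launch rather than with a bare -Xmx. A hedged sketch using spark-submit's standard memory options; the script name and sizes are illustrative:

    spark-submit \
      --driver-memory 4g \
      --executor-memory 2g \
      your_training_script.py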