Solutions on the web

via Apache's JIRA Issue Tracker by Josh Rosen, 1 year ago
java.lang.OutOfMemoryError: Java heap space
    at scala.reflect.ManifestFactory$$anon$10.newArray(Manifest.scala:122)
    at scala.reflect.ManifestFactory$$anon$10.newArray(Manifest.scala:120)
    at org.apache.spark.util.collection.OpenHashSet.rehash(OpenHashSet.scala:231)
    at org.apache.spark.util.collection.OpenHashSet.rehashIfNeeded(OpenHashSet.scala:166)
    at org.apache.spark.util.collection.OpenHashSet.rehashIfNeeded$mcJ$sp(OpenHashSet.scala:164)
    at org.apache.spark.graphx.util.collection.GraphXPrimitiveKeyOpenHashMap$mcJI$sp.changeValue$mcJI$sp(GraphXPrimitiveKeyOpenHashMap.scala:107)
    at org.apache.spark.graphx.impl.EdgePartitionBuilder.toEdgePartition(EdgePartitionBuilder.scala:58)
    at org.apache.spark.graphx.impl.GraphImpl$$anonfun$4.apply(GraphImpl.scala:115)
    at org.apache.spark.graphx.impl.GraphImpl$$anonfun$4.apply(GraphImpl.scala:109)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
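What the trace shows: the executor runs out of heap while GraphX materializes an edge partition. EdgePartitionBuilder.toEdgePartition fills a GraphXPrimitiveKeyOpenHashMap, and when the underlying OpenHashSet passes its load factor, rehash allocates a new, larger backing array (the Manifest.newArray frames at the top). If a single partition holds too many edges, that one allocation can exceed the available heap. A common mitigation is to split the edge input into more, smaller partitions and/or give executors more memory. Below is a minimal sketch of that approach; the input path, partition count, and memory setting are assumptions to adapt to your own job, not values taken from the JIRA issue.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{Edge, Graph}

object GraphBuildExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("GraphBuildExample")
      // Assumption: size this to your cluster; on YARN/standalone this is
      // usually passed at submit time (--executor-memory) instead.
      .set("spark.executor.memory", "8g")

    val sc = new SparkContext(conf)

    // More partitions means fewer edges per EdgePartitionBuilder, so the
    // hash map built in toEdgePartition (and its rehash) stays smaller.
    // The path and partition count here are placeholders.
    val edges = sc.textFile("hdfs:///path/to/edges.txt", minPartitions = 512)
      .map { line =>
        val Array(src, dst) = line.split("\\s+")
        Edge(src.toLong, dst.toLong, 1)
      }

    val graph = Graph.fromEdges(edges, defaultValue = 0)
    println(s"Edges: ${graph.numEdges}")

    sc.stop()
  }
}

Note on why partition size matters here: a rehash grows the backing array to a larger capacity and briefly keeps both the old and new arrays live while re-inserting entries, so the peak per-task allocation is proportional to partition size. Halving the edges per partition roughly halves that peak.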