Java heap space
java.lang.OutOfMemoryError: Java heap space
    at java.util.IdentityHashMap.resize(IdentityHashMap.java:471)
    at java.util.IdentityHashMap.put(IdentityHashMap.java:440)
    at org.apache.spark.util.SizeEstimator$SearchState.enqueue(SizeEstimator.scala:176)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitSingleObject$1.apply(SizeEstimator.scala:224)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitSingleObject$1.apply(SizeEstimator.scala:223)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:223)
    at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:203)
    at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:70)
    at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
    at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
    at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
    at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:285)
    at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:169)
    at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:147)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:798)
    at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:645)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1003)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:99)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:85)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
    at org.hammerlab.guacamole.readsets.rdd.PartitionedRegions$.compute(PartitionedRegions.scala:199)
    at org.hammerlab.guacamole.readsets.rdd.PartitionedRegions$.apply(PartitionedRegions.scala:184)
    at org.hammerlab.guacamole.readsets.rdd.PartitionedRegions$.apply(PartitionedRegions.scala:146)
    at org.hammerlab.guacamole.commands.SomaticJoint$.makeCalls(SomaticJointCaller.scala:166)
    at org.hammerlab.guacamole.commands.SomaticJoint$Caller$.run(SomaticJointCaller.scala:91)
    at org.hammerlab.guacamole.commands.SomaticJoint$Caller$.run(SomaticJointCaller.scala:56)
    at org.hammerlab.guacamole.commands.SparkCommand.run(SparkCommand.scala:12)
    at org.hammerlab.guacamole.commands.Command.run(Command.scala:27)
    at org.hammerlab.guacamole.Main$.main(Main.scala:49)
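The frames above SparkContext.broadcast show the driver's heap running out while Spark's SizeEstimator is measuring the block being broadcast (the partitioned regions built by Guacamole's PartitionedRegions), so a common first step is to give the driver (and, if needed, the executors) more heap. Below is a minimal sketch of how such settings are usually applied, not a fix taken from the original report; the app name and the 8g values are placeholders.

    import org.apache.spark.{SparkConf, SparkContext}

    object BroadcastOomSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("guacamole-somatic-joint")          // placeholder name
          // Executor heap can be set here, before the SparkContext is created.
          .set("spark.executor.memory", "8g")
          // NOTE: in client mode spark.driver.memory cannot be raised from inside
          // the application, because the driver JVM is already running; pass it at
          // launch instead, e.g. spark-submit --driver-memory 8g ...
          .set("spark.driver.memory", "8g")

        val sc = new SparkContext(conf)
        // ... run the job that eventually calls sc.broadcast(...) here ...
        sc.stop()
      }
    }

If raising driver memory is not enough, the other usual lever is shrinking the object being broadcast, since everything passed to sc.broadcast must fit (and be size-estimated) in the driver's heap.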