Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace with the exception message.


Solutions on the web

via DataStax JIRA by Rahul Shukla, 2 years ago
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at scala.collection.immutable.VectorBuilder.<init>(Vector.scala:706)
    at scala.collection.immutable.Vector$.newBuilder(Vector.scala:22)
    at scala.collection.generic.GenericTraversableTemplate$class.genericBuilder(GenericTraversableTemplate.scala:70)
    at scala.collection.AbstractTraversable.genericBuilder(Traversable.scala:104)
    at scala.collection.generic.GenTraversableFactory$GenericCanBuildFrom.apply(GenTraversableFactory.scala:57)
    at scala.collection.generic.GenTraversableFactory$GenericCanBuildFrom.apply(GenTraversableFactory.scala:52)
    at scala.collection.TraversableLike$class.builder$1(TraversableLike.scala:240)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner$$anonfun$partitions$2.apply(CassandraRDDPartitioner.scala:137)
    at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner$$anonfun$partitions$2.apply(CassandraRDDPartitioner.scala:135)
    at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:728)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:727)
    at com.datastax.spark.connector.rdd.partitioner.CassandraRDDPartitioner.partitions(CassandraRDDPartitioner.scala:135)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:120)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1802)
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:979)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.reduce(RDD.scala:961)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.count(CassandraTableScanRDD.scala:247)
    at Hello$.main(Hello.scala:13)
    at Hello.main(Hello.scala)
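The trace shows the Spark driver running out of memory inside CassandraRDDPartitioner.partitions while Hello.scala line 13 runs count() over a Cassandra table: partition planning happens on the driver before any executor work starts. Below is a minimal sketch of that kind of program with one common mitigation. The keyspace/table names, the connection host, and the split-size value are placeholders (the original Hello.scala is not shown), and the spark.cassandra.input.split.size_in_mb setting assumes a connector version that supports that key.

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Hedged sketch, not the reporter's actual code: "my_keyspace",
// "my_table", and the host are hypothetical stand-ins.
object Hello {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Hello")
      .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder
      // Larger input splits mean fewer token-range partitions for the
      // driver to materialize in CassandraRDDPartitioner.partitions,
      // easing the driver-side memory pressure seen in the trace.
      .set("spark.cassandra.input.split.size_in_mb", "256")
    val sc = new SparkContext(conf)
    // count() triggers partition planning on the driver, which is
    // where the trace's OutOfMemoryError is raised.
    val rows = sc.cassandraTable("my_keyspace", "my_table").count()
    println(s"row count: $rows")
    sc.stop()
  }
}

The other common mitigation for a "GC overhead limit exceeded" raised on the driver during partition planning is simply giving the driver more heap, e.g. spark-submit --driver-memory 4g.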