
Solutions on the web

via github.com by Unknown author, 1 year ago
java.lang.ArrayStoreException: scala.collection.mutable.HashSet Serialization trace: shouldSend (org.apache.spark.mllib.recommendation.OutLinkBlock)
via Apache's JIRA Issue Tracker by Gen TANG, 1 year ago
java.lang.ArrayStoreException: scala.collection.mutable.HashSet Serialization trace: shouldSend (org.apache.spark.mllib.recommendation.OutLinkBlock)
via GitHub by tenstriker, 1 year ago
java.lang.IndexOutOfBoundsException: Index: 100, Size: 6 Serialization trace: familyMap (org.apache.hadoop.hbase.client.Put)
via Stack Overflow by pythonic, 5 months ago
java.lang.UnsupportedOperationException Serialization trace: mAlignmentBlocks (htsjdk.samtools.SAMRecord)
via Stack Overflow by bachr, 10 months ago
java.lang.IndexOutOfBoundsException: Index: 48, Size: 8 Serialization trace: familyMap (org.apache.hadoop.hbase.client.Put)
java.lang.ArrayStoreException: scala.collection.mutable.HashSet
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:338)
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:293)
	at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:648)
	at com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:605)
	at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
	at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
	at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:43)
	at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:34)
	at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
	at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:133)
	at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:137)
	at org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$5.apply(CoGroupedRDD.scala:159)
	at org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$5.apply(CoGroupedRDD.scala:158)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
	at org.apache.spark.rdd.CoGroupedRDD.compute(CoGroupedRDD.scala:158)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MappedValuesRDD.compute(MappedValuesRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.FlatMappedValuesRDD.compute(FlatMappedValuesRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
	at org.apache.spark.scheduler.Task.run(Task.scala:54)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
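For background on the exception class itself (this is general JVM behavior, not specific to the solutions listed above): `java.lang.ArrayStoreException` is thrown when code stores an object of an incompatible runtime type into an array. In the trace above it surfaces inside Kryo's `ObjectArraySerializer` while filling an array during shuffle deserialization, because the element read back (a `scala.collection.mutable.HashSet`) did not match the array's component type. A minimal, self-contained Java sketch of the same check:

```java
public class ArrayStoreDemo {
    // Trigger the JVM's array-store check and return the exception message.
    static String store() {
        // An Object[] reference may legally point at a String[] at runtime...
        Object[] arr = new String[2];
        try {
            // ...so storing a non-String is rejected by the JVM at runtime.
            arr[0] = Integer.valueOf(42);
            return "no exception";
        } catch (ArrayStoreException e) {
            // The message is the class name of the offending object.
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println("ArrayStoreException: " + store());
    }
}
```

When Kryo reports this during Spark shuffles, a mismatch between the classes registered (or the library versions present) on the driver and on the executors is a common, though not the only, cause.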