java.lang.NoSuchMethodError: com.google.common.io.ByteSource.concat(Ljava/lang/Iterable;)Lcom/google/common/io/ByteSource;
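
A NoSuchMethodError at runtime (as opposed to a compile-time error) means the code was compiled against one version of a library but is running against another. The JVM descriptor in the message decodes to the static method ByteSource concat(Iterable), which Guava added in release 15.0. Spark has long bundled Guava 14.0.1 on its runtime classpath, while Druid is built against a newer Guava that has this method, so when the Druid-on-Spark indexer runs inside a Spark executor, Spark's older Guava wins the classpath race and the method is simply absent.

The snippet below is a minimal sketch of the failing call, useful for reproducing the error in isolation; the class name ConcatRepro is invented for illustration. It compiles against any Guava 15.0 or later and throws exactly this NoSuchMethodError if an older Guava is substituted at runtime.

    import com.google.common.io.ByteSource;
    import com.google.common.io.Files;
    import java.io.File;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class ConcatRepro {
        public static void main(String[] args) throws Exception {
            File f1 = File.createTempFile("part", ".a");
            File f2 = File.createTempFile("part", ".b");
            Files.write("foo".getBytes(StandardCharsets.UTF_8), f1);
            Files.write("bar".getBytes(StandardCharsets.UTF_8), f2);
            // Files.asByteSource dates back to Guava 14, but
            // ByteSource.concat(Iterable) appeared only in 15.0, so with
            // Guava 14.x on the runtime classpath this is the exact line
            // that throws java.lang.NoSuchMethodError.
            ByteSource combined = ByteSource.concat(
                Arrays.asList(Files.asByteSource(f1), Files.asByteSource(f2)));
            System.out.println(new String(combined.read(), StandardCharsets.UTF_8)); // foobar
        }
    }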

Solutions on the web

via GitHub by drcrallen — three reports filed over roughly a year against the Druid-on-Spark batch indexer, all with the same missing-method signature:
com.google.common.io.ByteSource.concat(Ljava/lang/Iterable;)Lcom/google/common/io/ByteSource;

The full stack trace from those reports:
java.lang.NoSuchMethodError: com.google.common.io.ByteSource.concat(Ljava/lang/Iterable;)Lcom/google/common/io/ByteSource;
at io.druid.segment.data.GenericIndexedWriter.combineStreams(GenericIndexedWriter.java:139)
at io.druid.segment.StringDimensionMergerLegacy.writeValueMetadataToFile(StringDimensionMergerLegacy.java:206)
at io.druid.segment.IndexMerger.makeIndexFiles(IndexMerger.java:696)
at io.druid.segment.IndexMerger.merge(IndexMerger.java:438)
at io.druid.segment.IndexMerger.persist(IndexMerger.java:186)
at io.druid.segment.IndexMerger.persist(IndexMerger.java:152)
at io.druid.indexer.spark.SparkDruidIndexer$$anonfun$13$$anonfun$21.apply(SparkDruidIndexer.scala:293)
at io.druid.indexer.spark.SparkDruidIndexer$$anonfun$13$$anonfun$21.apply(SparkDruidIndexer.scala:288)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:183)
at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:45)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toList(TraversableOnce.scala:294)
at io.druid.indexer.spark.SparkDruidIndexer$$anonfun$13.apply(SparkDruidIndexer.scala:309)
at io.druid.indexer.spark.SparkDruidIndexer$$anonfun$13.apply(SparkDruidIndexer.scala:205)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
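
A quick way to confirm which Guava actually got loaded is to ask the JVM where the ByteSource class came from. The sketch below is a diagnostic aid (the class name WhichGuava is invented); run the same check inside the Spark driver or, better, inside an executor, since the executor classpath is what the indexer sees.

    import com.google.common.io.ByteSource;
    import java.security.CodeSource;

    public class WhichGuava {
        public static void main(String[] args) {
            // Prints the jar that ByteSource was loaded from. If this points
            // at a Spark assembly or a guava-14.x jar rather than the Guava
            // that Druid was built against, the older copy is shadowing it.
            CodeSource src = ByteSource.class.getProtectionDomain().getCodeSource();
            System.out.println(src == null ? "bootstrap classpath" : src.getLocation());
        }
    }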

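As for fixes, the two usual options are to relocate ("shade") Guava inside the application jar so Druid gets a private copy, or to tell Spark to prefer the application's classpath over its own. The sketch below shows the second option; spark.driver.userClassPathFirst and spark.executor.userClassPathFirst are real Spark properties, but they are documented as experimental and can destabilize Spark's own internals, so treat this as a starting point rather than a guaranteed fix. The class name GuavaFirstConf is invented for illustration.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class GuavaFirstConf {
        public static void main(String[] args) {
            // userClassPathFirst makes the driver and executors load the
            // application's jars (and therefore its newer Guava) before the
            // copies bundled with Spark.
            SparkConf conf = new SparkConf()
                .setAppName("druid-spark-indexer")
                .set("spark.driver.userClassPathFirst", "true")
                .set("spark.executor.userClassPathFirst", "true");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // ... submit the indexing job as usual ...
            }
        }
    }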