
Recommended solutions based on your search

Solutions on the web

via GitHub by samelamin, 1 year ago
com.google.common.base.Splitter.splitToList(Ljava/lang/CharSequence;)Ljava/util/List;

via GitHub by samelamin, 1 year ago
com.google.common.base.Splitter.splitToList(Ljava/lang/CharSequence;)Ljava/util/List; and the stack trace

via GitHub by pranay29, 1 month ago
com.google.common.base.Splitter.splitToList(Ljava/lang/CharSequence;)Ljava/util/List;

via Stack Overflow by Abhis, 1 year ago
com.google.common.base.Splitter.splitToList(Ljava/lang/CharSequence;)Ljava/util/List;

via GitHub by martinstuder, 2 months ago
com.google.common.base.Splitter.splitToList(Ljava/lang/CharSequence;)Ljava/util/List;
java.lang.NoSuchMethodError: com.google.common.base.Splitter.splitToList(Ljava/lang/CharSequence;)Ljava/util/List;
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase$ParentTimestampUpdateIncludePredicate.create(GoogleHadoopFileSystemBase.java:572)
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.createOptionsBuilderFromConfig(GoogleHadoopFileSystemBase.java:1890)
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1587)
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:793)
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:756)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
	at com.google.cloud.hadoop.io.bigquery.AbstractBigQueryInputFormat.extractExportPathRoot(AbstractBigQueryInputFormat.java:247)
	at com.google.cloud.hadoop.io.bigquery.AbstractBigQueryInputFormat.getSplits(AbstractBigQueryInputFormat.java:107)
	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:113)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
	at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
	at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1328)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.first(RDD.scala:1327)
	at com.spotify.spark.bigquery.package$BigQuerySQLContext.bigQueryTable(package.scala:112)
	at com.spotify.spark.bigquery.package$BigQuerySQLContext.bigQuerySelect(package.scala:93)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
	at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
	at $iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
	at $iwC$$iwC$$iwC.<init>(<console>:45)
	at $iwC$$iwC.<init>(<console>:47)
	at $iwC.<init>(<console>:49)
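A NoSuchMethodError on Splitter.splitToList is usually a Guava version conflict rather than a bug in the connector: splitToList(CharSequence) only exists from Guava 15.0 onwards, while Hadoop 2.x distributions bundle an older Guava (11.0.2) that can shadow the version the GCS/BigQuery connector was compiled against. As a rough illustration (not taken from any of the threads above), the call that fails here boils down to something like the following, which resolves only against Guava 15.0 or newer:

```java
import com.google.common.base.Splitter;

import java.util.List;

public class SplitterCheck {
    public static void main(String[] args) {
        // Splitter.splitToList(CharSequence) was added in Guava 15.0.
        // If an older Guava (for example Hadoop's bundled 11.0.2) wins on the
        // runtime classpath, this exact call throws the NoSuchMethodError shown
        // in the stack trace: the code compiles against a newer Guava but links
        // against the older one at runtime.
        List<String> parts = Splitter.on(',')
                .omitEmptyStrings()
                .splitToList("gs://bucket/a,gs://bucket/b");
        System.out.println(parts);
    }
}
```

Fixes commonly suggested in threads like these are to place a Guava 15+ jar ahead of Hadoop's on the Spark classpath (for example via spark.driver.userClassPathFirst and spark.executor.userClassPathFirst) or to use a build of the connector that shades its Guava dependency; which one applies depends on your cluster setup.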