java.lang.NoSuchMethodError

There are no available Samebug tips for this exception. Do you have an idea how to solve this issue? A short tip would help users who saw this issue last week.
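One tip that fits this trace: a `NoSuchMethodError` on `com.google.common.util.concurrent.Futures.withFallback` almost always means the Guava on the runtime classpath is not the one the DataStax driver was compiled against — either an older copy dragged in by another dependency, or Guava 20+, which removed the deprecated `withFallback`. A useful first diagnostic is to print which jar the offending class is actually loaded from. A minimal stdlib-only sketch (the class name `WhichJar` is a hypothetical helper, not part of any library):

```java
import java.net.URL;

public class WhichJar {
    /** Returns the classpath location a class is loaded from, or a note if it is absent. */
    static String locate(String className) {
        String resource = className.replace('.', '/') + ".class";
        URL url = WhichJar.class.getClassLoader().getResource(resource);
        return url == null ? className + " is not on the classpath" : url.toString();
    }

    public static void main(String[] args) {
        // The class from the NoSuchMethodError above; the jar path printed
        // tells you which Guava copy actually won at class-loading time.
        System.out.println(locate("com.google.common.util.concurrent.Futures"));
    }
}
```

If the printed jar is not the Guava version your driver expects (the 2.1.x driver line was built against 14.0.1), align the versions or exclude the conflicting transitive dependency.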

  • {code}
SLF4J: Found binding in [jar:file:/usr/java/apache-tomcat-8.0.33/webapps/ROOT/WEB-INF/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/java/apache-tomcat-8.0.33/webapps/ROOT/WEB-INF/lib/logback-classic-1.0.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/java/apache-tomcat-8.0.33/webapps/ROOT/WEB-INF/lib/tika-app-1.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14-Sep-2016 17:52:41.429 INFO [localhost-startStop-1] dk.kb.webdanica.webapp.Environment.<init> Connected to NetarchiveSuite system with environmentname: WEBDANICA
INFO Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
14-Sep-2016 17:52:42.333 SEVERE [localhost-startStop-1] dk.kb.webdanica.webapp.Servlet.init dk.kb.webdanica.webapp.Servlet failed to initialize properly.
java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
    at com.datastax.driver.core.Connection.initAsync(Connection.java:178)
    at com.datastax.driver.core.Connection$Factory.open(Connection.java:739)
    at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:253)
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:201)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
    at com.datastax.driver.core.Cluster.init(Cluster.java:163)
    at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:284)
    at dk.kb.webdanica.datamodel.Cassandra.<init>(Cassandra.java:28)
    at dk.kb.webdanica.datamodel.CassandraSeedDAO.<init>(CassandraSeedDAO.java:46)
    at dk.kb.webdanica.datamodel.CassandraSeedDAO.getInstance(CassandraSeedDAO.java:39)
    at dk.kb.webdanica.webapp.Configuration.initDb(Configuration.java:85)
    at dk.kb.webdanica.webapp.Configuration.<init>(Configuration.java:81)
    at dk.kb.webdanica.webapp.Configuration.getInstance(Configuration.java:40)
    at dk.kb.webdanica.webapp.Environment.<init>(Environment.java:274)
    at dk.kb.webdanica.webapp.Servlet.init(Servlet.java:54)
    at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1238)
    at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1151)
    at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1038)
    at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4996)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5285)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:940)
    at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1816)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}
    by Søren Vejrup Carlsen
  • Cassandra and Java EE
    via Stack Overflow by kinkajou
  • I'm getting an exception when using Guava dependencies other than 14.0.1. I've tried both shaded and unshaded versions in the pom:
{code:xml}
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-core</artifactId>
  <version>2.1.6</version>
  <classifier>shaded</classifier>
  <exclusions>
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.datastax.cassandra</groupId>
  <artifactId>cassandra-driver-mapping</artifactId>
  <version>2.1.6</version>
  <exclusions>
    <exclusion>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}
{code:java}
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.transform(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/AsyncFunction;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
    at com.datastax.driver.core.Connection.initAsync(Connection.java:172)
    at com.datastax.driver.core.Connection$Factory.open(Connection.java:721)
    at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:244)
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:190)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
    at com.datastax.driver.core.Cluster.init(Cluster.java:158)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
{code}
    by Lucian Mocanu
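Since the driver in the pom above is compiled against the three-argument `Futures.transform`/`Futures.withFallback` overloads, you can check at runtime whether the Guava that actually loaded still provides them. A hedged, stdlib-only sketch (`GuavaProbe` is a hypothetical helper; it degrades gracefully if Guava is not on the classpath at all):

```java
import java.lang.reflect.Method;

public class GuavaProbe {
    /** True if the named class has a public method with the given name and parameter count. */
    static boolean hasMethod(String className, String methodName, int arity) {
        try {
            for (Method m : Class.forName(className).getMethods()) {
                if (m.getName().equals(methodName) && m.getParameterTypes().length == arity) {
                    return true;
                }
            }
        } catch (ClassNotFoundException e) {
            // Guava (or whatever class was asked for) is not on the classpath
        }
        return false;
    }

    public static void main(String[] args) {
        String futures = "com.google.common.util.concurrent.Futures";
        // withFallback(ListenableFuture, FutureFallback, Executor) was removed
        // in Guava 20; a "false" here with the 2.1.x driver explains the error.
        System.out.println("withFallback/3 present: " + hasMethod(futures, "withFallback", 3));
        System.out.println("transform/3 present:    " + hasMethod(futures, "transform", 3));
    }
}
```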
  • GitHub comment 88#191263156
    via GitHub by pklemenkov
  • Hi, I'm trying out connecting to Cassandra and reading/writing data. I'm able to connect (e.g. create an RDD pointing to a Cassandra table), but retrieving the data fails. I've created a fat jar using this in my sbt:
{code}
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-hive" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.6.0" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M1"
)

// META-INF discarding
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) => {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}}
{code}
When I launch a spark-shell session like this, I am able to connect to a table and count rows:
{code}
/opt/spark/current/bin/spark-shell --master local[2] \
  --conf "spark.cassandra.connection.host=[cassandra-host]" \
  --conf "spark.cassandra.auth.username=[my username]" \
  --conf "spark.cassandra.auth.password=[my pwd]" \
  --jars fat-jar-assembly-1.0.jar
Welcome to Spark version 1.6.0
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Spark context available as sc. SQL context available as sqlContext.

scala> import com.datastax.spark.connector._
import com.datastax.spark.connector._

scala> val personRDD = sc.cassandraTable("test", "person")
personRDD: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15

scala> println(personRDD.count)
16/04/15 12:43:41 WARN ReplicationStrategy$NetworkTopologyStrategy: Error while computing token map for keyspace test with datacenter ***: could not achieve replication factor 2 (found 0 replicas only), check your keyspace replication settings.
2
{code}
When I launch it without --master local[2], it doesn't work:
{code}
/opt/spark/current/bin/spark-shell \
  --conf "spark.cassandra.connection.host=[cassandra-host]" \
  --conf "spark.cassandra.auth.username=[my username]" \
  --conf "spark.cassandra.auth.password=[my pwd]" \
  --jars fat-jar-assembly-1.0.jar
Welcome to Spark version 1.6.0
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
spark.driver.cores is set but does not apply in client mode.
Spark context available as sc. SQL context available as sqlContext.

scala> import com.datastax.spark.connector._
import com.datastax.spark.connector._

scala> val message = sc.cassandraTable("test", "person")
message: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15

scala> println(message.count)
16/04/14 14:16:04 WARN ReplicationStrategy$NetworkTopologyStrategy: Error while computing token map for keyspace test with datacenter ****: could not achieve replication factor 2 (found 0 replicas only), check your keyspace replication settings.
[Stage 0:> (0 + 2) / 2]
16/04/14 14:16:09 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, [spark-node-on-yarn]): java.io.IOException: Failed to open native connection to Cassandra at {**.**.246, **.**.10}:9042
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
    at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
    at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
    at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
    at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:218)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
    at com.datastax.driver.core.Connection.initAsync(Connection.java:177)
    at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
    at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
    at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
    at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
    ... 14 more
16/04/14 14:16:21 WARN TaskSetManager: Lost task 0.2 in stage 0.0 (TID 4, [spark-node-on-yarn]): java.io.IOException: Failed to open native connection to Cassandra at {**.**.246, **.**.10}:9042
    (same stack trace and cause as above)
{code}
But the --master local[2] 'hack' makes Spark run only on the local node. It's no longer distributed, so it won't work with any real data.
    by Ben Teeuwen
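A common thread in the reports above is two Guava copies competing on one classpath (a webapp's `WEB-INF/lib`, a fat jar plus the Spark/Hadoop runtime, etc.), with the wrong one winning. Listing every classpath entry that provides the class makes the conflict visible. A stdlib-only sketch (`DuplicateClassFinder` is a hypothetical helper name):

```java
import java.io.IOException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

public class DuplicateClassFinder {
    /** Every classpath location providing the given class; the first listed wins at load time. */
    static List<String> locations(String className) {
        String resource = className.replace('.', '/') + ".class";
        List<String> found = new ArrayList<>();
        try {
            Enumeration<URL> urls =
                    DuplicateClassFinder.class.getClassLoader().getResources(resource);
            while (urls.hasMoreElements()) {
                found.add(urls.nextElement().toString());
            }
        } catch (IOException e) {
            // I/O error while scanning the classpath; return whatever was found
        }
        return found;
    }

    public static void main(String[] args) {
        // More than one line of output means two Guava copies are fighting,
        // and the first one is what the Cassandra driver will actually see.
        for (String loc : locations("com.google.common.util.concurrent.Futures")) {
            System.out.println(loc);
        }
    }
}
```

Once the duplicate is identified, the fix is usually to exclude the stray copy (Maven `<exclusions>`, sbt `excludeAll`) or to use the driver's shaded artifact so its Guava cannot clash with the runtime's.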
