java.io.IOException: Failed to open native connection to Cassandra at {**.**.246, **.**.10}:9042

DataStax JIRA | Ben Teeuwen | 1 year ago
  1. 0

    Hi, I'm trying out connecting to Cassandra and reading/writing data. I'm able to connect (e.g. create an RDD pointing to a Cassandra table), but when I retrieve the data it fails. I've created a fat jar using this in my sbt:

    {code}
    libraryDependencies ++= Seq(
      "org.apache.spark"  %% "spark-core"                % "1.6.0" % "provided",
      "org.apache.spark"  %% "spark-sql"                 % "1.6.0" % "provided",
      "org.apache.spark"  %% "spark-hive"                % "1.6.0" % "provided",
      "org.apache.spark"  %% "spark-streaming"           % "1.6.0" % "provided",
      "org.apache.spark"  %% "spark-mllib"               % "1.6.0" % "provided",
      "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M1"
    )

    // META-INF discarding
    mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) => {
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard
      case x                             => MergeStrategy.first
    }}
    {code}

    When I launch a spark-shell session like this, I am able to connect to a table and count rows:

    {code}
    /opt/spark/current/bin/spark-shell --master local[2] \
      --conf "spark.cassandra.connection.host=[cassandra-host]" \
      --conf "spark.cassandra.auth.username=[my username]" \
      --conf "spark.cassandra.auth.password=[my pwd]" \
      --jars fat-jar-assembly-1.0.jar

    Welcome to Spark version 1.6.0
    Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
    Type in expressions to have them evaluated.
    Type :help for more information.
    Spark context available as sc.
    SQL context available as sqlContext.

    scala> import com.datastax.spark.connector._
    import com.datastax.spark.connector._

    scala> val personRDD = sc.cassandraTable("test", "person");
    personRDD: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15

    scala> println(personRDD.count)
    16/04/15 12:43:41 WARN ReplicationStrategy$NetworkTopologyStrategy: Error while computing token map for keyspace test with datacenter ***: could not achieve replication factor 2 (found 0 replicas only), check your keyspace replication settings.
    2
    {code}

    When I launch it without --master local[2], it doesn't work:

    {code}
    /opt/spark/current/bin/spark-shell \
      --conf "spark.cassandra.connection.host=[cassandra-host]" \
      --conf "spark.cassandra.auth.username=[my username]" \
      --conf "spark.cassandra.auth.password=[my pwd]" \
      --jars fat-jar-assembly-1.0.jar

    Welcome to Spark version 1.6.0
    Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
    Type in expressions to have them evaluated.
    Type :help for more information.
    spark.driver.cores is set but does not apply in client mode.
    Spark context available as sc.
    SQL context available as sqlContext.

    scala> import com.datastax.spark.connector._
    import com.datastax.spark.connector._

    scala> val message = sc.cassandraTable("test", "person");
    message: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15

    scala> println(message.count)
    16/04/14 14:16:04 WARN ReplicationStrategy$NetworkTopologyStrategy: Error while computing token map for keyspace test with datacenter ****: could not achieve replication factor 2 (found 0 replicas only), check your keyspace replication settings.
    [Stage 0:> (0 + 2) / 2]16/04/14 14:16:09 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, [spark-node-on-yarn]): java.io.IOException: Failed to open native connection to Cassandra at {**.**.246, **.**.10}:9042
        at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:162)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
        at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
        at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
        at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
        at com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:218)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
        at com.datastax.driver.core.Connection.initAsync(Connection.java:177)
        at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
        at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
        at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
        at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
        at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
        at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
        at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
        ... 14 more
    16/04/14 14:16:21 WARN TaskSetManager: Lost task 0.2 in stage 0.0 (TID 4, [spark-node-on-yarn]): java.io.IOException: Failed to open native connection to Cassandra at {**.**.246, **.**.10}:9042
        (same stack trace and NoSuchMethodError cause as above)
    {code}

    But that 'hack' (--master local[2]) makes Spark run only on the local node, so it's no longer distributed and won't work with any real data.
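
    The root cause line in the trace, java.lang.NoSuchMethodError on com.google.common.util.concurrent.Futures.withFallback, is the classic symptom of a Guava version conflict: on YARN the executors pick up Hadoop's old Guava (11.x, which predates withFallback) ahead of the newer Guava (16.0.1+) that the Cassandra Java driver needs. A common fix is to shade (relocate) Guava inside the fat jar so the driver always binds to its own copy. A minimal sketch for the sbt build above, assuming sbt-assembly 0.14.x or later (which provides ShadeRule); the shaded package name is arbitrary:

    {code}
    // build.sbt (sketch) -- relocate Guava classes inside the assembly so that
    // Hadoop/YARN's older Guava cannot shadow the copy the Cassandra driver needs.
    assemblyShadeRules in assembly := Seq(
      ShadeRule.rename("com.google.common.**" -> "shaded.guava.@1").inAll
    )
    {code}

    An alternative workaround is launching with --conf spark.executor.userClassPathFirst=true (and the driver equivalent) so the jars passed via --jars win over the cluster's classpath, though that flag is marked experimental and can surface other conflicts.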

    DataStax JIRA | 1 year ago | Ben Teeuwen
    java.io.IOException: Failed to open native connection to Cassandra at {**.**.246, **.**.10}:9042
  3. 0

    GitHub comment 21#211386632

    GitHub | 1 year ago | webstergd
    java.io.IOException: Failed to open native connection to Cassandra at {10.0.4.80, 10.0.4.81, 10.0.4.82}:9042
  5. 0

    Spark 1.3 and Cassandra 3.0 problems with guava

    Stack Overflow | 1 year ago | Dragan Milcevski
    java.io.IOException: Failed to open native connection to Cassandra at {139.19.52.111}:9042
  6. 0

    GitHub comment 88#191263156

    GitHub | 1 year ago | pklemenkov
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 7, lin00.profile.rambler.ru): java.io.IOException: Failed to open native connection to Cassandra at {10.9.5.198}:9042


    Root Cause Analysis

    1. java.lang.NoSuchMethodError

      com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;

      at com.datastax.driver.core.Connection.initAsync()
    2. DataStax Java Driver for Apache Cassandra - Core
      Cluster.getMetadata
      1. com.datastax.driver.core.Connection.initAsync(Connection.java:177)
      2. com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
      3. com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
      4. com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
      5. com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
      6. com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
      7. com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
      7 frames
    3. spark-cassandra-connector
      CassandraTableScanRDD.compute
      1. com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:155)
      2. com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
      3. com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:148)
      4. com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
      5. com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
      6. com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
      7. com.datastax.spark.connector.rdd.CassandraTableScanRDD.compute(CassandraTableScanRDD.scala:218)
      7 frames
    4. Spark
      Executor$TaskRunner.run
      1. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
      2. org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
      3. org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      4. org.apache.spark.scheduler.Task.run(Task.scala:89)
      5. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      5 frames
    5. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:745)
      3 frames
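
    When a NoSuchMethodError like the one above appears, a quick check is to ask the classloader where the offending class was actually loaded from, and whether the method the caller expects exists on it. A self-contained sketch (plain Scala; the class and method names in main are stand-ins for demonstration — on a real cluster you would probe com.google.common.util.concurrent.Futures for withFallback inside a Spark task, so it runs on the executor JVM):

    ```scala
    // Diagnostic sketch: report which jar a class came from and whether it
    // declares a given public method. If the Cassandra driver throws
    // NoSuchMethodError on Futures.withFallback, this shows which Guava won.
    object ClasspathCheck {
      /** Jar or directory a class was loaded from; None for bootstrap classes. */
      def locationOf(className: String): Option[String] = {
        val cls = Class.forName(className)
        Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation.toString)
      }

      /** True if `className` declares a public method named `method`. */
      def hasMethod(className: String, method: String): Boolean =
        Class.forName(className).getMethods.exists(_.getName == method)

      def main(args: Array[String]): Unit = {
        // Stand-in probes; substitute the Guava Futures class on a cluster.
        println(locationOf("scala.Option"))                   // path to scala-library jar
        println(hasMethod("java.lang.String", "toUpperCase")) // true
      }
    }
    ```

    Running the same probe on the driver and inside an executor task and comparing the two jar paths pinpoints which side of the job has the stale Guava.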