java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123

Apache's JIRA Issue Tracker | Evan Yu | 2 years ago
  1. val sc = new SparkContext(conf)
     sc.addJar("J:\mysql-connector-java-5.1.35.jar")
     val df = sqlContext.jdbc("jdbc:mysql://localhost:3000/test_db?user=abc&password=123", "table1")
     df.show()

     Following error:

     2015-04-14 17:04:39,541 [task-result-getter-0] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, dev1.test.dc2.com): java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123
        at java.sql.DriverManager.getConnection(DriverManager.java:689)
        at java.sql.DriverManager.getConnection(DriverManager.java:270)
        at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:158)
        at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:150)
        at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:317)
        at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:309)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

     Apache's JIRA Issue Tracker | 2 years ago | Evan Yu
     java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123
  2. java.sql.SQLException: No suitable driver found when loading DataFrame into Spark SQL

     Stack Overflow | 2 years ago | Wildfire
     java.sql.SQLException: No suitable driver found for jdbc:mysql://<hostname>:3306/test
  3. Class not found error when running Phoenix Spark program in cluster

     Stack Overflow | 1 year ago | Satya
     java.sql.SQLException: No suitable driver found for jdbc:phoenix:tpar019.test.com:2181
  4. Apache Spark Mysql connection suitable jdbc driver not found

     Stack Overflow | 1 year ago | Shawon91Sust
     java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost/productsearch_userinfo?user=spark&password=spark123
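
All of the reports above share one failure mode: java.sql.DriverManager has no registered driver that accepts the jdbc:mysql URL, so the lookup itself fails before any network connection is attempted. A minimal sketch reproducing the message on a plain JVM with no MySQL connector on the classpath (URL taken from the first report; no MySQL server is needed):

```scala
import java.sql.{DriverManager, SQLException}

object NoSuitableDriverRepro {
  def main(args: Array[String]): Unit = {
    try {
      // No driver for the jdbc:mysql scheme is on the classpath, so
      // DriverManager fails the lookup without attempting any network I/O.
      DriverManager.getConnection("jdbc:mysql://localhost:3000/test_db")
    } catch {
      case e: SQLException =>
        println(e.getMessage) // No suitable driver found for jdbc:mysql://localhost:3000/test_db
    }
  }
}
```

The same message with the same URL is what surfaces in every entry above, regardless of whether the caller is a Spark executor or a standalone program.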

Root Cause Analysis

  1. java.sql.SQLException

    No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123

    at java.sql.DriverManager.getConnection()
  2. Java RT
    DriverManager.getConnection
    1. java.sql.DriverManager.getConnection(DriverManager.java:689)
    2. java.sql.DriverManager.getConnection(DriverManager.java:270)
    2 frames
  3. org.apache.spark
    JDBCRDD.compute
    1. org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:158)
    2. org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:150)
    3. org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:317)
    4. org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:309)
    4 frames
  4. Spark
    Executor$TaskRunner.run
    1. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    2. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    3. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    4. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    5. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    6. org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    7. org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    8. org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    9. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    10. org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    11. org.apache.spark.scheduler.Task.run(Task.scala:64)
    12. org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    12 frames
  5. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
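
The frame grouping above places the failure inside JDBCRDD.compute, i.e. DriverManager.getConnection is being called on an executor JVM. sc.addJar ships the connector jar to executors, but that alone does not guarantee the driver class is registered with java.sql.DriverManager in the JVM that makes the call: DriverManager only matches URLs against drivers that have actually been registered with it. The mechanics can be illustrated with a hypothetical stub driver standing in for com.mysql.jdbc.Driver; everything here apart from java.sql itself is made up for the sketch:

```scala
import java.lang.reflect.{InvocationHandler, Method, Proxy}
import java.sql.{Connection, Driver, DriverManager, DriverPropertyInfo}
import java.util.Properties
import java.util.logging.Logger

// Hypothetical stand-in for a real JDBC driver such as com.mysql.jdbc.Driver.
class StubMysqlDriver extends Driver {
  override def acceptsURL(url: String): Boolean = url.startsWith("jdbc:mysql:")

  override def connect(url: String, info: Properties): Connection =
    if (!acceptsURL(url)) null // JDBC contract: return null for URLs we don't handle
    else
      Proxy.newProxyInstance(
        getClass.getClassLoader,
        Array[Class[_]](classOf[Connection]),
        new InvocationHandler {
          def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef = null
        }
      ).asInstanceOf[Connection]

  override def getPropertyInfo(url: String, info: Properties): Array[DriverPropertyInfo] =
    Array.empty[DriverPropertyInfo]
  override def getMajorVersion: Int = 1
  override def getMinorVersion: Int = 0
  override def jdbcCompliant(): Boolean = false
  override def getParentLogger: Logger = null
}

object RegisterDriverDemo {
  def main(args: Array[String]): Unit = {
    // Registration is what getConnection keys on; with a real driver this is
    // what loading the driver class (e.g. Class.forName) triggers.
    DriverManager.registerDriver(new StubMysqlDriver)
    val conn = DriverManager.getConnection("jdbc:mysql://localhost:3000/test_db")
    println(conn != null) // true: the URL now matches a registered driver
  }
}
```

In a real Spark job the equivalent is making sure the connector is on the executor classpath (e.g. via --jars or spark.executor.extraClassPath) and that the driver class gets loaded in the executor JVM before getConnection runs, rather than only shipping the jar with sc.addJar.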