java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123


Solutions on the web

via Apache's JIRA Issue Tracker by Evan Yu, 1 year ago
No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123
via Stack Overflow by Satya, 2 years ago
No suitable driver found for jdbc:phoenix:tpar019.test.com:2181
via Stack Overflow by Shawon91Sust, 2 years ago
No suitable driver found for jdbc:mysql://localhost/productsearch_userinfo?user=spark&password=spark123
via Stack Overflow by Wildfire, 1 year ago
No suitable driver found for jdbc:mysql://<hostname>:3306/test
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3000/test_db?user=abc&password=123
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:158)
at org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConnector$1.apply(JDBCRDD.scala:150)
at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.&lt;init&gt;(JDBCRDD.scala:317)
at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:309)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
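
Write tip

The exception is thrown inside DriverManager.getConnection on a Spark executor, which means no registered JDBC driver on that executor accepts the jdbc:mysql:// URL — typically because the MySQL Connector/J JAR is missing from the executor classpath and/or the driver class is never named. Below is a minimal Scala sketch of one common fix, assuming a Spark 2.x+ SparkSession API (the trace above is from an older Spark release, where the same idea applies). The driver class com.mysql.cj.jdbc.Driver and the table name some_table are assumptions for illustration, not taken from the original report.

    import org.apache.spark.sql.SparkSession

    object JdbcReadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("jdbc-read-sketch")
          .master("local[*]")
          .getOrCreate()

        // Naming the driver class explicitly lets Spark register it before
        // DriverManager.getConnection runs on the executors, which is where
        // "No suitable driver found" is raised in the trace above.
        val df = spark.read
          .format("jdbc")
          .option("url", "jdbc:mysql://localhost:3000/test_db")
          .option("dbtable", "some_table")              // hypothetical table name
          .option("user", "abc")
          .option("password", "123")
          .option("driver", "com.mysql.cj.jdbc.Driver") // Connector/J 8.x; older releases use com.mysql.jdbc.Driver
          .load()

        df.show()
        spark.stop()
      }
    }

When running on a cluster, the connector JAR also has to reach the executors, for example by submitting with spark-submit --jars mysql-connector-java-<version>.jar so that the driver class is actually loadable where the connection is opened.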
