org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 5, 10.8.137.12): java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)

via GitHub by schon, 1 year ago

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 5, 10.8.137.12): java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
    at scalikejdbc.ConnectionPool$$anonfun$get$1.apply(ConnectionPool.scala:58)
    at scalikejdbc.ConnectionPool$$anonfun$get$1.apply(ConnectionPool.scala:56)
    at scala.Option.getOrElse(Option.scala:120)
    at scalikejdbc.ConnectionPool$.get(ConnectionPool.scala:56)
    at scalikejdbc.ConnectionPool$.apply(ConnectionPool.scala:47)
    at scalikejdbc.DB$.connectionPool(DB.scala:150)
    at scalikejdbc.DB$.localTx(DB.scala:256)
    at io.prediction.data.storage.jdbc.JDBCPEvents$$anonfun$write$1.apply(JDBCPEvents.scala:141)
    at io.prediction.data.storage.jdbc.JDBCPEvents$$anonfun$write$1.apply(JDBCPEvents.scala:124)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:898)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:898)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
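
The trace shows DB.localTx being invoked inside an RDD.foreachPartition closure, i.e. in an executor JVM. scalikejdbc's ConnectionPool is a singleton local to each JVM, so a pool initialized on the Spark driver never exists on the executors, and ConnectionPool.get fails with this IllegalStateException. One common fix is to initialize the pool inside the partition closure itself. A minimal sketch, assuming a PostgreSQL event store; the driver class, JDBC URL, credentials, and the events table are illustrative placeholders, not taken from the report above:

    import org.apache.spark.rdd.RDD
    import scalikejdbc._

    object EventWriter {
      // Hypothetical connection settings: substitute the JDBC driver, URL,
      // and credentials that your event store actually uses.
      val jdbcDriver = "org.postgresql.Driver"
      val jdbcUrl    = "jdbc:postgresql://localhost:5432/pio"
      val jdbcUser   = "pio"
      val jdbcPass   = "pio"

      def write(events: RDD[String]): Unit =
        events.foreachPartition { partition =>
          // This closure runs on an executor, where a pool created on the
          // driver does not exist; initialize the JVM-local singleton here.
          if (!ConnectionPool.isInitialized()) {
            Class.forName(jdbcDriver)
            ConnectionPool.singleton(jdbcUrl, jdbcUser, jdbcPass)
          }
          DB.localTx { implicit session =>
            partition.foreach { e =>
              // Hypothetical schema; write each record inside one transaction.
              sql"insert into events (body) values ($e)".update.apply()
            }
          }
        }
    }

The isInitialized() guard keeps the singleton from being re-created for every partition that lands on the same executor; individual connections are still borrowed from the pool per transaction.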
