Solutions on the web

via spark-dev by Alexander Pivovarov, 2 years ago
Job cancelled because SparkContext was shut down
via Stack Overflow by Aleksander Zendel, 1 year ago
Job cancelled because SparkContext was shut down
via GitHub by goldjay1231, 2 years ago
Job cancelled because SparkContext was shut down
via Stack Overflow by Newbie, 1 year ago
Job 2 cancelled because SparkContext was shut down
org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:703)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:702)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
    at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:702)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1514)
    at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1438)
    at org.apache.spark.SparkContext$$anonfun$stop$7.apply$mcV$sp(SparkContext.scala:1724)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1185)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1723)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:146)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1914)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1124)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1065)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1065)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1065)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:989)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:965)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:965)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:965)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:897)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:897)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:897)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:896)
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1430)
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1409)
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1409)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1409)
    at com.radius.distiller.components.CondenseRecords.saveValidationQa(CondenseRecords.scala:65)
    at com.radius.distiller.Distiller.runCondenseRecords(Distiller.scala:49)
    at com.radius.distiller.Execute$.run(Execute.scala:56)
    at com.radius.distiller.Execute$.main(Execute.scala:33)
    at com.radius.distiller.Execute.main(Execute.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
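Reading the trace from the bottom up: the application (com.radius.distiller.Execute) was writing output with RDD.saveAsTextFile when the job failed, and the frames through YarnClientSchedulerBackend$MonitorThread.run show that it was the YARN monitor thread that called SparkContext.stop(). That usually means YARN reported the application as finished or killed (for example, the ApplicationMaster exceeded its memory limits or the application was preempted), after which the DAGScheduler cancels every active job with this exception. The same failure mode can be reproduced locally. Below is a minimal, self-contained sketch, not the original distiller code; the object name, output path, and timings are made up for illustration, and the race is timing-dependent:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical repro: stopping the SparkContext from another thread while an
// action is running makes the DAGScheduler fail the job with
// "Job cancelled because SparkContext was shut down".
object ShutdownRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("shutdown-repro").setMaster("local[2]"))

    // Stand-in for the YarnClientSchedulerBackend monitor thread, which stops
    // the context once YARN reports the application as finished or killed.
    new Thread(new Runnable {
      override def run(): Unit = {
        Thread.sleep(200) // let the action below start first (timing-dependent)
        sc.stop()
      }
    }).start()

    // A deliberately slow action; if the context is stopped while it runs,
    // this throws org.apache.spark.SparkException:
    // "Job cancelled because SparkContext was shut down".
    sc.parallelize(1 to 1000, 4)
      .map { i => Thread.sleep(10); i.toString }
      .saveAsTextFile("/tmp/shutdown-repro-out") // illustrative output path
  }
}

In a real YARN deployment the stop() call is not in your own code, so the fix is rarely at the saveAsTextFile call site: check the YARN application and ApplicationMaster logs for why the application ended (memory overhead limits and queue preemption are common culprits).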