org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down

Talend Open Integration Solution | lei ju | 10 months ago
Here are the best solutions we found on the Internet.
  1. Talend Open Integration Solution | 10 months ago | lei ju
     org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down
  2. Spark example Pi fails to run on yarn-client mode
     Unix & Linux | 9 months ago | Henry
     org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
  3. Job cancelled because SparkContext was shut down while saving dataframe as hive table
     Stack Overflow | 6 months ago | vatsal mevada
     org.apache.spark.SparkException: Job 2 cancelled because SparkContext was shut down
  4. SparkContext was shut down on long running as_h2o_frame
     GitHub | 6 months ago | jmuhlenkamp
     org.apache.spark.SparkException: Job 34 cancelled because SparkContext was shut down
  5. SPARK : How to generate s3 file path dynamically using date diff
     Stack Overflow | 5 months ago | Newbie
     org.apache.spark.SparkException: Job 2 cancelled because SparkContext was shut down


Root Cause Analysis

org.apache.spark.SparkException: Job 1 cancelled because SparkContext was shut down
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:806)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:804)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:804)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1658)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1581)
	at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1751)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1230)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1750)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:147)
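
What the trace shows: the YARN client's MonitorThread calls SparkContext.stop(), which stops the DAGScheduler; cleanUpAfterSchedulerStop() then fails every job that is still active with "Job N cancelled because SparkContext was shut down". The sketch below is a minimal local-mode illustration of that failure mode, not code from the original report; the object name and timings are placeholders. Stopping the context from another thread while a job is running raises the same SparkException.

    import org.apache.spark.{SparkConf, SparkContext, SparkException}

    object ShutdownRepro {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("shutdown-repro").setMaster("local[2]"))

        // Stop the context from a separate thread shortly after the job starts,
        // mimicking what YarnClientSchedulerBackend$MonitorThread does when the
        // YARN application terminates underneath a still-running driver.
        new Thread(new Runnable {
          override def run(): Unit = { Thread.sleep(2000); sc.stop() }
        }).start()

        try {
          // A deliberately slow job, so stop() lands while it is still running.
          sc.parallelize(1 to 100, 10).map { i => Thread.sleep(1000); i }.count()
        } catch {
          case e: SparkException =>
            // Expected message: "Job 0 cancelled because SparkContext was shut down"
            println(e.getMessage)
        }
      }
    }

Note that in the yarn-client trace above the stop is not initiated by application code: the MonitorThread calls stop() when it sees the YARN application finish or get killed, so the more common real-world causes are cluster-side terminations (containers killed for exceeding memory limits, preemption, or a manual kill) rather than an explicit sc.stop() in the job.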