org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

GitHub | car2008 | 3 months ago
    Similar reports

    1. Fail to connect to master. ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive
       Stack Overflow | 5 months ago | Yu Shi
       org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

    2. GitHub comment 572#246278541
       GitHub | 3 months ago | car2008
       org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

    3. Is it necessary to submit spark application jar?
       Stack Overflow | 10 months ago | Marcin Lagowski
       org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

    4. Error when starting spark-shell, please help (Spark forum)
       aboutyun.com | 11 months ago
       org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

    5. Initialization of SparkContext kills Play Framework application when Spark Master is unreachable
       Stack Overflow | 1 year ago | mg88
       org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.

    Root Cause Analysis

    org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
        at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:438)
        at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:124)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
        at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
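
    Every report above boils down to the same failure: the driver's AppClient cannot reach any standalone master at the configured spark:// URL, its retries time out, and markDead tears the scheduler down with "All masters are unresponsive! Giving up." Below is a minimal driver sketch that hits this path when the master is down or the URL/port is wrong; the host, port, and app name are placeholders, not values taken from these reports.

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal standalone-mode driver sketch. Host, port, and app name are hypothetical.
    object MasterConnectivityCheck {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("master-connectivity-check")   // hypothetical app name
          .setMaster("spark://master-host:7077")     // must match the running master; 7077 is the standalone default

        // If no master answers at this URL, AppClient retries, then calls markDead,
        // which surfaces as the SparkException shown in the stack trace above.
        val sc = new SparkContext(conf)
        try {
          println(sc.parallelize(1 to 10).sum())     // trivial job: proves tasks can actually be scheduled
        } finally {
          sc.stop()
        }
      }
    }

    If this sketch fails with the same exception, the usual checks are that the spark:// URL matches exactly what the master prints in its log and web UI, that the port (7077 by default) is reachable from the driver machine, and that the driver and cluster run compatible Spark versions.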