org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
    at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:438)
    at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:124)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
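This error usually means the driver never managed to register with the standalone master at the configured spark.master URL; common causes are a wrong host or port, a firewall blocking the master's RPC port, or a Spark version mismatch between the driver and the cluster. A minimal troubleshooting sketch, assuming a standalone cluster — the hostname spark-master.example.com and the default RPC port 7077 below are placeholders, not values taken from this report:

    # Check that the master's RPC port is reachable from the driver host
    # (the standalone master listens on 7077 by default):
    nc -zv spark-master.example.com 7077

    # Submit with an explicit master URL; it must match the URL shown
    # at the top of the master's web UI exactly (scheme, host, port):
    spark-submit \
      --master spark://spark-master.example.com:7077 \
      --class com.example.MyApp \
      my-app.jar

If the port check fails, the problem is network- or process-level (master down, wrong host, firewall) rather than anything in the application itself.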


  • GitHub comment 572#246357384
    via GitHub by arahuja
