org.apache.spark.SparkException: Error sending message [message = StopAllReceivers]

Talend Open Integration Solution | lei ju | 11 months ago
Your exception is missing from the Samebug knowledge base. The best matching solutions found on the web are listed below.
  1. Talend Open Integration Solution | 11 months ago | lei ju
     org.apache.spark.SparkException: Error sending message [message = StopAllReceivers]
  2. Sparkling water executor error
     Google Groups | 8 months ago | hart jo
     org.apache.spark.SparkException: Error sending message [message = RemoveExecutor(0)]
  3. [SPARK-6980] [CORE] Akka timeout exceptions indicate which conf controls them (RPC Layer) by BryanCutler · Pull Request #6205 · apache/spark · GitHub
     github.com | 8 months ago
     org.apache.spark.SparkException: Error sending message [message = (BlockManagerHeartbeat(BlockManagerId(driver, localhost, 51109)),600 seconds)]
     (See the timeout configuration sketch after this list.)
  4. While testing Spark jobs on a VM, we noticed that the Spark job logs many heartbeat retry messages in the master log. The Spark program ran fine, though. Here is the stack trace:

     {code}
     2016-04-29 05:04:05,963 - WARN [driver-heartbeater:o.a.s.Logging$class@91] - Error sending message [message = Heartbeat(driver,[Lscala.Tuple2;@6d61f2ad,BlockManagerId(driver, localhost, 49484))] in 3 attempts
     org.apache.spark.SparkException: Could not find HeartbeatReceiver or it has been stopped.
         at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:449) [co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:470) [co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:470) [co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:470) [co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765) [co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:470) [co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_75]
         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_75]
         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_75]
         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_75]
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_75]
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_75]
     Caused by: org.apache.spark.SparkException: Could not find HeartbeatReceiver or it has been stopped.
         at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) ~[co.cask.cdap.spark-assembly-1.6.1.jar:na]
         ... 13 common frames omitted
     {code}

     Cask Community Issue Tracker | 12 months ago | Rohit Sinha
     org.apache.spark.SparkException: Could not find HeartbeatReceiver or it has been stopped.
  5. Apache Spark User List - GC overhead limit exceeded
     nabble.com | 8 months ago
     org.apache.spark.SparkException: Could not find HeartbeatReceiver or it has been stopped.
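
Several of the matches above trace back to an RPC ask timing out; SPARK-6980 (item 3) exists precisely to make the exception name the configuration key that controls the timeout. As a minimal sketch, assuming a Spark 1.6-era deployment and the Java API: the relevant timeouts can be raised on the SparkConf before the context is created. The keys below are standard Spark configuration properties; the values are illustrative, not recommendations taken from the threads above.

{code}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RpcTimeoutConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("rpc-timeout-sketch")
                // Timeout for RPC ask operations; SPARK-6980 makes the
                // resulting timeout exception point at this key.
                .set("spark.rpc.askTimeout", "120s")
                // Default timeout for all network interactions; the fallback
                // when more specific timeouts are unset.
                .set("spark.network.timeout", "120s")
                // How often executors heartbeat to the driver; keep this well
                // below spark.network.timeout to avoid spurious executor loss.
                .set("spark.executor.heartbeatInterval", "10s");

        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            // ... job logic ...
        } finally {
            sc.stop();
        }
    }
}
{code}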
Root Cause Analysis

  1. org.apache.spark.SparkException

    Error sending message [message = StopAllReceivers]

    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry()
  2. org.apache.spark
    RpcEndpointRef.askWithRetry
    1. org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:118)
    2. org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    2 frames
  3. Spark Project Streaming
    JavaStreamingContext.stop
    1. org.apache.spark.streaming.scheduler.ReceiverTracker.stop(ReceiverTracker.scala:170)
    2. org.apache.spark.streaming.scheduler.JobScheduler.stop(JobScheduler.scala:93)
    3. org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:709)
    4. org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:682)
    5. org.apache.spark.streaming.api.java.JavaStreamingContext.stop(JavaStreamingContext.scala:662)
    5 frames
  4. bigdata.spark_0_1
    spark.main
    1. bigdata.spark_0_1.spark.runJobInTOS(spark.java:898)
    2. bigdata.spark_0_1.spark.main(spark.java:773)
    2 frames