org.apache.spark.SparkException: Error sending message [message = RemoveExecutor(0)]
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:118)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:225)
    at org.apache.spark.storage.BlockManagerMaster.removeExecutor(BlockManagerMaster.scala:40)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.removeExecutor(CoarseGrainedSchedulerBackend.scala:276)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(CoarseGrainedSchedulerBackend.scala:186)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:104)

Google Groups | hart jo | 4 months ago
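The askWithRetry failure above typically means the driver's BlockManagerMaster endpoint did not answer within the RPC timeout, so the retries were exhausted and the ask was wrapped in a SparkException. A minimal sketch of loosening the relevant settings (spark.rpc.askTimeout, spark.rpc.numRetries, and spark.network.timeout are real Spark configuration keys; the values and app name here are illustrative only):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: raise RPC timeouts/retries so a slow or GC-bound driver
    // endpoint does not surface as "Error sending message".
    object RpcTimeoutTuning {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("rpc-timeout-tuning")      // illustrative name
          .set("spark.rpc.askTimeout", "300s")   // per-ask timeout
          .set("spark.rpc.numRetries", "5")      // attempts before giving up
          .set("spark.network.timeout", "600s")  // umbrella network timeout
        val sc = new SparkContext(conf)
        // ... job code ...
        sc.stop()
      }
    }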
  1. Sparkling water executor error

     Google Groups | 4 months ago | hart jo
     org.apache.spark.SparkException: Error sending message [message = RemoveExecutor(0)] (same stack trace as above)
  2. [incubating-0.9.0] Too Many Open Files on Workers

     apache.org | 2 years ago
     org.apache.spark.SparkException: Error sending message to BlockManagerMaster [message = HeartBeat(BlockManagerId(1, 172.17.0.4, 52780, 0))]
         at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:176)
         at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:52)
         at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:97)
         at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:135)
     (a mitigation sketch for this failure follows the list)
  3. Re: [incubating-0.9.0] Too Many Open Files on Workers

     apache.org | 12 months ago
     java.lang.Error: org.apache.spark.SparkException: Error sending message to BlockManagerMaster [message = HeartBeat(BlockManagerId(1, 172.17.0.4, 52780, 0))]
  4. Re: [incubating-0.9.0] Too Many Open Files on Workers

     apache.org | 1 year ago
     java.lang.Error: org.apache.spark.SparkException: Error sending message to BlockManagerMaster [message = HeartBeat(BlockManagerId(1, 172.17.0.4, 52780, 0))]
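The "Too Many Open Files" reports above share one root cause: in 0.9-era shuffles each map task opened one output file per reduce partition, and once a worker exhausted its file-descriptor limit even BlockManagerMaster heartbeats began to fail. The usual mitigations were to raise the per-process descriptor limit on each worker (ulimit -n) and, on that release, to enable shuffle file consolidation. A minimal sketch, assuming a Spark 0.9.x application (the master URL is hypothetical; spark.shuffle.consolidateFiles was a real 0.9 setting that later releases removed):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch for Spark 0.9.x: consolidate shuffle output so map tasks
    // reuse file groups instead of opening one file per reducer.
    object ShuffleFileTuning {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setMaster("spark://master:7077")               // hypothetical master URL
          .setAppName("shuffle-file-tuning")              // illustrative name
          .set("spark.shuffle.consolidateFiles", "true")  // default was false in 0.9
        val sc = new SparkContext(conf)
        // ... job code ...
        sc.stop()
      }
    }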


    Root Cause Analysis

    1. org.apache.spark.SparkException: Error sending message [message = RemoveExecutor(0)]

           at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:118)
           at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
           at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:225)
           at org.apache.spark.storage.BlockManagerMaster.removeExecutor(BlockManagerMaster.scala:40)
           at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.removeExecutor(CoarseGrainedSchedulerBackend.scala:276)
           at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(CoarseGrainedSchedulerBackend.scala:186)
           at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:104)

    2. org.apache.spark.rpc.netty (2 frames)

           at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
           at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
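The "Error sending message" text itself comes from the retry wrapper: askWithRetry re-sends the ask up to a configured number of times and, only after the last attempt fails, wraps the final error in the SparkException seen at the top of this page. A minimal sketch of that pattern (not Spark's actual implementation; the names here are illustrative):

    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration._

    // Sketch of the ask-with-retry pattern: block on each attempt,
    // remember the last failure, and wrap it once retries run out,
    // which is why the trace points at the retry site, not the endpoint.
    def askWithRetrySketch[T](ask: () => Future[T],
                              maxRetries: Int = 3,
                              timeout: FiniteDuration = 120.seconds): T = {
      var lastError: Throwable = null
      var attempt = 0
      while (attempt < maxRetries) {
        try {
          return Await.result(ask(), timeout)  // block on the reply for this attempt
        } catch {
          case e: Throwable =>
            lastError = e
            attempt += 1
        }
      }
      throw new RuntimeException(s"Error sending message after $maxRetries attempts", lastError)
    }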