org.apache.spark.rpc.RpcTimeoutException: Recipient[Actor[akka://sparkDriver/user/MapOutputTracker#1304693884]] had already been terminated.. This timeout is controlled by spark.rpc.askTimeout at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)

cloudera.com | 7 months ago
  1. ERROR ActorSystemImpl - Running my spark job on ya... - Cloudera Community

    cloudera.com | 7 months ago
    org.apache.spark.rpc.RpcTimeoutException: Recipient[Actor[akka://sparkDriver/user/MapOutputTracker#1304693884]] had already been terminated.. This timeout is controlled by spark.rpc.askTimeout at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)
  2. Futures timed out after [120 seconds]

    gmane.org | 12 months ago
    org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)
  3. Re: Exception in task 0.0 in stage 13.0 (TID 13) java.lang.OutOfMemoryError: Java heap space

    mahout-user | 1 year ago | Dmitriy Lyubimov
    org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
  4. GitHub comment 64#252462840

    GitHub | 6 months ago | yxzf
    org.apache.spark.rpc.RpcTimeoutException: Recipient[Actor[akka://sparkDriver/user/CoarseGrainedScheduler#325208432]] had already been terminated.. This timeout is controlled by spark.rpc.askTimeout
  5. Unable to view Twitter stream using a Spark streaming application

    Stack Overflow | 1 year ago | nikos
    org.apache.spark.rpc.RpcTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#1080029491]] had already been terminated.. This timeout is controlled by spark.rpc.askTimeout

Users with the same issue:

  1. Nikolay Rybak, 1 time, last 2 months ago
  2. johnxfly, 4 times, last 1 week ago
  3. tyson925, 1 time, last 10 months ago
1 unregistered visitor

Root Cause Analysis

  1. org.apache.spark.rpc.RpcTimeoutException

    Recipient[Actor[akka://sparkDriver/user/MapOutputTracker#1304693884]] had already been terminated.. This timeout is controlled by spark.rpc.askTimeout at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcEnv.scala:214)

    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse()
  2. org.apache.spark
    RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse
    1. org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcEnv.scala:229)
    2. org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcEnv.scala:225)
    2 frames
  3. Scala
    Failure.recover
    1. scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
    2. scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
    3. scala.util.Try$.apply(Try.scala:161)
    4. scala.util.Failure.recover(Try.scala:185)
    4 frames
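
Every report above points at the same knob: the ask timeout raised inside RpcTimeout, controlled by spark.rpc.askTimeout (which falls back to spark.network.timeout, default 120s, when unset). Below is a minimal sketch, in Scala, of raising both properties when building the SparkConf; the 300s values are illustrative, not a recommendation. Note that raising the timeout only helps when the endpoint is genuinely slow to answer; the "had already been terminated" variants usually mean the driver-side endpoint (MapOutputTracker, BlockManagerMaster, CoarseGrainedScheduler) had already shut down, typically after an earlier failure such as the OutOfMemoryError in report 3, and that earlier error is the one to fix first.

    import org.apache.spark.{SparkConf, SparkContext}

    object AskTimeoutExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("ask-timeout-example")
          // spark.rpc.askTimeout bounds how long RPC asks to driver endpoints
          // (MapOutputTracker, BlockManagerMaster, ...) may block before the
          // RpcTimeoutException above is raised; it defaults to
          // spark.network.timeout when not set explicitly.
          .set("spark.rpc.askTimeout", "300s")
          .set("spark.network.timeout", "300s")

        val sc = new SparkContext(conf)
        try {
          // Placeholder workload; the real job goes here.
          println(sc.parallelize(1 to 1000000).count())
        } finally {
          sc.stop()
        }
      }
    }

The same properties can also be supplied at submit time, without rebuilding the job, e.g. spark-submit --conf spark.rpc.askTimeout=300s --conf spark.network.timeout=300s.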