org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout

Google Groups | hart jo | 9 months ago
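The timeout named in the message is an ordinary Spark configuration key. A minimal sketch of raising it when the SparkContext is created (the 300s value is illustrative, not a recommendation):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: raise the RPC ask timeout before the SparkContext starts.
    // spark.rpc.askTimeout falls back to spark.network.timeout (120s by
    // default), so setting either key changes the timeout in the exception.
    object TimeoutConfig {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("rpc-timeout-example")
          .set("spark.rpc.askTimeout", "300s")   // key named in the exception
          .set("spark.network.timeout", "300s")  // umbrella RPC/network timeout
        val sc = new SparkContext(conf)
        try {
          // ... application body ...
        } finally sc.stop()
      }
    }

A larger timeout only buys time: the reports below point at lost or overloaded executors (GC pauses, memory pressure, oversized inputs) as the usual reason the reply never arrives.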
  1. Sparkling water executor error
     Google Groups | 9 months ago | hart jo
     org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
  2. ExecutorBackend blocked at "UserGroupInformation.doAs"
     Stack Overflow | 1 month ago | Jerry.X.He
     org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
  3. How to investigate failing dataproc worker processes?
     Stack Overflow | 1 year ago | sthomps
     org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
  4. Spark Application Not Recovering when Executor Lost
     Stack Overflow | 12 months ago | user481a
     org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
  5. Spark - Reading .gz file faster than flat files on s3 - Flat file take way too long and never completes with 200+GB memory
     Stack Overflow | 11 months ago | SpringStarter
     org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout

Users who reported hitting this exception:
  1. rp 3 times, last 1 month ago
  2. johnxfly 1 time, last 2 months ago
  3. Nikolay Rybak 10 times, last 3 months ago
  4. tyson925 75 times, last 2 months ago
  5. Handemelindo 1 time, last 1 month ago
4 more registered users
29 unregistered visitors

Root Cause Analysis

  1. java.util.concurrent.TimeoutException

    Futures timed out after [120 seconds]

    at scala.concurrent.impl.Promise$DefaultPromise.ready()
  2. Scala
    Await$.result
    1. scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    2. scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    3. scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    4. scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    5. scala.concurrent.Await$.result(package.scala:107)
    5 frames
  3. org.apache.spark
    RpcEndpointRef.askWithRetry
    1. org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    2. org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    3. org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    3 frames
  4. Spark
    AppClient$ClientEndpoint$$anonfun$receive$1.applyOrElse
    1. org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:370)
    2. org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.executorRemoved(SparkDeploySchedulerBackend.scala:144)
    3. org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$receive$1.applyOrElse(AppClient.scala:184)
    3 frames
  5. org.apache.spark
    Dispatcher$MessageLoop.run
    1. org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    2. org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    3. org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    4. org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    4 frames
  6. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    3. java.lang.Thread.run(Thread.java:745)
    3 frames
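The analysis bottoms out in a blocking Await.result: CoarseGrainedSchedulerBackend.removeExecutor sends an ask over the driver's RPC endpoint (items 3 and 4 above), the reply does not arrive within 120 seconds, and the promise backing the future is never completed in time. A minimal sketch of that mechanism using only the Scala standard library (no Spark involved; the 2-second timeout is arbitrary):

    import scala.concurrent.{Await, Promise}
    import scala.concurrent.duration._

    // Sketch: blocking on a future that is never completed throws
    // java.util.concurrent.TimeoutException("Futures timed out after [...]"),
    // the exception in item 1. Spark's RpcTimeout.awaitResult rethrows it
    // as RpcTimeoutException, appending the name of the controlling
    // configuration key (here, spark.rpc.askTimeout).
    object AwaitTimeoutRepro {
      def main(args: Array[String]): Unit = {
        val never = Promise[Int]().future // a reply that never comes
        Await.result(never, 2.seconds)    // TimeoutException after 2 seconds
      }
    }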