java.util.concurrent.TimeoutException


  • I have an AWS instance running the sequenceiq/docker-spark image (which bundles Hadoop and Spark); R and SparkR are installed on the instance. I initialized a Spark context with:

        sc <- sparkR.init(master = "spark://ip-10-64-65-168:7077", appName = "ER")

    The connection was established successfully, but error messages started appearing soon afterwards. The key line is:

        ERROR TaskSchedulerImpl: Lost an executor 0 (already removed): remote Akka client disassociated

    and the same message repeats for executors 1, 2, and so on. I am new to Spark and have researched this online for a few weeks without finding a fix; any ideas or hints would be greatly appreciated! Attached are screenshots of the terminal and the slave UI page. The stderr log from one of the executors follows (the retry attempts produce verbatim-identical stack traces, so only the first is shown in full):

    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    15/02/23 21:02:40 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
    15/02/23 21:02:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    15/02/23 21:02:40 INFO SecurityManager: Changing view acls to: root
    15/02/23 21:02:40 INFO SecurityManager: Changing modify acls to: root
    15/02/23 21:02:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    15/02/23 21:02:41 INFO Slf4jLogger: Slf4jLogger started
    15/02/23 21:02:41 INFO Remoting: Starting remoting
    15/02/23 21:02:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@ip-10-64-65-168.us-west-2.compute.internal:46838]
    15/02/23 21:02:41 INFO Utils: Successfully started service 'driverPropsFetcher' on port 46838.
    15/02/23 21:02:41 INFO SecurityManager: Changing view acls to: root
    15/02/23 21:02:41 INFO SecurityManager: Changing modify acls to: root
    15/02/23 21:02:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    15/02/23 21:02:41 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
    15/02/23 21:02:41 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
    15/02/23 21:02:41 INFO Slf4jLogger: Slf4jLogger started
    15/02/23 21:02:41 INFO Remoting: Starting remoting
    15/02/23 21:02:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@ip-10-64-65-168.us-west-2.compute.internal:48541]
    15/02/23 21:02:41 INFO Utils: Successfully started service 'sparkExecutor' on port 48541.
    15/02/23 21:02:41 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/CoarseGrainedScheduler
    15/02/23 21:02:41 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
    15/02/23 21:02:41 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
    15/02/23 21:02:41 INFO SecurityManager: Changing view acls to: root
    15/02/23 21:02:41 INFO SecurityManager: Changing modify acls to: root
    15/02/23 21:02:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    15/02/23 21:02:41 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/MapOutputTracker
    15/02/23 21:02:41 INFO AkkaUtils: Connecting to BlockManagerMaster: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/BlockManagerMaster
    15/02/23 21:02:41 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150223210241-704c
    15/02/23 21:02:41 INFO MemoryStore: MemoryStore started with capacity 265.0 MB
    15/02/23 21:02:41 INFO NettyBlockTransferService: Server created on 37580
    15/02/23 21:02:41 INFO BlockManagerMaster: Trying to register BlockManager
    15/02/23 21:02:41 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
    15/02/23 21:03:11 WARN AkkaUtils: Error sending message in 1 attempts
    java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)
        at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:187)
        at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221)
        at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:211)
        at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:51)
        at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:206)
        at org.apache.spark.executor.Executor.<init>(Executor.scala:90)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receiveWithLogging$1.applyOrElse(CoarseGrainedExecutorBackend.scala:61)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.aroundReceive(CoarseGrainedExecutorBackend.scala:36)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    15/02/23 21:03:14 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
    15/02/23 21:03:44 WARN AkkaUtils: Error sending message in 2 attempts
    java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        [stack trace identical to attempt 1]
    15/02/23 21:03:47 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
    15/02/23 21:04:17 WARN AkkaUtils: Error sending message in 3 attempts
    java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        [stack trace identical to attempt 1]
    15/02/23 21:04:20 ERROR OneForOneStrategy: Error sending message [message = RegisterBlockManager(BlockManagerId(4, ip-10-64-65-168.us-west-2.compute.internal, 37580),277842493,Actor[akka://sparkExecutor/user/BlockManagerActor1#-2041230339])]
    org.apache.spark.SparkException: Error sending message [message = RegisterBlockManager(BlockManagerId(4, ip-10-64-65-168.us-west-2.compute.internal, 37580),277842493,Actor[akka://sparkExecutor/user/BlockManagerActor1#-2041230339])]
        at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:201)
        at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221)
        at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:211)
        at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:51)
        at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:206)
        at org.apache.spark.executor.Executor.<init>(Executor.scala:90)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receiveWithLogging$1.applyOrElse(CoarseGrainedExecutorBackend.scala:61)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.aroundReceive(CoarseGrainedExecutorBackend.scala:36)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)
        at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:187)
        ... 24 more
    15/02/23 21:04:20 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/CoarseGrainedScheduler
    15/02/23 21:04:20 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@ip-10-64-65-168.us-west-2.compute.internal:48541] -> [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] disassociated! Shutting down.
    via XW Gong
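    A short tip, hedged: the pattern in this log (executors start, register, then every ask of the driver times out after the 30-second default and the driver is reported as "Disassociated") usually means the executors cannot reliably reach the address the driver advertises, which is common with Docker bridge networking, or that the driver JVM is too stalled (GC, memory pressure) to answer within the Akka ask timeout. A minimal sketch of the two knobs to try, for Spark 1.x SparkR: `spark.driver.host` and the Akka timeouts can be passed through `sparkR.init`'s `sparkEnvir` list. The host value below is an assumption about this particular setup; substitute an address the executors can actually resolve and connect back to.

    ```r
    # Sketch only (Spark 1.x-era SparkR, standalone master). Assumption: the
    # executors can reach the driver at ip-10-64-65-168; adjust as needed.
    library(SparkR)

    sc <- sparkR.init(
      master     = "spark://ip-10-64-65-168:7077",
      appName    = "ER",
      sparkEnvir = list(
        spark.driver.host     = "ip-10-64-65-168",  # address reachable from executors
        spark.akka.timeout    = "300",              # seconds; raises the Akka timeout
        spark.akka.askTimeout = "300"               # the 30 s ask timeout in the trace
      )
    )
    ```

    If raising the timeouts only delays the same failure, the address is the more likely culprit: with the docker-spark image, running the container with `--net=host`, or exporting `SPARK_LOCAL_IP` so the driver binds an externally reachable interface, is a commonly reported fix for this disassociation loop.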
  • I have a AWS instance and am running the sequenceiq/docker-spark (including hadoop and spark) on it. The R and SparkR were installed on AWS. After I initiated a spark context using the following command: sc <- sparkR.init(master='spark://ip-10-64-65-168:7077', appName="ER") The connect was built successfully. But the error messages popped up soon. The main line of the message is: "ERROR TaskSchedulerImpl: Lost an executor 0 (already removed): remote Akka client disassociated". And the same error messages will repeat for executor 1, 2, .... I am new to spark, and I have done online research for a few weeks, but still could not fix it. Any ideas or hints will be appreciated very much! Thanks!!! Attached pictures are the screen snapshop of the terminal and the slave UI page. And the stderr Log from one of the Executor is as follows: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 15/02/23 21:02:40 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] 15/02/23 21:02:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 15/02/23 21:02:40 INFO SecurityManager: Changing view acls to: root 15/02/23 21:02:40 INFO SecurityManager: Changing modify acls to: root 15/02/23 21:02:40 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root) 15/02/23 21:02:41 INFO Slf4jLogger: Slf4jLogger started 15/02/23 21:02:41 INFO Remoting: Starting remoting 15/02/23 21:02:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@ip-10-64-65-168.us-west-2.compute.internal:46838] 15/02/23 21:02:41 INFO Utils: Successfully started service 'driverPropsFetcher' on port 46838. 
15/02/23 21:02:41 INFO SecurityManager: Changing view acls to: root 15/02/23 21:02:41 INFO SecurityManager: Changing modify acls to: root 15/02/23 21:02:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root) 15/02/23 21:02:41 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. 15/02/23 21:02:41 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. 15/02/23 21:02:41 INFO Slf4jLogger: Slf4jLogger started 15/02/23 21:02:41 INFO Remoting: Starting remoting 15/02/23 21:02:41 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@ip-10-64-65-168.us-west-2.compute.internal:48541] 15/02/23 21:02:41 INFO Utils: Successfully started service 'sparkExecutor' on port 48541. 15/02/23 21:02:41 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/CoarseGrainedScheduler 15/02/23 21:02:41 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down. 
15/02/23 21:02:41 INFO CoarseGrainedExecutorBackend: Successfully registered with driver 15/02/23 21:02:41 INFO SecurityManager: Changing view acls to: root 15/02/23 21:02:41 INFO SecurityManager: Changing modify acls to: root 15/02/23 21:02:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root) 15/02/23 21:02:41 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/MapOutputTracker 15/02/23 21:02:41 INFO AkkaUtils: Connecting to BlockManagerMaster: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/BlockManagerMaster 15/02/23 21:02:41 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150223210241-704c 15/02/23 21:02:41 INFO MemoryStore: MemoryStore started with capacity 265.0 MB 15/02/23 21:02:41 INFO NettyBlockTransferService: Server created on 37580 15/02/23 21:02:41 INFO BlockManagerMaster: Trying to register BlockManager 15/02/23 21:02:41 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated]. 
15/02/23 21:03:11 WARN AkkaUtils: Error sending message in 1 attempts java.util.concurrent.TimeoutException: Futures timed out after [30 seconds] at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169) at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640) at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167) at scala.concurrent.Await$.result(package.scala:107) at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:187) at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221) at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:211) at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:51) at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:206) at org.apache.spark.executor.Executor.<init>(Executor.scala:90) at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receiveWithLogging$1.applyOrElse(CoarseGrainedExecutorBackend.scala:61) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25) at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53) at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42) at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118) at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42) at akka.actor.Actor$class.aroundReceive(Actor.scala:465) at 
org.apache.spark.executor.CoarseGrainedExecutorBackend.aroundReceive(CoarseGrainedExecutorBackend.scala:36) at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) at akka.actor.ActorCell.invoke(ActorCell.scala:487) at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238) at akka.dispatch.Mailbox.run(Mailbox.scala:220) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 15/02/23 21:03:14 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] has failed, address is now gated for [5000] ms. Reason is: [Disassociated]. 15/02/23 21:03:44 WARN AkkaUtils: Error sending message in 2 attempts java.util.concurrent.TimeoutException: Futures timed out after [30 seconds] at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169) at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640) at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167) at scala.concurrent.Await$.result(package.scala:107) at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:187) at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221) at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:211) at 
org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:51) at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:206) at org.apache.spark.executor.Executor.<init>(Executor.scala:90) at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receiveWithLogging$1.applyOrElse(CoarseGrainedExecutorBackend.scala:61) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25) at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53) at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42) at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118) at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42) at akka.actor.Actor$class.aroundReceive(Actor.scala:465) at org.apache.spark.executor.CoarseGrainedExecutorBackend.aroundReceive(CoarseGrainedExecutorBackend.scala:36) at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) at akka.actor.ActorCell.invoke(ActorCell.scala:487) at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238) at akka.dispatch.Mailbox.run(Mailbox.scala:220) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 15/02/23 21:03:47 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] has failed, address is now gated for [5000] ms. 
Reason is: [Disassociated]. 15/02/23 21:04:17 WARN AkkaUtils: Error sending message in 3 attempts java.util.concurrent.TimeoutException: Futures timed out after [30 seconds] at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169) at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640) at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167) at scala.concurrent.Await$.result(package.scala:107) at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:187) at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221) at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:211) at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:51) at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:206) at org.apache.spark.executor.Executor.<init>(Executor.scala:90) at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receiveWithLogging$1.applyOrElse(CoarseGrainedExecutorBackend.scala:61) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33) at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25) at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53) at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42) at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118) at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42) at 
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.aroundReceive(CoarseGrainedExecutorBackend.scala:36)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/02/23 21:04:20 ERROR OneForOneStrategy: Error sending message [message = RegisterBlockManager(BlockManagerId(4, ip-10-64-65-168.us-west-2.compute.internal, 37580),277842493,Actor[akka://sparkExecutor/user/BlockManagerActor1#-2041230339])]
org.apache.spark.SparkException: Error sending message [message = RegisterBlockManager(BlockManagerId(4, ip-10-64-65-168.us-west-2.compute.internal, 37580),277842493,Actor[akka://sparkExecutor/user/BlockManagerActor1#-2041230339])]
        at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:201)
        at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221)
        at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:211)
        at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:51)
        at org.apache.spark.storage.BlockManager.initialize(BlockManager.scala:206)
        at org.apache.spark.executor.Executor.<init>(Executor.scala:90)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receiveWithLogging$1.applyOrElse(CoarseGrainedExecutorBackend.scala:61)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.executor.CoarseGrainedExecutorBackend.aroundReceive(CoarseGrainedExecutorBackend.scala:36)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)
        at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:187)
        ... 24 more
15/02/23 21:04:20 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114/user/CoarseGrainedScheduler
15/02/23 21:04:20 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@ip-10-64-65-168.us-west-2.compute.internal:48541] -> [akka.tcp://sparkDriver@ip-10-64-65-168.us-west-2.compute.internal:50114] disassociated! Shutting down.
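Reading the trace: the executor's `BlockManagerMaster.registerBlockManager` ask to the driver never receives a reply within Akka's default 30-second ask timeout, so executor startup fails and the driver link disassociates. A commonly suggested first step (a sketch only, assuming Spark 1.x with the Akka transport; the property values below are illustrative, not tuned, and will not help if the real cause is an unreachable driver or an out-of-memory executor) is to raise the relevant timeouts via `sparkEnvir` when creating the context:

```r
# Hypothetical workaround: raise the Akka timeouts behind
# "Futures timed out after [30 seconds]". Property names are
# Spark 1.x configuration keys; values are examples.
sc <- sparkR.init(
  master     = "spark://ip-10-64-65-168:7077",
  appName    = "ER",
  sparkEnvir = list(
    spark.akka.askTimeout = "120",  # seconds; default 30, used by askWithReply
    spark.akka.timeout    = "300",  # seconds; general Akka communication timeout
    spark.akka.frameSize  = "128"   # MB; larger frames for big task metadata
  )
)
```

If the timeouts do not help, check that the hostname the executor resolves for the driver (`ip-10-64-65-168.us-west-2.compute.internal` in the log) is reachable from inside the Docker container, and that executor memory is not being exhausted; both are frequent causes of "remote Akka client disassociated" on EC2.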
    via XW Gong
  • Spark cluster computing framework
    via unknown author
  • GitHub comment 954#248399914
    via GitHub by weeshy