There are no Samebug tips for this exception yet. Do you know how to solve it? A short tip would help the users who saw this issue last week.

  • GitHub comment 40#233878814
    via GitHub by Locutus18
  • We are experiencing occasional application hangs when testing an existing Pig MapReduce script executing on Tez. When this occurs, we find the following in the syslog for the executing dag (the idleTimeout seen here is discussed in the configuration sketch after this list):

    2016-03-21 16:39:01,643 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000822, containerExpiryTime=1458603541415, idleTimeout=5000, taskRequestsCount=0, heldContainers=112, delayedContainers=27, isNew=false
    2016-03-21 16:39:01,825 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000824, containerExpiryTime=1458603541692, idleTimeout=5000, taskRequestsCount=0, heldContainers=111, delayedContainers=26, isNew=false
    2016-03-21 16:39:01,990 [INFO] [Socket Reader #1 for port 53324] |ipc.Server|: Socket Reader #1 for port 53324: readAndProcess from client threw exception [java.io.IOException: Connection reset by peer]
        Connection reset by peer
            at ...(Native Method)
            at ...
            at ...
            at ...
            at ...
            at org.apache.hadoop.ipc.Server.channelRead(...)
            at org.apache.hadoop.ipc.Server.access$2800(...)
            at org.apache.hadoop.ipc.Server$Connection.readAndProcess(...)
            at org.apache.hadoop.ipc.Server$Listener.doRead(...)
            at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(...)
            at org.apache.hadoop.ipc.Server$Listener$Reader.run(...)
    2016-03-21 16:39:02,032 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000811, containerExpiryTime=1458603541828, idleTimeout=5000, taskRequestsCount=0, heldContainers=110, delayedContainers=25, isNew=false

    In all cases I've been able to analyze so far, this also correlates with a warning on the node identified in the IOException:

    2016-03-21 16:36:13,641 [WARN] [I/O Setup 2 Initialize: {scope-178}] |retry.RetryInvocationHandler|: A failover has occurred since the start of this method invocation attempt.

    However, it does not appear that any namenode failover has actually occurred (the most recent failover we see in the logs is from 2015).

    Attached:
      - syslog_dag_1437886552023_169758_3.gz: syslog for the dag which hangs
      - aggregated logs from the host identified in the IOException
    by Kurt Muehlner
  • bug quiting game
    via GitHub by eprogramming
    • java.lang.InterruptedException
          at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(...)
          at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(...)
          at java.util.concurrent.LinkedBlockingDeque.takeFirst(...)
          at java.util.concurrent.LinkedBlockingDeque.take(...)
          at cc.codechecker.api.runtime.CodecheckerServerThread$...(...)
          at ...
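A note on that last trace: an InterruptedException out of LinkedBlockingDeque.take() usually just means the consumer thread was interrupted while blocked waiting for work, typically during shutdown (e.g. when the game quits). The usual handling is to treat it as a stop signal rather than an error. A minimal sketch of that pattern, with hypothetical names (QueueConsumer, tasks) — this is not the actual CodeChecker code:

```java
import java.util.concurrent.LinkedBlockingDeque;

public class QueueConsumer implements Runnable {
    // Hypothetical work queue standing in for the deque seen in the trace.
    private final LinkedBlockingDeque<Runnable> tasks = new LinkedBlockingDeque<>();

    public void submit(Runnable task) {
        tasks.addLast(task);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // take() blocks until work arrives; this is the call that
                // throws InterruptedException when the thread is interrupted.
                Runnable task = tasks.take();
                task.run();
            } catch (InterruptedException e) {
                // Restore the interrupt flag so the loop condition sees it
                // and the thread exits cleanly instead of logging the
                // exception as a spurious error.
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

With this pattern, calling interrupt() on the consumer thread during shutdown ends the loop without a stack trace landing in the logs.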
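And on the Tez hang report above: not a root-cause fix, but the idleTimeout=5000 in those YarnTaskSchedulerService lines is Tez's container idle-release window, which is tunable. If container releases racing with incoming task requests are suspected, one experiment is to widen that window. A sketch assuming the standard tez.am.container.* configuration keys; the timeout values here are illustrative, not recommendations:

```java
import org.apache.hadoop.conf.Configuration;

public class TezContainerReuseTuning {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Container reuse is what keeps containers held between tasks.
        conf.setBoolean("tez.am.container.reuse.enabled", true);

        // Widen the idle-release window so held containers survive short
        // gaps between task requests (values are illustrative only).
        conf.setLong("tez.am.container.idle.release-timeout-min.millis", 20000L);
        conf.setLong("tez.am.container.idle.release-timeout-max.millis", 40000L);

        System.out.println("idle release min = "
                + conf.getLong("tez.am.container.idle.release-timeout-min.millis", -1L));
    }
}
```

The same keys should also be settable per-script from Pig via its SET command (e.g. SET tez.am.container.idle.release-timeout-min.millis 20000;) rather than in tez-site.xml, though that route is untested here.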
