  1. GitHub comment 40#233878814

    GitHub | 3 months ago | Locutus18

  2. IDEA freezes on launch (thread deadlock)

    GitHub | 2 months ago | nettlep

  3. We are experiencing occasional application hangs when testing an existing Pig MapReduce script executing on Tez. When this occurs, we find this in the syslog for the executing DAG:

        2016-03-21 16:39:01,643 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000822, containerExpiryTime=1458603541415, idleTimeout=5000, taskRequestsCount=0, heldContainers=112, delayedContainers=27, isNew=false
        2016-03-21 16:39:01,825 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000824, containerExpiryTime=1458603541692, idleTimeout=5000, taskRequestsCount=0, heldContainers=111, delayedContainers=26, isNew=false
        2016-03-21 16:39:01,990 [INFO] [Socket Reader #1 for port 53324] |ipc.Server|: Socket Reader #1 for port 53324: readAndProcess from client threw exception [ Connection reset by peer]
 Connection reset by peer
            at Method)
            at org.apache.hadoop.ipc.Server.channelRead(
            at org.apache.hadoop.ipc.Server.access$2800(
            at org.apache.hadoop.ipc.Server$Connection.readAndProcess(
            at org.apache.hadoop.ipc.Server$Listener.doRead(
            at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(
            at org.apache.hadoop.ipc.Server$Listener$
        2016-03-21 16:39:02,032 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000811, containerExpiryTime=1458603541828, idleTimeout=5000, taskRequestsCount=0, heldContainers=110, delayedContainers=25, isNew=false

    In all cases I've been able to analyze so far, this also correlates with a warning in the node identified in the IOException:

        2016-03-21 16:36:13,641 [WARN] [I/O Setup 2 Initialize: {scope-178}] |retry.RetryInvocationHandler|: A failover has occurred since the start of this method invocation attempt.

    However, it does not appear that any namenode failover has actually occurred (the most recent failover we see in the logs is from 2015).

    Attached:
        syslog_dag_1437886552023_169758_3.gz: syslog for the DAG which hangs
        aggregated logs from the host identified in the IOException

    Apache's JIRA Issue Tracker | 7 months ago | Kurt Muehlner
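
    The container-release lines quoted in item 3 share a regular format (timestamp, containerId, heldContainers, delayedContainers). Below is a minimal sketch, assuming the attached syslog is available as un-gzipped plain text with one entry per line, of scanning it to see how heldContainers falls during the hang window; the class name ContainerReleaseScan and the regular expression are illustrative, not part of Tez or Hadoop.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Hypothetical helper: print every container release logged by
        // rm.YarnTaskSchedulerService together with the heldContainers count,
        // so the drain can be lined up against the time window of the hang.
        public class ContainerReleaseScan {
            private static final Pattern RELEASE = Pattern.compile(
                    "^(\\S+ \\S+).*Releasing container, containerId=([^,]+),.*heldContainers=(\\d+),");

            public static void main(String[] args) throws IOException {
                try (BufferedReader in = Files.newBufferedReader(Paths.get(args[0]))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        Matcher m = RELEASE.matcher(line);
                        if (m.find()) {
                            System.out.printf("%s  released %s  heldContainers=%s%n",
                                    m.group(1), m.group(2), m.group(3));
                        }
                    }
                }
            }
        }

    A steady fall in heldContainers while taskRequestsCount stays at 0, as in the excerpt above, would be consistent with the idle-timeout release path rather than with task failures.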

    Root Cause Analysis

    1. java.lang.InterruptedException

      No message provided

      at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait()
    2. Java RT
      1. java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait()
      2. java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()
      3. java.util.concurrent.LinkedBlockingDeque.takeFirst()
      4. java.util.concurrent.LinkedBlockingDeque.take()
      4 frames
    3. cc.codechecker.api
      1. cc.codechecker.api.runtime.CodecheckerServerThread$
      1 frame
    4. Java RT
      1 frame
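
    The frames above describe a consumer loop: a thread started by CodecheckerServerThread blocks in LinkedBlockingDeque.take(), and the InterruptedException is raised when that thread is interrupted while it waits. A minimal sketch of that pattern follows; only the queue type and the take() call come from the trace, and the class and field names are assumptions.

        import java.util.concurrent.LinkedBlockingDeque;

        // Sketch of a consumer thread blocking on LinkedBlockingDeque.take().
        // When another thread interrupts it (for example during shutdown),
        // take() throws InterruptedException out of
        // AbstractQueuedSynchronizer$ConditionObject, as in the trace above.
        public class QueueWorkerSketch implements Runnable {
            private final LinkedBlockingDeque<Runnable> tasks = new LinkedBlockingDeque<>();

            public void submit(Runnable task) {
                tasks.offerLast(task);
            }

            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        tasks.take().run();   // blocks until a task is available
                    } catch (InterruptedException e) {
                        // Restore the interrupt flag and stop the loop; swallowing
                        // the exception here would hide the shutdown request.
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }

    If the interruption is unexpected, the stack trace only shows where the wait was broken; the thread that issued the interrupt (a shutdown hook, an executor being closed, a watchdog) is what still needs to be identified.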