GitHub | nettlep | 8 months ago
  1. IDEA freezes on launch (thread deadlock)

     GitHub | 8 months ago | nettlep
  2. GitHub comment 40#233878814

     GitHub | 9 months ago | Locutus18
  3. We are experiencing occasional application hangs when testing an existing Pig MapReduce script executing on Tez. When this occurs, we find this in the syslog for the executing dag:

     2016-03-21 16:39:01,643 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000822, containerExpiryTime=1458603541415, idleTimeout=5000, taskRequestsCount=0, heldContainers=112, delayedContainers=27, isNew=false
     2016-03-21 16:39:01,825 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000824, containerExpiryTime=1458603541692, idleTimeout=5000, taskRequestsCount=0, heldContainers=111, delayedContainers=26, isNew=false
     2016-03-21 16:39:01,990 [INFO] [Socket Reader #1 for port 53324] |ipc.Server|: Socket Reader #1 for port 53324: readAndProcess from client threw exception [ Connection reset by peer]
     Connection reset by peer
       at Method)
       at org.apache.hadoop.ipc.Server.channelRead(
       at org.apache.hadoop.ipc.Server.access$2800(
       at org.apache.hadoop.ipc.Server$Connection.readAndProcess(
       at org.apache.hadoop.ipc.Server$Listener.doRead(
       at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(
       at org.apache.hadoop.ipc.Server$Listener$
     2016-03-21 16:39:02,032 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_e11_1437886552023_169758_01_000811, containerExpiryTime=1458603541828, idleTimeout=5000, taskRequestsCount=0, heldContainers=110, delayedContainers=25, isNew=false

     In all cases I've been able to analyze so far, this also correlates with a warning in the node identified in the IOException:

     2016-03-21 16:36:13,641 [WARN] [I/O Setup 2 Initialize: {scope-178}] |retry.RetryInvocationHandler|: A failover has occurred since the start of this method invocation attempt.

     However, it does not appear that any namenode failover has actually occurred (the most recent failover we see in logs is from 2015).

     Attached: syslog_dag_1437886552023_169758_3.gz (syslog for the dag which hangs) and aggregated logs from the host identified in the IOException.

     Apache's JIRA Issue Tracker | 1 year ago | Kurt Muehlner
  4. Interrupted Thread Exception

     Google Groups | 5 years ago | TestNG User
     java.lang.InterruptedException: sleep interrupted
       [testng] at java.lang.Thread.sleep(Native Method)
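The "sleep interrupted" message above is the standard symptom of interrupting a thread that is blocked in Thread.sleep. A minimal sketch (illustrative only, not taken from the TestNG report) showing the exception being raised and the usual convention of restoring the interrupt flag afterwards:

```java
// Minimal sketch: interrupting a thread blocked in Thread.sleep makes the
// sleep call throw InterruptedException ("sleep interrupted").
public class SleepInterruptDemo {
    // Returns true if the sleeping thread observed an InterruptedException.
    static boolean sleepWasInterrupted() throws Exception {
        final boolean[] interrupted = {false};
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);                   // would block for a minute
            } catch (InterruptedException e) {
                interrupted[0] = true;                  // sleep was cut short
                Thread.currentThread().interrupt();     // restore the interrupt flag
            }
        });
        sleeper.start();
        sleeper.interrupt();   // even if this lands before sleep() starts,
                               // sleep() throws immediately on a set flag
        sleeper.join();
        return interrupted[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println("interrupted: " + sleepWasInterrupted());
    }
}
```

Swallowing the exception silently hides the cancellation signal; re-setting the flag lets callers further up the stack see that an interrupt occurred.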

    Root Cause Analysis

    1. java.lang.InterruptedException

      No message provided

      at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait()
    2. Java RT
      1. java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(
      2. java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(
      3. java.util.concurrent.LinkedBlockingDeque.takeFirst(
      4. java.util.concurrent.LinkedBlockingDeque.take(
      4 frames
    3. net.groboclown.idea
      1. net.groboclown.idea.p4ic.v2.server.connection.ServerConnection.pullNextAction(
      2. net.groboclown.idea.p4ic.v2.server.connection.ServerConnection.access$1100(
      3. net.groboclown.idea.p4ic.v2.server.connection.ServerConnection$
      3 frames
    4. Java RT
      1 frame
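The frames above show a worker thread parked in LinkedBlockingDeque.take() (the "Java RT" frames) beneath ServerConnection.pullNextAction, and then unblocked by an interrupt. A minimal sketch of that consumer-loop pattern; the class and method names below are illustrative and are not the actual net.groboclown.idea.p4ic ServerConnection code:

```java
import java.util.concurrent.LinkedBlockingDeque;

// Sketch of a worker loop blocked on a queue, where interruption serves as
// the shutdown signal. take() parks the thread (as in the trace above) and
// throws InterruptedException when the thread is interrupted.
public class QueueWorkerDemo {
    // Runs queued actions until interrupted; returns how many were processed.
    static int drainUntilInterrupted(LinkedBlockingDeque<Runnable> queue) {
        int processed = 0;
        while (true) {
            try {
                queue.take().run();    // blocks when the queue is empty
                processed++;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return processed;                   // treat interrupt as shutdown
            }
        }
    }

    public static void main(String[] args) throws Exception {
        LinkedBlockingDeque<Runnable> queue = new LinkedBlockingDeque<>();
        queue.put(() -> System.out.println("action 1"));
        queue.put(() -> System.out.println("action 2"));
        Thread worker = new Thread(() -> drainUntilInterrupted(queue));
        worker.start();
        Thread.sleep(200);     // let the worker drain the queue and block
        worker.interrupt();    // unblocks take() instead of hanging forever
        worker.join();
    }
}
```

An InterruptedException surfacing from take() like this is normal teardown behavior for such a worker, which is why this trace often appears during shutdown rather than indicating the deadlock itself.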