java.lang.RuntimeException

There are no available Samebug tips for this exception. Do you have an idea how to solve this issue? A short tip would help users who saw this issue last week.

  • Hive map reduce not working
    via Stack Overflow by Smruti Ranjan
  • Configured an ETLBatch application that reads from a stream and writes to two TPFSAvro sinks. Although the MR job status is complete, the datasets themselves were empty; looking at the logs, we found this exception. We also noticed this in DataCleansingApp, where we use multiple FileSets as output. The issue does not occur with multiple outputs when one of the outputs is a Table and the other is a FileSet.
{noformat}
cturedRecord@379de7b4 to Sink
Main method returned class org.apache.hadoop.mapred.YarnChild
03:13:22.515 [DistributedMapReduceTaskContextProvider STOPPING] WARN c.c.c.i.a.r.b.MapReduceTaskContextProvider - Exception when closing context job=ETLMapReduce, namespaceId=default, applicationId=mirrorApp, program=ETLMapReduce, runid=3931d179-5e7c-11e5-8915-42010af0e95c
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease. Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2983)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2803)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2711)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:608)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at co.cask.cdap.internal.app.runtime.batch.dataset.MultipleOutputs.closeRecordWriters(MultipleOutputs.java:302) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
	at co.cask.cdap.internal.app.runtime.batch.dataset.MultipleOutputs.close(MultipleOutputs.java:285) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
	at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceTaskContext.close(BasicMapReduceTaskContext.java:144) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
	at co.cask.cdap.internal.app.runtime.batch.MapReduceTaskContextProvider.shutDown(MapReduceTaskContextProvider.java:95) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
	at com.google.common.util.concurrent.AbstractIdleService$1$2.run(AbstractIdleService.java:57) [com.google.guava.guava-13.0.1.jar:na]
	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
org.apache.hadoop.ipc.RemoteException: No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease. Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2983)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2803)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2711)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:608)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
	at org.apache.hadoop.ipc.Client.call(Client.java:1411) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
	at org.apache.hadoop.ipc.Client.call(Client.java:1364) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
	at com.sun.proxy.$Proxy12.addBlock(Unknown Source) ~[na:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_75]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_75]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_75]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_75]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
	at com.sun.proxy.$Proxy12.addBlock(Unknown Source) ~[na:na]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
{noformat}
    via Shankar Selvam
  • java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/dssbp/eb828a68-6637-41ea-a05e-d33ae658eb19/hive_2016-08-30_16-41-13_054_4952687673775889960-1/-mr-10004/a48247fe-12cd-4c9f-bf41-df67ffada26d/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
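A hedged tip for the LeaseExpiredException above (not part of the original reports): "No lease on ... File does not exist" under a `_temporary` attempt directory often means two writers raced on the same HDFS output file, for example when speculative execution launched a duplicate task attempt, or when the output committer cleaned up `_temporary` while a writer still held the lease. If that matches your setup, a common first mitigation is to disable speculative execution. Hadoop 2.x property names shown; verify them against your distribution:

```xml
<!-- mapred-site.xml (or a per-job Configuration): stop MapReduce from
     launching duplicate speculative attempts, so two attempts never
     race on the same _temporary output file. -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
```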

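A hedged tip for the "could only be replicated to 0 nodes instead of minReplication (=1)" error above (not part of the original reports): when datanodes are running but the NameNode still chooses zero block targets, the usual suspects are datanodes that are out of disk space, unreachable from the client's network, or marked dead. These stock HDFS diagnostics (run on a node with the hdfs client configured for your cluster) show which it is:

```shell
# List live/dead datanodes with each node's configured, used,
# and remaining DFS capacity.
hdfs dfsadmin -report

# Show overall filesystem size, used, and available space.
hdfs dfs -df -h /
```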