java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/dssbp/eb828a68-6637-41ea-a05e-d33ae658eb19/hive_2016-08-30_16-41-13_054_4952687673775889960-1/-mr-10004/a48247fe-12cd-4c9f-bf41-df67ffada26d/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.

Stack Overflow | Smruti Ranjan | 3 months ago
  1. 0

    Hive map reduce not working

    Stack Overflow | 3 months ago | Smruti Ranjan
    java.lang.RuntimeException: Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/dssbp/eb828a68-6637-41ea-a05e-d33ae658eb19/hive_2016-08-30_16-41-13_054_4952687673775889960-1/-mr-10004/a48247fe-12cd-4c9f-bf41-df67ffada26d/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
  2. 0

    Configured an ETLBatch application that reads from a stream and writes to two TPFSAvro sinks. Although the MR job status was complete, the datasets themselves were empty, and looking at the logs we found this exception. We also noticed this in DataCleansingApp, where we use multiple file-sets as output. This issue does not occur on multi-output jobs when one of the outputs is a Table and the other is a Fileset. {noformat}
    cturedRecord@379de7b4 to Sink Main method returned class org.apache.hadoop.mapred.YarnChild
    03:13:22.515 [DistributedMapReduceTaskContextProvider STOPPING] WARN c.c.c.i.a.r.b.MapReduceTaskContextProvider - Exception when closing context job=ETLMapReduce, namespaceId=default, applicationId=mirrorApp, program=ETLMapReduce, runid=3931d179-5e7c-11e5-8915-42010af0e95c
    java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease.  Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2983)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2803)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:608)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
        at co.cask.cdap.internal.app.runtime.batch.dataset.MultipleOutputs.closeRecordWriters(MultipleOutputs.java:302) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
        at co.cask.cdap.internal.app.runtime.batch.dataset.MultipleOutputs.close(MultipleOutputs.java:285) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
        at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceTaskContext.close(BasicMapReduceTaskContext.java:144) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
        at co.cask.cdap.internal.app.runtime.batch.MapReduceTaskContextProvider.shutDown(MapReduceTaskContextProvider.java:95) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
        at com.google.common.util.concurrent.AbstractIdleService$1$2.run(AbstractIdleService.java:57) [com.google.guava.guava-13.0.1.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
    org.apache.hadoop.ipc.RemoteException: No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease.  Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2983)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2803)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2711)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:608)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
        at org.apache.hadoop.ipc.Client.call(Client.java:1411) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1364) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
        at com.sun.proxy.$Proxy12.addBlock(Unknown Source) ~[na:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_75]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_75]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_75]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_75]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
        at com.sun.proxy.$Proxy12.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na] {noformat}

    Cask Community Issue Tracker | 1 year ago | Shankar Selvam
    java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease.  Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
  3. 0

    Hive errors and fixes when using the default Derby database - ITeye tech site

    iteye.com | 8 months ago
    java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/1069947e-cafd-4538-99d3-e4e8b5187380. Name node is in safe mode. The reported blocks 234 has reached the threshold 0.9990 of total blocks 234. The number of live datanodes 3 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 4 seconds.
  4. 0

    Error while running a Hadoop index task

    Google Groups | 8 months ago | Bikash
    java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
  5. 0

    HDFS write permission denied when starting the master

    Google Groups | 2 weeks ago | 陈布
    java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=cdap, access=WRITE, inode="/user/cdap":hdfs:hdfs:drwxr-xr-x
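    For the AccessControlException in the last entry, the usual remedy is to create the service user's HDFS home directory as the HDFS superuser. A minimal sketch, assuming the `hdfs` CLI is configured for the affected cluster and that an `hdfs` superuser account exists (the guard lets the script exit cleanly on machines without Hadoop installed):

    ```shell
    # Hedged sketch: create the missing home directory for the 'cdap' user,
    # then hand ownership to that user so it can write under /user/cdap.
    if command -v hdfs >/dev/null 2>&1; then
      sudo -u hdfs hdfs dfs -mkdir -p /user/cdap
      sudo -u hdfs hdfs dfs -chown cdap:cdap /user/cdap
    else
      echo "hdfs CLI not found; run these commands on a cluster node"
    fi
    ```

    The directory and user names here come from the error message itself; adjust them if your service runs as a different user.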


    Root Cause Analysis

    1. java.lang.RuntimeException

      Error caching map.xml: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/dssbp/eb828a68-6637-41ea-a05e-d33ae658eb19/hive_2016-08-30_16-41-13_054_4952687673775889960-1/-mr-10004/a48247fe-12cd-4c9f-bf41-df67ffada26d/map.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.

      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock()
    2. Apache Hadoop HDFS
      ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod
      1. org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
      2. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
      3. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
      4. org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
      5. org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
      6. org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      6 frames
    3. Hadoop
      Server$Handler$1.run
      1. org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
      2. org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
      3. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
      4. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
      4 frames
    4. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:415)
      2 frames
    5. Hadoop
      Server$Handler.run
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
      2. org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
      2 frames