org.apache.hadoop.ipc.RemoteException: File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

hadoop-hdfs-dev | Apache Jenkins Server | 1 year ago
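
Background: the NameNode raises this error when a client asks for a new block and the block placement policy cannot find a single usable datanode; "excluded" nodes are datanodes ruled out for this particular write, typically after a failed pipeline, a full disk, or a network mismatch between client and datanode. Any plain HDFS write can surface it. A minimal sketch of such a write (class name is illustrative; assumes fs.defaultFS points at the cluster):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TmpWrite {
        public static void main(String[] args) throws Exception {
            // Any create()+write() goes through DFSOutputStream, whose DataStreamer
            // thread calls addBlock() on the NameNode. If every live datanode is
            // full, unreachable, or already on the excluded list, that call fails
            // with the RemoteException shown above.
            Configuration conf = new Configuration(); // assumes fs.defaultFS is configured
            try (FileSystem fs = FileSystem.get(conf);
                 FSDataOutputStream out = fs.create(new Path("/tmp.txt"))) {
                out.writeBytes("hello"); // buffered client-side; block allocation happens on flush/close
            }
        }
    }
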
Your exception is missing from the Samebug knowledge base.
Here are the best solutions we found on the Internet.

  1. Hadoop-Hdfs-trunk-Java8 - Build # 749 - Still Failing
    hadoop-hdfs-dev | 1 year ago | Apache Jenkins Server
    org.apache.hadoop.ipc.RemoteException: File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

  2. Hadoop-Mapreduce-trunk - Build # 2008 - Still Failing
    hadoop-mapreduce-dev | 2 years ago | Apache Jenkins Server
    org.apache.hadoop.ipc.RemoteException: File /user/jenkins/target/MiniMRCluster_2146462737-tmpDir/hadoop-5999631797445097776.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.

  3. Files in HDFS remains blank even if streams deployed successfully
    Stack Overflow | 11 months ago | jitendra singh
    org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

  4. Configured an ETLBatch application that reads from a stream and writes to two TPFSAvro sinks. Although the MR job status is complete, the datasets themselves were empty, and looking at the logs we found this exception. We also noticed it in DataCleansingApp, where we use multiple file-sets as output. The issue does not occur on multiple outputs when one of the outputs is a Table and the other is a Fileset. A mitigation sketch follows this list.

    ...cturedRecord@379de7b4 to Sink
    Main method returned class org.apache.hadoop.mapred.YarnChild
    03:13:22.515 [DistributedMapReduceTaskContextProvider STOPPING] WARN c.c.c.i.a.r.b.MapReduceTaskContextProvider - Exception when closing context job=ETLMapReduce, namespaceId=default, applicationId=mirrorApp, program=ETLMapReduce, runid=3931d179-5e7c-11e5-8915-42010af0e95c
    java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease. Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2983)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2803)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2711)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:608)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
      at co.cask.cdap.internal.app.runtime.batch.dataset.MultipleOutputs.closeRecordWriters(MultipleOutputs.java:302) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
      at co.cask.cdap.internal.app.runtime.batch.dataset.MultipleOutputs.close(MultipleOutputs.java:285) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
      at co.cask.cdap.internal.app.runtime.batch.BasicMapReduceTaskContext.close(BasicMapReduceTaskContext.java:144) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
      at co.cask.cdap.internal.app.runtime.batch.MapReduceTaskContextProvider.shutDown(MapReduceTaskContextProvider.java:95) ~[co.cask.cdap.cdap-app-fabric-3.2.0-SNAPSHOT.jar:na]
      at com.google.common.util.concurrent.AbstractIdleService$1$2.run(AbstractIdleService.java:57) [com.google.guava.guava-13.0.1.jar:na]
      at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
    org.apache.hadoop.ipc.RemoteException: No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease. Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2983)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2803)
      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2711)
      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:608)
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
      at org.apache.hadoop.ipc.Client.call(Client.java:1411) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
      at org.apache.hadoop.ipc.Client.call(Client.java:1364) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
      at com.sun.proxy.$Proxy12.addBlock(Unknown Source) ~[na:na]
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_75]
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_75]
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_75]
      at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_75]
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) ~[hadoop-common-2.4.0.2.1.15.0-946.jar:na]
      at com.sun.proxy.$Proxy12.addBlock(Unknown Source) ~[na:na]
      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]
      at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525) ~[hadoop-hdfs-2.4.0.2.1.15.0-946.jar:na]

    Cask Community Issue Tracker | 2 years ago | Shankar Selvam
    org.apache.hadoop.ipc.RemoteException: No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease. Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]

  5. Error while running a hadoop index task.
    Google Groups | 11 months ago | Bikash
    org.apache.hadoop.ipc.RemoteException: File /tmp/druid-indexing/classpath/jersey-core-1.19.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
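
The replication reports above all point the same way: the cluster nominally has live datanodes ("There are N datanode(s) running"), yet every one of them is excluded for the write, most often because the datanodes have no free space or are unreachable from the client. The CLI check is hdfs dfsadmin -report; the same information is available programmatically. A diagnostic sketch (class name is illustrative; assumes fs.defaultFS points at the cluster):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class DatanodeCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            if (!(fs instanceof DistributedFileSystem)) {
                throw new IllegalStateException("fs.defaultFS does not point at HDFS");
            }
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Roughly what `hdfs dfsadmin -report` prints: a datanode with no
            // remaining space is reported as live but is excluded from every
            // write pipeline, producing the "replicated to 0 nodes" error.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.printf("%s remaining=%d of %d bytes%n",
                        dn.getHostName(), dn.getRemaining(), dn.getCapacity());
            }
        }
    }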
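
The LeaseExpiredException in report 4 is a different failure mode: the file under the _temporary/.../attempt_... path had already been moved or deleted when the writer tried to add a block, which commonly happens when two task attempts (for example, a speculative duplicate) race on the same output. A frequently cited mitigation is disabling speculative execution for the job; a sketch, assuming a standard MapReduce Job setup (job name is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class NoSpeculation {
        public static Job newJob() throws Exception {
            Configuration conf = new Configuration();
            // With speculation on, a duplicate attempt can commit first; the
            // loser's _temporary directory is removed and its still-open writer
            // then fails with "No lease on ...: File does not exist."
            conf.setBoolean("mapreduce.map.speculative", false);
            conf.setBoolean("mapreduce.reduce.speculative", false);
            return Job.getInstance(conf, "no-speculation"); // hypothetical job name
        }
    }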


    Root Cause Analysis

    1. org.apache.hadoop.ipc.RemoteException

      File /tmp.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

      at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock()
    2. Apache Hadoop HDFS
      ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod
      1. org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1750)
      2. org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:299)
      3. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2390)
      4. org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:797)
      5. org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
      6. org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
      6 frames
    3. Hadoop
      Server$Handler$1.run
      1. org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
      2. org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
      3. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2305)
      4. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2301)
      4 frames
    4. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:422)
      2 frames
    5. Hadoop
      ProtobufRpcEngine$Invoker.invoke
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1705)
      2. org.apache.hadoop.ipc.Server$Handler.run(Server.java:2301)
      3. org.apache.hadoop.ipc.Client.call(Client.java:1448)
      4. org.apache.hadoop.ipc.Client.call(Client.java:1385)
      5. org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
      5 frames
    6. com.sun.proxy
      $Proxy18.addBlock
      1. com.sun.proxy.$Proxy18.addBlock(Unknown Source)
      1 frame
    7. Apache Hadoop HDFS
      ClientNamenodeProtocolTranslatorPB.addBlock
      1. org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:405)
      1 frame
    8. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:483)
      4 frames
    9. Hadoop
      RetryInvocationHandler.invoke
      1. org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:255)
      2. org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
      2 frames
    10. com.sun.proxy
      $Proxy19.addBlock
      1. com.sun.proxy.$Proxy19.addBlock(Unknown Source)
      1 frame
    11. Apache Hadoop HDFS
      DataStreamer.run
      1. org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:917)
      2. org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1685)
      3. org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1495)
      4. org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:595)
      4 frames