hdfs.DFSClient: Failed to close file /tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split File does not exist. Holder DFSClient_NONMAPREDUCE_426075520_1 does not have any open files.

couchbase.com | 5 months ago
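
A common way to hit the "No lease on ... File does not exist" close failure above is for a second client (a cleanup task, a duplicate job submission, or /tmp housekeeping) to delete the file or its staging directory while the original writer still has it open. The sketch below is a hypothetical, minimal reproduction, not code from any of the reports; it assumes fs.defaultFS points at a running HDFS cluster and reuses the staging path from the error message purely for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseExpiredRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path split = new Path(
            "/tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split");

        // Writer 1 creates the file and holds the HDFS lease on it.
        FSDataOutputStream out = fs.create(split, true);
        out.writeBytes("split metadata");

        // A second, independent client deletes the staging directory while the
        // first writer still has the file open.
        FileSystem other = FileSystem.newInstance(conf);
        other.delete(split.getParent(), true);

        // The original writer's close() now asks the NameNode to finalize a file
        // that no longer exists; the NameNode replies with
        // LeaseExpiredException: No lease on ... File does not exist.
        out.close();
    }
}
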
  1. 1

    Trouble getting hadoop connector to work - Other Client Libraries - Couchbase Forums

    couchbase.com | 5 months ago
    hdfs.DFSClient: Failed to close file /tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split File does not exist. Holder DFSClient_NONMAPREDUCE_426075520_1 does not have any open files.
  2. 0

    Datanode is not showing up on hitting jps command

    Stack Overflow | 2 years ago | Qasim
    hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
  3. 0

    [HDFS-8093] BP does not exist or is not under Constructionnull - ASF JIRA

    apache.org | 2 years ago
    hdfs.DFSClient: Failed to close inode 19801755 org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 does not exist or is not under Constructionnull
  4. 0

    Running a Wordcount Mapreduce example in Hadoop 2.4.1 Single-node Cluster in Ubuntu 14.04 (64-bit) | Explore. Learn. Spread.

    kishorer.in | 1 year ago
    hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/hduser._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
  5. 0

    File /hbase/hbase.version could only be replicated to 0 nodes instead of minRepl

    programru.com | 1 year ago
    hdfs.DFSClient: DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

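Several of the related reports above fail with "could only be replicated to 0 nodes instead of minReplication (=1)", which usually means the NameNode currently sees no usable DataNodes. A hedged Java sketch (not taken from those reports) that asks the NameNode for its DataNode list before attempting a write; it assumes fs.defaultFS points at an HDFS cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        if (!(fs instanceof DistributedFileSystem)) {
            System.err.println("Not talking to HDFS: " + fs.getUri());
            return;
        }
        // Ask the NameNode which DataNodes it knows about; an empty array here
        // matches "There are 0 datanode(s) running" in the errors above.
        DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
        System.out.println("DataNodes reported by the NameNode: " + nodes.length);
        for (DatanodeInfo node : nodes) {
            System.out.println("  " + node.getHostName());
        }
    }
}
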
Root Cause Analysis

  1. hdfs.DFSClient

    Failed to close file /tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hadoop-yarn/staging/root/.staging/job_1376643060317_0003/job.split File does not exist. Holder DFSClient_NONMAPREDUCE_426075520_1 does not have any open files.

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease()
  2. Apache Hadoop HDFS
    ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod
    1. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2308)
    2. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2299)
    3. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2095)
    4. org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:471)
    5. org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:297)
    6. org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44080)
    6 frames
  3. Hadoop
    Server$Handler$1.run
    1. org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    2. org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
    3. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
    4. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
    4 frames
  4. Java RT
    Subject.doAs
    1. java.security.AccessController.doPrivileged(Native Method)
    2. javax.security.auth.Subject.doAs(Subject.java:415)
    2 frames
  5. Hadoop
    Server$Handler.run
    1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    2. org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
    2 frames