
Recommended solutions based on your search

Solutions on the web

via Google Groups by TJ Giuli, 10 months ago
No lease on /druid/delta_netflow/20160704T230000.000Z_20160705T000000.000Z/2016-07-04T23_00_05.235Z/0/index.zip (inode 18375798): File does not exist. Holder DFSClient_NONMAPREDUCE_-1070421989_1 does not have any open files.
via GitHub by tjgiuli, 1 year ago
No lease on /druid/delta_netflow/20160704T230000.000Z_20160705T000000.000Z/2016-07-04T23_00_05.235Z/0/index.zip (inode 18375798): File does not exist. Holder DFSClient_NONMAPREDUCE_-1070421989_1 does not have any open files.
via Apache's JIRA Issue Tracker by Josh Rosen, 1 year ago
No lease on /test7/_temporary/_attempt_201501071517_0000_m_000000_120/part-00000: File does not exist. Holder DFSClient_NONMAPREDUCE_-469253416_73 does not have any open files.
via Cask Community Issue Tracker by Shankar Selvam, 2 years ago
No lease on /cdap/namespaces/default/data/mirrorTable2/2015-09-19/03-12.1442632356767/_temporary/1/_temporary/attempt_1442608527883_0035_m_000000_0/part-m-00000.avro: File does not exist. [Lease. Holder: DFSClient_attempt_1442608527883_0035_m_000000_0_1188726900_1, pendingcreates: 2]
via iteye.com by Unknown author, 2 years ago
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on ***.lzo File does not exist. Holder ** does not have any open files.
via Stack Overflow by bndg, 2 years ago
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /xxx/temp/_temporary/_attempt_201307111759_7495_m_000527_0/part-m-00527 File does not exist. Holder DFSClient_attempt_201307111759_7495_m_000527_0 does not have any open files.
org.apache.hadoop.ipc.RemoteException: No lease on /druid/delta_netflow/20160704T230000.000Z_20160705T000000.000Z/2016-07-04T23_00_05.235Z/0/index.zip (inode 18375798): File does not exist. Holder DFSClient_NONMAPREDUCE_-1070421989_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3446)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3416)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:675)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.complete(AuthorizationProviderProxyClientProtocol.java:219)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:520)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)[?:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)[?:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)[?:?]
    at com.sun.proxy.$Proxy63.complete(Unknown Source)[?:?]
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:435)[?:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[?:1.7.0_67]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[?:1.7.0_67]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[?:1.7.0_67]
    at java.lang.reflect.Method.invoke(Method.java:606)[?:1.7.0_67]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)[?:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)[?:?]
    at com.sun.proxy.$Proxy64.complete(Unknown Source)[?:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2164)[?:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2148)[?:?]
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)[?:?]
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)[?:?]
    at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:241)[?:1.7.0_67]
    at java.util.zip.ZipOutputStream.close(ZipOutputStream.java:360)[?:1.7.0_67]
    at com.metamx.common.CompressionUtils.zip(CompressionUtils.java:104)[java-util-0.27.7.jar:?]
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:92)[?:?]
    at io.druid.segment.realtime.plumber.RealtimePlumber$4.doRun(RealtimePlumber.java:550)[druid-server-0.9.0.jar:0.9.0]
    at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)[druid-common-0.9.0.jar:0.9.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[?:1.7.0_67]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[?:1.7.0_67]
    at java.lang.Thread.run(Thread.java:745)[?:1.7.0_67]