org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4/.lock File does not exist. Holder DFSClient_-1523444256 does not have any open files.

Pentaho BI Platform Tracking | Tim Lynch | 4 years ago
  1. 0

    The exception occurs at the end of a job, after the job has installed the Kettle files to HDFS. It appears to happen only when the Kettle install path does not yet exist on HDFS, and it is reproducible by deleting that path. Subsequent jobs, for which the Kettle files are already installed, are not affected.

    INFO 17-12 22:20:19,628 - Pentaho MapReduce - Installing Kettle to /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4
    INFO 17-12 22:21:50,213 - Pentaho MapReduce - Kettle successfully installed to /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4
    INFO 17-12 22:21:50,219 - Pentaho MapReduce - Configuring Pentaho MapReduce job to use Kettle installation from /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4
    WARN 17-12 22:21:50,333 - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    WARN 17-12 22:21:51,090 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    WARN 17-12 22:21:51,090 - Snappy native library not loaded
    INFO 17-12 22:21:51,095 - Total input paths to process : 4
    INFO 17-12 22:21:55,153 - Pentaho MapReduce - Setup Complete: 0.0 Mapper Completion: 0.0 Reducer Completion: 0.0
    INFO 17-12 22:22:55,170 - Pentaho MapReduce - Setup Complete: 100.0 Mapper Completion: 50.0 Reducer Completion: 0.0
    INFO 17-12 22:22:55,174 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_m_000005_0 Attempt: attempt_201211072319_19518_m_000005_0 Event: 0
    INFO 17-12 22:22:55,175 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_m_000002_0 Attempt: attempt_201211072319_19518_m_000002_0 Event: 1
    INFO 17-12 22:22:55,176 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_m_000000_0 Attempt: attempt_201211072319_19518_m_000000_0 Event: 2
    INFO 17-12 22:23:55,191 - Pentaho MapReduce - Setup Complete: 100.0 Mapper Completion: 100.0 Reducer Completion: 0.0
    INFO 17-12 22:23:55,192 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_m_000001_0 Attempt: attempt_201211072319_19518_m_000001_0 Event: 3
    INFO 17-12 22:23:55,193 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_m_000003_0 Attempt: attempt_201211072319_19518_m_000003_0 Event: 4
    ERROR 17-12 22:23:55,194 - Pentaho MapReduce - [KILLED] -- Task: attempt_201211072319_19518_m_000001_1 Attempt: attempt_201211072319_19518_m_000001_1 Event: 5
    ERROR 17-12 22:23:55,194 - Pentaho MapReduce - [KILLED] -- Task: attempt_201211072319_19518_m_000003_1 Attempt: attempt_201211072319_19518_m_000003_1 Event: 6
    INFO 17-12 22:24:55,210 - Pentaho MapReduce - Setup Complete: 100.0 Mapper Completion: 100.0 Reducer Completion: 100.0
    INFO 17-12 22:24:55,211 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_r_000000_0 Attempt: attempt_201211072319_19518_r_000000_0 Event: 7
    INFO 17-12 22:24:55,212 - Pentaho MapReduce - [SUCCEEDED] -- Task: attempt_201211072319_19518_m_000004_0 Attempt: attempt_201211072319_19518_m_000004_0 Event: 8
    INFO 17-12 22:24:55,216 - report1 - Finished job entry [Pentaho MapReduce] (result=[true])
    INFO 17-12 22:24:55,216 - report1 - Finished job entry [set-mr_inout] (result=[true])
    INFO 17-12 22:24:55,216 - report1 - Finished job entry [date] (result=[true])
    INFO 17-12 22:24:55,216 - report1 - Finished job entry [set prop] (result=[true])
    INFO 17-12 22:24:55,216 - report1 - Job execution finished
    INFO 17-12 22:24:55,217 - Kitchen - Finished!
    INFO 17-12 22:24:55,217 - Kitchen - Start=2012/12/17 22:20:18.114, Stop=2012/12/17 22:24:55.217
    INFO 17-12 22:24:55,217 - Kitchen - Processing ended after 4 minutes and 37 seconds (277 seconds total).
    ERROR 17-12 22:24:55,222 - Exception closing file /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4/.lock :
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4/.lock File does not exist. Holder DFSClient_-1523444256 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1593)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1584)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1639)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1627)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:687)
        at sun.reflect.GeneratedMethodAccessor291.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4/.lock File does not exist. Holder DFSClient_-1523444256 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1593)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1584)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1639)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1627)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:687)
        at sun.reflect.GeneratedMethodAccessor291.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
        at org.apache.hadoop.ipc.Client.call(Client.java:1107)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy13.complete(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy13.complete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:4035)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3950)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1354)
        at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:306)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:365)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1665)
        at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:1635)

    Pentaho BI Platform Tracking | 4 years ago | Tim Lynch
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4/.lock File does not exist. Holder DFSClient_-1523444256 does not have any open files.
  2. 0

    getting java.lang.RuntimeException: "advancing post rec#0"

    Stack Overflow | 3 years ago | user1585111
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /dummy20.txt/_temporary/_attempt_local_0001_r_000000_0/part-00000 File does not exist. Holder DFSClient_1595916561 does not have any open files.
  3. 0

    LeaseExpiredException: No lease error on HDFS

    Stack Overflow | 5 years ago | zohar
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /data/work/20110926-134514/_temporary/_attempt_201109110407_0167_r_000026_0/hbase/site=3815120/day=20110925/107-107-3815120-20110926-134514-r-00026 File does not exist. Holder DFSClient_attempt_201109110407_0167_r_000026_0 does not have any open files.
  4. 0

    HDFS FileSystem close Exception

    Stack Overflow | 3 years ago | lancerex
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/test File does not exist. Holder DFSClient_NONMAPREDUCE_-1727094995_1 does not have any open files

    Root Cause Analysis

    1. org.apache.hadoop.ipc.RemoteException

      org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /opt/pentaho/mapreduce/4.4.0-1.3.0-cdh3u4/.lock File does not exist. Holder DFSClient_-1523444256 does not have any open files.

      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease()
    2. Apache Hadoop HDFS
      NameNode.complete
      1. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1593)
      2. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1584)
      3. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1639)
      4. org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1627)
      5. org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:687)
      5 frames
    3. Java RT
      Method.invoke
      1. sun.reflect.GeneratedMethodAccessor291.invoke(Unknown Source)
      2. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      3. java.lang.reflect.Method.invoke(Method.java:597)
      3 frames
    4. Hadoop
      Server$Handler$1.run
      1. org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
      2. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
      3. org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
      3 frames
    5. Java RT
      Subject.doAs
      1. java.security.AccessController.doPrivileged(Native Method)
      2. javax.security.auth.Subject.doAs(Subject.java:396)
      2 frames
    6. Hadoop
      RPC$Invoker.invoke
      1. org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
      2. org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
      3. org.apache.hadoop.ipc.Client.call(Client.java:1107)
      4. org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
      4 frames
    7. Unknown
      $Proxy13.complete
      1. $Proxy13.complete(Unknown Source)
      1 frame
    8. Java RT
      Method.invoke
      1. sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
      2. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      3. java.lang.reflect.Method.invoke(Method.java:597)
      3 frames
    9. Hadoop
      RetryInvocationHandler.invoke
      1. org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
      2. org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
      2 frames
    10. Unknown
      $Proxy13.complete
      1. $Proxy13.complete(Unknown Source)
      1 frame
    11. Apache Hadoop HDFS
      DistributedFileSystem.close
      1. org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:4035)
      2. org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3950)
      3. org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1354)
      4. org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:306)
      5. org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:365)
      5 frames
    12. Hadoop
      FileSystem$Cache$ClientFinalizer.run
      1. org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1665)
      2. org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:1635)
      2 frames
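
    The client-side tail of the trace (FileSystem$Cache$ClientFinalizer → FileSystem$Cache.closeAll → DistributedFileSystem.close) shows the exception firing inside Hadoop's JVM shutdown hook, which closes the shared, cached FileSystem instance and tries to complete any output stream still open on it — here the .lock file, whose lease is already gone on the NameNode. When several components share one cached client like this, a common mitigation is to stop sharing it: obtain a private client via FileSystem.newInstance(conf), or disable the cache for the scheme. A minimal, illustrative core-site.xml fragment is sketched below; the property follows Hadoop's fs.<scheme>.impl.disable.cache naming convention, so verify it against the Hadoop version in use:

    ```xml
    <!-- core-site.xml (illustrative sketch): disable the shared FileSystem
         cache for hdfs:// URIs, so each FileSystem.get() returns a private
         instance and closing one client cannot invalidate streams held
         elsewhere in the same JVM. -->
    <property>
      <name>fs.hdfs.impl.disable.cache</name>
      <value>true</value>
    </property>
    ```

    Note that with the cache disabled every FileSystem.get() opens a fresh connection, so each caller becomes responsible for closing the instances it creates.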