java.io.FileNotFoundException

There are no available Samebug tips for this exception. Do you have an idea how to solve this issue? A short tip would help users who saw this issue last week.

  • I tried to import a 5 GB Apache 2 access log into logstash using the following configuration (adapted from http://www.logstash.net/docs/1.1.12/tutorials/10-minute-walkthrough/):

    == apache.log ==============================================
    input {
      tcp {
        type => "apache"
        port => 3333
      }
    }
    filter {
      grok {
        type => "apache"
        pattern => "%{COMBINEDAPACHELOG}"
      }
      date {
        type => "apache"
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
    }
    output {
      elasticsearch {
        embedded => true
      }
    }
    == apache.log ==============================================

    The access log file was fed to logstash via nc, again as shown at the link above:

    =================================================
    nc localhost 3333 < access.log
    =================================================

    I checked the status by querying ES via curl as well as trying to get Kibana to do its thing. In Kibana I set the start/end dates to 1.1.2010 through today, which I believe might have been an issue. Kibana constantly timed out, saying it couldn't access the local ES instance.

    After 2 hours of indexing, logstash started to log the following:

    == logstash log messages ==============================================
    log4j, [2014-03-14T20:34:37.804] WARN: org.elasticsearch.index.engine.robin: [Gill, Donald "Donny"] [logstash-2013.11.05][3] failed to read latest segment infos on flush
    java.io.FileNotFoundException: /Users/badlogic/logstash/data/elasticsearch/nodes/0/indices/logstash-2013.11.05/3/index/segments_2 (Too many open files)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
        at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
        at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:127)
        at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
        at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
        at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:471)
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:324)
        at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
        at org.elasticsearch.index.engine.robin.RobinEngine.readLastCommittedSegmentsInfo(RobinEngine.java:296)
        at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:952)
        at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:563)
        at org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:186)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
    == logstash log messages ==============================================

    This actually indicates an issue in ES, but I'm not knowledgeable enough about the ES/logstash/Kibana connection. Eventually the logstash process quit with:

    == logstash log messages ==============================================
    Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (IOError) Too many open files
        at org.jruby.RubyIO.select(org/jruby/RubyIO.java:3635)
        at RUBY.each_connection(jar:file:/Users/badlogic/logstash/logstash-1.3.3-flatjar.jar!/ftw/server.rb:98)
        at RUBY.run(file:/Users/badlogic/logstash/logstash-1.3.3-flatjar.jar!/rack/handler/ftw.rb:95)
        at RUBY.run(file:/Users/badlogic/logstash/logstash-1.3.3-flatjar.jar!/logstash/kibana.rb:101)
    == logstash log messages ==============================================

    You can get the entire access log at libgdx.badlogicgames.com/access.log

    Everything was executed on localhost, with the following configuration:

    Mac OS X 10.9.2
    java version "1.7.0_51"
    Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
    by Mario Zechner
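A short tip for this report: both stack traces fail with "Too many open files", which means the logstash process (and its embedded Elasticsearch) hit the per-process file-descriptor limit — a limit that is quite low by default on OS X. A minimal sketch of checking and raising the soft limit in the shell that launches logstash (the value 4096 is illustrative, not a recommendation; Elasticsearch generally wants it higher):

```shell
#!/bin/sh
# Show the current soft limit on open file descriptors for this shell.
ulimit -n

# Raise the soft limit for this shell and any child processes started
# from it (e.g. logstash with its embedded Elasticsearch instance).
ulimit -n 4096
ulimit -n
```

Note that `ulimit -n` can only raise the soft limit up to the hard limit; on OS X the hard limit itself may need raising first (e.g. via `sudo launchctl limit maxfiles`) before larger values are accepted. Running a standalone Elasticsearch node with a properly raised limit, instead of `embedded => true`, is another common way around this.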
  • Unable to zip files using zip4j
    via Stack Overflow by kittu
  • Too many open files
    via GitHub by fxprunayre
  • Error On Connection to Database
    via GitHub by codeniac
  • JWNL for word stemming
    via Stack Overflow by Mehar Ali
  • Run gradle from perl script
    via Unknown author
    • java.io.FileNotFoundException: /Users/badlogic/logstash/data/elasticsearch/nodes/0/indices/logstash-2013.11.05/3/index/segments_2 (Too many open files)
          at java.io.RandomAccessFile.open(Native Method)
          at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
          at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
          at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:127)
          at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
          at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
          at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:471)
          at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:324)
          at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
          at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
          at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
          at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
          at org.elasticsearch.index.engine.robin.RobinEngine.readLastCommittedSegmentsInfo(RobinEngine.java:296)
          at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:952)
          at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:563)
          at org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:186)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:744)
