java.io.FileNotFoundException: /Users/badlogic/logstash/data/elasticsearch/nodes/0/indices/logstash-2013.11.05/3/index/segments_2 (Too many open files)

JIRA | Mario Zechner | 3 years ago
  1. 0

    I tried to import a 5GB Apache 2 access log into logstash using the following configuration (adapted from http://www.logstash.net/docs/1.1.12/tutorials/10-minute-walkthrough/):

    == apache.log ==============================================
    input {
      tcp { type => "apache" port => 3333 }
    }
    filter {
      grok { type => "apache" pattern => "%{COMBINEDAPACHELOG}" }
      date { type => "apache" match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] }
    }
    output {
      elasticsearch { embedded => true }
    }
    == apache.log ==============================================

    The access log file was fed to logstash via nc, again as shown at the link above:

    nc localhost 3333 < access.log

    I checked the status by querying ES via curl as well as trying to get Kibana to do its thing. In Kibana I set the start/end dates to 1.1.2010 through today, which I believe might have been an issue. Kibana constantly timed out, saying it couldn't access the local ES instance. After 2 hours of indexing, logstash started to log the following:

    == logstash log messages ==============================================
    log4j, [2014-03-14T20:34:37.804] WARN: org.elasticsearch.index.engine.robin: [Gill, Donald "Donny"] [logstash-2013.11.05][3] failed to read latest segment infos on flush
    java.io.FileNotFoundException: /Users/badlogic/logstash/data/elasticsearch/nodes/0/indices/logstash-2013.11.05/3/index/segments_2 (Too many open files)
        at java.io.RandomAccessFile.open(Native Method)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
        at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
        at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:127)
        at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
        at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
        at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:471)
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:324)
        at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
        at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
        at org.elasticsearch.index.engine.robin.RobinEngine.readLastCommittedSegmentsInfo(RobinEngine.java:296)
        at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:952)
        at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:563)
        at org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:186)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
    == logstash log messages ==============================================

    This actually indicates an issue in ES, but I'm not knowledgeable enough about the ES/logstash/Kibana connection.
    Eventually the logstash process quit with:

    == logstash log messages ==============================================
    Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (IOError) Too many open files
        at org.jruby.RubyIO.select(org/jruby/RubyIO.java:3635)
        at RUBY.each_connection(jar:file:/Users/badlogic/logstash/logstash-1.3.3-flatjar.jar!/ftw/server.rb:98)
        at RUBY.run(file:/Users/badlogic/logstash/logstash-1.3.3-flatjar.jar!/rack/handler/ftw.rb:95)
        at RUBY.run(file:/Users/badlogic/logstash/logstash-1.3.3-flatjar.jar!/logstash/kibana.rb:101)
    == logstash log messages ==============================================

    You can get the entire access log at libgdx.badlogicgames.com/access.log. Everything was executed on localhost, with the following configuration:

    Mac OS X 10.9.2
    java version "1.7.0_51"
    Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

    JIRA | 3 years ago | Mario Zechner
    java.io.FileNotFoundException: /Users/badlogic/logstash/data/elasticsearch/nodes/0/indices/logstash-2013.11.05/3/index/segments_2 (Too many open files)
  2. 0

    open file descriptors leak (somehow related to lucene schema-index)

    GitHub | 3 years ago | tbaum
    java.io.FileNotFoundException: /usr/local/Cellar/tomcat/8.0.9/libexec/bin/data/neo4j/ab/schema/index/lucene/14/_an2.fdx (Too many open files)
  3. 0

    Too many open files

    GitHub | 2 years ago | fxprunayre
    java.io.FileNotFoundException: /home/francois/dev/eea-geonetwork/web/src/main/webapp/WEB-INF/data/index/spatialindex.dbf (Too many open files)
  4. 0

    random FileNotFoundExceptions when performing a snapshot

    GitHub | 2 years ago | OlegYch
    org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException: [my_idx_redacted_7][10] Failed to perform snapshot (index files)
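
Every entry above fails the same way: the process exhausted its file descriptor limit, so even opening a file that exists (here Lucene's segments_2) throws FileNotFoundException with "(Too many open files)". The limit is per process (see ulimit -n; the soft limit on Mac OS X of that era was typically only 256), so the fix is either to raise the limit, as Elasticsearch's documentation recommends before bulk indexing, or to find the leak that is consuming descriptors. A first diagnostic step is to watch descriptor usage from inside the JVM. Below is a minimal sketch, assuming a HotSpot JVM on a Unix-like OS; FdMonitor is a made-up name, and com.sun.management.UnixOperatingSystemMXBean is a HotSpot-specific API that is not available on every JVM:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdMonitor {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (!(os instanceof UnixOperatingSystemMXBean)) {
                System.err.println("Descriptor counts not exposed on this JVM/OS");
                return;
            }
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // Poll every 10 seconds; a count that climbs steadily and never
            // drops back is the signature of a descriptor leak.
            while (true) {
                System.out.printf("open fds: %d / max %d%n",
                        unix.getOpenFileDescriptorCount(),
                        unix.getMaxFileDescriptorCount());
                Thread.sleep(10000);
            }
        }
    }

The same numbers are visible externally with lsof -p <pid> | wc -l, which also shows which paths are being held open.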

Users who encountered this exception:
  1. Andreas Häber, 1 time, last 2 weeks ago
  2. tyson925, 5 times, last 7 months ago
  3. rp, 13 times, last 8 months ago
  ...plus 30 unregistered visitors

Root Cause Analysis

  1. java.io.FileNotFoundException

    /Users/badlogic/logstash/data/elasticsearch/nodes/0/indices/logstash-2013.11.05/3/index/segments_2 (Too many open files)

    at java.io.RandomAccessFile.open()
  2. Java RT
    RandomAccessFile.<init>
    1. java.io.RandomAccessFile.open(Native Method)
    2. java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
    2 frames
  3. Lucene
    FilterDirectory.openInput
    1. org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:388)
    2. org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:127)
    3. org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:80)
    4. org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:80)
    4 frames
  4. ElasticSearch
    Store$StoreDirectory.openInput
    1. org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:471)
    1 frame
  5. Lucene
    SegmentInfos.read
    1. org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:324)
    2. org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:404)
    3. org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:843)
    4. org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:694)
    5. org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:400)
    5 frames
  6. org.elasticsearch.index
    RobinEngine.flush
    1. org.elasticsearch.index.engine.robin.RobinEngine.readLastCommittedSegmentsInfo(RobinEngine.java:296)
    2. org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:952)
    2 frames
  7. ElasticSearch
    TranslogService$TranslogBasedFlush$1.run
    1. org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:563)
    2. org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:186)
    2 frames
  8. Java RT
    Thread.run
    1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    3. java.lang.Thread.run(Thread.java:744)
    3 frames
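
The top frame, java.io.RandomAccessFile.open, is where the exhausted limit finally surfaces, not where the descriptors were consumed: every open that is never matched by a close stays charged against the process until the cap is hit, at which point unrelated opens, like this segments_2 read, begin to fail. On Java 7, which the reporter was running, try-with-resources is the idiomatic guard against such leaks. A minimal sketch with made-up names (DescriptorHygiene and its methods are illustrations, not Lucene or Elasticsearch code):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class DescriptorHygiene {
        // Leaky: if readLong() throws, or the caller forgets close(),
        // the underlying descriptor stays open for the life of the process.
        static long readFirstLongLeaky(String path) throws IOException {
            RandomAccessFile raf = new RandomAccessFile(path, "r");
            return raf.readLong(); // descriptor never released
        }

        // Safe: try-with-resources closes the file on every exit path,
        // normal or exceptional.
        static long readFirstLong(String path) throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                return raf.readLong();
            }
        }
    }

If raising ulimit only delays the crash rather than preventing it, the leak hunt belongs in whatever code path the lsof output shows accumulating handles.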