java.io.IOException: Too many open files

There are no Samebug tips available for this exception yet. Do you know how to solve this issue? A short tip would help the users who hit it last week.
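
One possible tip: "Too many open files" means the process has hit its open file descriptor limit. On Linux, check the limit with ulimit -n and count what the JVM currently holds with ls /proc/<pid>/fd | wc -l. Raising the limit helps, but the usual root cause is descriptors that are never closed, such as streams, sockets, and NIO selectors. A minimal Java sketch (the class name FdHygiene is just for illustration) that uses try-with-resources so descriptors are released even when something throws:

    import java.io.IOException;
    import java.nio.channels.Selector;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class FdHygiene {
        public static void main(String[] args) throws IOException {
            // Selector.open() consumes several descriptors (on Linux an epoll fd
            // plus a wakeup pipe, which is the initPipe call in the trace below);
            // try-with-resources guarantees they are released.
            try (Selector selector = Selector.open()) {
                // register channels and call selector.select() here
            }

            // Directory listings hold a descriptor until the stream is closed.
            try (Stream<Path> entries = Files.list(Paths.get("."))) {
                entries.forEach(System.out::println);
            }
        }
    }

If the count under /proc/<pid>/fd keeps climbing while the workload is steady, something is leaking handles and a bigger limit only delays the failure.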

  • Hadoop-Hdfs-trunk - Build # 547 - Still Failing
    by Apache Hudson Server
  • Hadoop-Hdfs-trunk - Build # 549 - Still Failing
    by Apache Hudson Server
  • Hadoop-Hdfs-trunk - Build # 540 - Still Failing
    by Apache Hudson Server
  • Hadoop-Hdfs-trunk - Build # 542 - Still Failing
    by Apache Hudson Server
  • Hadoop-Hdfs-trunk - Build # 561 - Still Failing
    by Apache Hudson Server
  • Hadoop-Hdfs-trunk - Build # 562 - Still Failing
    by Apache Hudson Server
  • System getting stuck using elastic search
    by Prashanth
    • java.io.IOException: Too many open files
      at sun.nio.ch.IOUtil.initPipe(Native Method)
      at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
      at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
      at java.nio.channels.Selector.open(Selector.java:209)
      at org.apache.hadoop.ipc.Server$Responder.<init>(Server.java:602)
      at org.apache.hadoop.ipc.Server.<init>(Server.java:1511)
      at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:408)
      at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:332)
      at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:292)
      at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:47)
      at org.apache.hadoop.ipc.RPC.getServer(RPC.java:382)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:421)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:512)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:282)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:264)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1575)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
      at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
      at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:630)
      at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:464)
      at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
      at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
      at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
      at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
      at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
      (A possible fix for this MiniDFSCluster case is sketched after the list below.)

    Users with the same issue

    5 unknown visitors, 1 time each
    4 more bugmates
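
For the Hadoop builds above, the trace shows MiniDFSCluster startup failing inside Selector.open, which means the test JVM itself has exhausted its descriptor budget, typically because earlier tests started clusters and never shut them down. A minimal sketch, assuming a JUnit 4 test shaped like TestFileConcurrentReader (the class and test names here are illustrative), that shuts the cluster down after every test so its RPC selectors and DataNode sockets are returned:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class ExampleClusterTest {
        private MiniDFSCluster cluster;

        @Before
        public void setUp() throws Exception {
            Configuration conf = new Configuration();
            cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
            cluster.waitActive();
        }

        @After
        public void tearDown() {
            // Shutting the cluster down closes the NameNode/DataNode RPC servers,
            // so the selectors and sockets they opened do not pile up across tests.
            if (cluster != null) {
                cluster.shutdown();
                cluster = null;
            }
        }

        @Test
        public void testReadWhileWriting() throws Exception {
            // exercise cluster.getFileSystem() here
        }
    }

If the failures persist even with clean shutdown, raising the build machine's descriptor limit (ulimit -n) is the usual complementary step.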