Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up your debugging when you paste the entire stack trace, including the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via hadoop-hdfs-dev by Apache Hudson Server, 1 year ago
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/current/VERSION (Too many open files)
via redhat.com by Unknown author, 1 year ago
/tmp/path/ticketCache.data (Too many open files)
via Google Groups by Wesley Alan Wright, 8 months ago
/usr/local/dspace/search/segments (Too many open files)
via Jenkins JIRA by giuliano carlini, 2 years ago
via Jenkins JIRA by giuliano carlini, 1 year ago
java.io.FileNotFoundException: /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build/test/data/dfs/data/data1/current/VERSION (Too many open files)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.write(Storage.java:265)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.write(Storage.java:259)
	at org.apache.hadoop.hdfs.server.common.Storage.writeAll(Storage.java:800)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:708)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1464)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:644)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:464)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:186)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:178)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
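
The "(Too many open files)" suffix on this FileNotFoundException means the process has exhausted its file-descriptor limit, most often because files, sockets, or streams are opened repeatedly and never closed. The sketch below is not taken from the Hadoop code in the trace; the class name, file, and loop count are hypothetical, and it only illustrates the general leak pattern and the try-with-resources fix:

	import java.io.File;
	import java.io.IOException;
	import java.io.RandomAccessFile;

	public class FileDescriptorLeakSketch {

	    // Anti-pattern: each RandomAccessFile holds an OS file descriptor until
	    // close() is called; opening many in a loop without closing eventually
	    // fails with "(Too many open files)".
	    static void leaky(File file, int iterations) throws IOException {
	        for (int i = 0; i < iterations; i++) {
	            RandomAccessFile raf = new RandomAccessFile(file, "r");
	            raf.length(); // descriptor is never released here
	        }
	    }

	    // Fix: try-with-resources closes the descriptor at the end of each iteration.
	    static void safe(File file, int iterations) throws IOException {
	        for (int i = 0; i < iterations; i++) {
	            try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
	                raf.length();
	            }
	        }
	    }

	    public static void main(String[] args) throws IOException {
	        File tmp = File.createTempFile("fd-demo", ".dat"); // hypothetical test file
	        tmp.deleteOnExit();
	        safe(tmp, 100_000); // runs fine; swapping in leaky(...) can reproduce the error
	    }
	}

If the code genuinely needs many descriptors open at once (as a DataNode under heavy test load can), the other common remedy is raising the per-process limit at the OS level, e.g. ulimit -n on Linux.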