Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste your entire stack trace, including the exception message. Try a sample exception.

Recommended solutions based on your search

Solutions on the web

via hadoop-hdfs-dev by Apache Hudson Server, 1 year ago
via gigaspaces.org by Unknown author, 2 years ago
via Stack Overflow by radai, 2 years ago
via Google Groups by Frank Olaf Sem-jacobsen, 2 years ago
via Google Groups by Kim Trang Le, 1 year ago
via Google Groups by Kim Trang Le, 2 years ago
java.io.IOException: Too many open files
	at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
	at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:68)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:52)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at java.nio.channels.Selector.open(Selector.java:209)
	at org.apache.hadoop.ipc.Server$Responder.<init>(Server.java:602)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:1510)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:408)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:332)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:292)
	at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:47)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:382)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:421)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:512)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:282)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:264)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1575)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:678)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:483)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:203)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:78)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:195)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:315)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.runTestUnfinishedBlockCRCError(TestFileConcurrentReader.java:302)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.__CLR3_0_2u5mf5trj2(TestFileConcurrentReader.java:275)
	at org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorTransferToVerySmallWrite(TestFileConcurrentReader.java:274)
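The trace fails inside Selector.open(): "Too many open files" means the process has exhausted its file-descriptor limit, so the JVM cannot create the epoll descriptor the selector needs. A minimal sketch (not the Hadoop code itself) of the underlying pattern, showing that each selector holds native descriptors until it is explicitly closed:

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class SelectorLeakDemo {
    public static void main(String[] args) throws IOException {
        // Each Selector.open() allocates native file descriptors (on Linux,
        // an epoll instance plus a wakeup pipe). If selectors or sockets are
        // leaked, the process eventually hits its descriptor limit and
        // Selector.open() throws IOException: Too many open files.
        try (Selector selector = Selector.open()) {
            System.out.println("selector open: " + selector.isOpen());
        } // try-with-resources closes the selector, releasing its descriptors
        System.out.println("done");
    }
}
```

The usual remedies are closing leaked selectors/sockets/streams, or raising the per-process limit (e.g. `ulimit -n` on Linux, or limits.conf) before starting the JVM.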