This is a bug in some versions of the Arduino IDE. Try updating to version 1.6.12 or later.


  • Hadoop-Hdfs-trunk - Build # 554 - Still Failing
    by Apache Hudson Server
  • Hadoop-Hdfs-trunk - Build # 555 - Still Failing
    by Apache Hudson Server
  • [JENKINS-1921] Too many open files - Jenkins JIRA
    author unknown
  • Too many open files with svn
    via GitHub by dhireng
  • Too many open files
    via areca by nimdae
  • Hadoop-Hdfs-22-branch - Build # 14 - Still Failing
    by Apache Hudson Server
  • java IOEXCEPTION:too many open files
    by tsaowe cao
    java.lang.RuntimeException: Error while running command to get file permissions : Cannot run program "/bin/ls": error=24, Too many open files
        at java.lang.ProcessBuilder.start()
        at org.apache.hadoop.util.Shell.runCommand()
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute()
        at org.apache.hadoop.util.Shell.execCommand()
        at org.apache.hadoop.util.Shell.execCommand()
        at org.apache.hadoop.fs.RawLocalFileSystem.execCommand()
        at org.apache.hadoop.fs.RawLocalFileSystem.access$100()
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo()
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission()
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck()
        at org.apache.hadoop.util.DiskChecker.checkDir()
        at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs()
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance()
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode()
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode()
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes()
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster()
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>()
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>()
        at org.apache.hadoop.hdfs.MiniDFSCluster$
        at org.apache.hadoop.hdfs.TestFileConcurrentReader.init()
        at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp()
    Caused by: error=24, Too many open files
        at java.lang.UNIXProcess.<init>()
        at java.lang.ProcessImpl.start()
        at java.lang.ProcessBuilder.start()
        at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo()
        ... 14 more
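    error=24 means the process hit its limit on open file descriptors, so even spawning "/bin/ls" fails. A common first step on Linux/macOS is to check the per-process limit and raise the soft limit toward the hard ceiling; this is a minimal shell sketch, not specific to Hadoop, and the guard around "unlimited" is just defensive:

    ```shell
    # Show the current soft limit on open file descriptors for this shell
    ulimit -n

    # Show the hard limit (the ceiling the soft limit can be raised to)
    ulimit -Hn

    # Raise the soft limit to the hard limit for this session only;
    # skip when the hard limit reports "unlimited", since some systems
    # reject setting a numeric soft limit to that token for non-root users
    hard=$(ulimit -Hn)
    if [ "$hard" != "unlimited" ]; then
        ulimit -n "$hard"
    fi
    ```

    This only affects the current shell and its children. For a persistent change on Linux, the usual place is /etc/security/limits.conf (or a systemd unit's LimitNOFILE= setting for services).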
