java.io.FileNotFoundException: /tmp/spark-local-20140324074221-b8f1/01/temp_1ab674f9-4556-4239-9f21-688dfc9f17d2 (Too many open files)


Recommended solutions based on your search

Samebug tips

Try a new WebDriver if you get this error with Firefox while uploading files.

Solutions on the web

via nabble.com by Unknown author, 1 year ago
/tmp/spark-local-20140324074221-b8f1/01/temp_1ab674f9-4556-4239-9f21-688dfc9f17d2 (Too many open files)
via Sonatype JIRA by Peter Lynch, 1 year ago
/Volumes/D/app/nexus/2.3-SNAPSHOT/nexus-professional-2.3-20121113.114232-64-bundle/sonatype-work/nexus/timeline/persist/timeline.2012-11-14.14-10-09-0330.dat (Too many open files)
via areca by casteels, 1 year ago
C:\Documents and Settings\casteels\Areca\Workspace\null\log\paul.08-05-01.log (The system cannot find the path specified)
via Oracle Community by user305787, 1 year ago
/tmp/u01/appldev/EZIDEV/inst/apps/EZIDEV_ezi-dev-ebs-001/logs/appl/conc/out/o617091.out.out (No such file or directory)
via Coderanch by francis varkey, 1 year ago
/opt/test/logs/test01082008.log (Too many open files)
java.io.FileNotFoundException: /tmp/spark-local-20140324074221-b8f1/01/temp_1ab674f9-4556-4239-9f21-688dfc9f17d2 (Too many open files)
    at java.io.FileOutputStream.openAppend(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:192)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:113)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:174)
    at org.apache.spark.util.collection.ExternalAppendOnlyMap.spill(ExternalAppendOnlyMap.scala:191)
    at org.apache.spark.util.collection.ExternalAppendOnlyMap.insert(ExternalAppendOnlyMap.scala:141)
    at org.apache.spark.Aggregator.combineValuesByKey(Aggregator.scala:59)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$1.apply(PairRDDFunctions.scala:95)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$1.apply(PairRDDFunctions.scala:94)
    at org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:471)
    at org.apache.spark.rdd.RDD$$anonfun$3.apply(RDD.scala:471)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
    at org.apache.spark.scheduler.Task.run(Task.scala:53)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
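The "(Too many open files)" suffix means the executor process hit its per-process file-descriptor limit while the shuffle spill in ExternalAppendOnlyMap was opening spill files. A minimal shell sketch, assuming a Linux worker, for checking and raising that limit in the session that launches the executor (the 4096 value is an illustrative assumption, not a tuned recommendation):

```shell
#!/bin/sh
# Show the soft limit on open file descriptors; Spark executors
# inherit this from the environment that starts them.
ulimit -Sn

# Show the hard limit, the ceiling an unprivileged user can raise
# the soft limit to.
ulimit -Hn

# Try to raise the soft limit for this session (4096 is only an
# example value); this fails if it exceeds the hard limit.
ulimit -Sn 4096 2>/dev/null || echo "could not raise soft limit above hard limit"
ulimit -Sn
```

For a persistent fix, the nofile limit for the user running the executors is typically raised in /etc/security/limits.conf. On Spark versions matching this trace, setting spark.shuffle.consolidateFiles=true was also commonly suggested to reduce the number of shuffle files a task holds open, though shuffle defaults changed in later releases.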

Users with the same issue

6 times, 6 months ago
Unknown user
Once, 2 years ago
Unknown user
Once, 2 years ago
