Solutions on the web

via nabble.com by Unknown author, 1 year ago
cluster.ClusterTaskSetManager: Loss was due to java.io.FileNotFoundException
> java.io.FileNotFoundException:
> /tmp/spark-local-20140417145643-a055/3c/shuffle_1_218_1157 (Too many open files)
>
> ulimit -n tells me I can open 32000 files. Here's a plot of lsof on a
> worker node during a failed .distinct(): [plot omitted] you can see
> tasks fail when Spark tries to open 32000 files.
>
> I never ran into this in 0.7.3. Is there a parameter I can set to tell
> Spark to use less than 32000 files?
>
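The measurement behind the lsof plot mentioned above can be reproduced by counting a process's open file descriptors. A minimal sketch, assuming a Linux worker node; the shell's own PID (`$$`) is used here only so the snippet runs anywhere — on a real cluster you would substitute the Spark worker's PID:

```shell
# Count open file descriptors for a process via /proc.
# $$ is this shell's own PID, used as a stand-in; on a worker node you
# would find the Spark worker's PID first (e.g. with ps or pgrep).
pid=$$
ls /proc/"$pid"/fd | wc -l

# lsof gives the same count with more detail (file names and types):
# lsof -p "$pid" | wc -l
```

Polling this in a loop during a shuffle-heavy stage is enough to see whether the count is approaching the ulimit ceiling.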
> On Mon, Mar 24, 2014 at 10:23 AM, Aaron Davidson wrote:
>> Look up setting ulimit, though note the distinction between soft and hard
>> limits, and that updating your hard limit may require changing
>> /etc/security/limits.conf and restarting each worker.
>>
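The soft/hard distinction Aaron raises can be sketched concretely. Assumptions: a Linux worker, and the `sparkuser` account name and limit values below are purely illustrative:

```shell
# Show the current limits for open files:
ulimit -Sn   # soft limit: what processes actually get
ulimit -Hn   # hard limit: the ceiling a non-root user may raise the soft limit to

# A non-root user can raise the soft limit only up to the hard limit:
#   ulimit -n 32768
#
# Raising the hard limit cluster-wide means editing /etc/security/limits.conf
# on each worker and restarting it, e.g. (illustrative account and values):
#   sparkuser  soft  nofile  32768
#   sparkuser  hard  nofile  65536
```

Note that limits.conf changes apply to new login sessions, so an already-running worker keeps its old limits until restarted — hence the "restarting each worker" caveat.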
>>
>> On Mon, Mar 24, 2014 at 1:39 AM, Kane wrote:
>>>
>>> Got a bit further, I think the out of memory error was caused by setting
>>> spark.spill to false. Now I have this error; is there an easy way to
>>> increase the file limit for Spark, cluster-wide?
>>>
>>> java.io.FileNotFoundException:
>>> /tmp/spark-local-20140324074221-b8f1/01/temp_1ab674f9-4556-4239-9f21-688dfc9f17d2 (Too many open files)
>>>     at java.io.FileOutputStream.openAppend(Native Method)
>>>     at java.io.FileOutputStream.<init>(FileOutputStream.java:192)