> cluster.ClusterTaskSetManager: Loss was due to
> /tmp/spark-local-20140417145643-a055/3c/shuffle_1_218_1157 (Too many
> open files)
> ulimit -n tells me I can open 32000 files. Here's a plot of lsof output
> on a worker node during a failed .distinct() [inline plot omitted]; you
> can see tasks fail when Spark tries to open 32000 files.
> I never ran into this in 0.7.3. Is there a parameter I can set to tell
> Spark to use fewer than 32000 files?
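> For reference, something along these lines should cut the number of
> shuffle files (just a sketch: it assumes Spark 0.8.1+, where the
> spark.shuffle.consolidateFiles option is available, and that picking an
> explicit partition count for distinct() is acceptable; the input path
> is made up):
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     // Consolidate shuffle outputs per core instead of writing one
>     // file per (map task x reduce task) pair.
>     val conf = new SparkConf()
>       .setAppName("distinct-open-files")
>       .set("spark.shuffle.consolidateFiles", "true")
>     val sc = new SparkContext(conf)
>
>     // Fewer reduce partitions also means fewer shuffle files; without
>     // consolidation roughly numMapTasks * numReduceTasks files exist.
>     val deduped = sc.textFile("hdfs:///data/events").distinct(64)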
> On Mon, Mar 24, 2014 at 10:23 AM, Aaron Davidson <
>> Look up setting ulimit, though note the distinction between soft and hard
>> limits, and that updating your hard limit may require changing
>> /etc/security/limits.conf and restarting each worker.
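>> For example, assuming the workers run as a dedicated "spark" user,
>> entries like these in /etc/security/limits.conf raise both limits
>> (the user name and the values are placeholders):
>>
>>     spark  soft  nofile  32768
>>     spark  hard  nofile  65536
>>
>> You can check the current values with ulimit -Sn (soft) and
>> ulimit -Hn (hard) after logging back in as that user.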
>> On Mon, Mar 24, 2014 at 1:39 AM, Kane <
>>> Got a bit further; I think the out-of-memory error was caused by
>>> setting spark.spill to false. Now I have this error (is there an easy
>>> way to increase the file limit for Spark, cluster-wide?):
>>> (Too many open files)
>>>         at java.io.FileOutputStream.openAppend(Native Method)
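>>> For completeness, re-enabling spilling looks like this (assuming the
>>> property meant here is spark.shuffle.spill, which defaults to true):
>>>
>>>     // Allow shuffle aggregation to spill to disk rather than
>>>     // holding all map outputs in memory.
>>>     conf.set("spark.shuffle.spill", "true")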