
  • Apache Spark User List - distinct on huge dataset
    by unknown author

    Kane wrote (Mon, Mar 24, 2014, 1:39 AM):
    > Got a bit further; I think the out-of-memory error was caused by setting
    > spark.spill to false. Now I have this error. Is there an easy way to
    > increase the file limit for Spark, cluster-wide?
    >
    >   /tmp/spark-local-20140324074221-b8f1/01/temp_1ab674f9-4556-4239-9f21-688dfc9f17d2
    >   (Too many open files) [stack trace truncated]

    Aaron Davidson replied (Mon, Mar 24, 2014, 10:23 AM):
    > Look up setting ulimit, though note the distinction between soft and hard
    > limits, and that updating your hard limit may require changing
    > /etc/security/limits.conf and restarting each worker.

    A later reply in the thread reported the same failure:
    > cluster.ClusterTaskSetManager: Loss was due to
    >   /tmp/spark-local-20140417145643-a055/3c/shuffle_1_218_1157
    >   (Too many open files)
    >
    > ulimit -n tells me I can open 32000 files. Here's a plot of lsof on a
    > worker node during a failed .distinct(): you can see tasks fail when
    > Spark tries to open 32000 files. I never ran into this in 0.7.3. Is
    > there a parameter I can set to tell Spark to use fewer than 32000 files?
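    The advice in the thread combines two fixes: raise the OS open-file limit on every worker, and reduce how many shuffle files Spark opens at once. A hedged sketch of both (the `100000` limit value is an arbitrary example, and `spark.shuffle.consolidateFiles` is an assumption based on Spark 0.8+ configuration, not something stated in the thread):

    ```shell
    # Inspect the current per-process open-file limits on a worker.
    ulimit -Sn   # soft limit (what processes actually get)
    ulimit -Hn   # hard limit (ceiling the soft limit can be raised to)

    # Raising the hard limit cluster-wide needs root: add lines like these to
    # /etc/security/limits.conf on every worker, then restart the workers.
    # ("spark" is a hypothetical worker user; 100000 is an example value.)
    #
    #   spark  soft  nofile  100000
    #   spark  hard  nofile  100000

    # Spark-side alternative (assumed, for Spark 0.8+): consolidate map-side
    # shuffle outputs so fewer files are open simultaneously, e.g. in
    # spark-defaults or via SparkConf:
    #
    #   spark.shuffle.consolidateFiles  true
    ```

    Raising the limit alone may only postpone the failure on a large `.distinct()`, since the number of shuffle files grows with the number of map and reduce tasks; reducing the file count attacks the cause rather than the symptom.
    
    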