> Date: Wed, 23 Sep 2015 15:53:54 -0700
> Subject: Debugging too many files open exception issue in Spark shuffle
> From: dbt...@dbtsai.com
> To: user@spark.apache.org
> Hi,
>
> Recently, we ran into this notorious exception while doing a large
> shuffle on Mesos at Netflix. We ensured that `ulimit -n` was set to a
> very large number, but still hit the issue.
>
> It turns out that Mesos overrides `ulimit -n` to a small number,
> causing the problem. It was very non-trivial to debug.
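One quick way to confirm which limit a process actually got (rather than what a login shell reports) is to query its own rlimit. This is a sketch using Python's stdlib `resource` module; the `sc` mentioned in the comment is an assumed, already-created PySpark SparkContext, not something from the thread above:

```python
import resource

def fd_limit(_=None):
    # Report the (soft, hard) open-files limit of the process this runs in.
    # `ulimit -n` in a login shell does not necessarily match what Mesos
    # (or any other resource manager) hands the executor process.
    return resource.getrlimit(resource.RLIMIT_NOFILE)

# On the driver or a local machine:
print("open-files limit (soft, hard):", fd_limit())

# Run inside a Spark job, the same check reports the *executor's* limit,
# which is the one that matters for shuffle file handles:
#   sc.parallelize([0], 1).map(fd_limit).collect()
```

The same information is visible from outside the process in `/proc/<pid>/limits` on Linux, which is handy when you cannot attach code to the executor.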
That is interesting.

I don't have any Mesos experience, but I'd just like to know why it does so.
Yong