Ah, I see, it was SPARK-2711 (and PR 1707). In that case, it's possible
that the patch causes more spilling, and so more files are being opened
on the filesystem. I would try increasing the ulimit.
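As a sketch, on Linux the open-file limit can be inspected and raised roughly like this (the limit value and the username are illustrative, not specific to this cluster):

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -n

# Raise the soft limit for this session (illustrative value; the hard
# limit must permit it, so ignore failure rather than abort)
ulimit -n 65536 2>/dev/null || true

# Confirm the limit now in effect
ulimit -n

# To make a higher limit persistent for the user running the executors,
# entries like these (hypothetical username "spark") can be added to
# /etc/security/limits.conf:
#   spark  soft  nofile  65536
#   spark  hard  nofile  65536
```

Note that the executors inherit the limit of the process that launches them, so the change has to be in effect where the worker daemons start, not just in an interactive shell.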

How much memory do your executors have?

- Patrick

On Sun, Sep 21, 2014 at 10:29 PM, Patrick Wendell <pwend...@gmail.com> wrote:
> Hey, the numbers you mentioned don't quite line up - did you mean PR 2711?
>
> On Sun, Sep 21, 2014 at 8:45 PM, Reynold Xin <r...@databricks.com> wrote:
>> It seems like you just need to raise the ulimit?
>>
>>
>> On Sun, Sep 21, 2014 at 8:41 PM, Nishkam Ravi <nr...@cloudera.com> wrote:
>>
>>> Recently upgraded to 1.1.0. Saw a bunch of fetch failures for one of the
>>> workloads. Tried tracing the problem through change set analysis. Looks
>>> like the offending commit is 4fde28c from Aug 4th for PR1707. Please see
>>> SPARK-3633 for more details.
>>>
>>> Thanks,
>>> Nishkam
>>>

