@SK:
Make sure the new ulimit has actually taken effect, as Todd mentioned. You can
verify via ulimit -a. Also make sure the kernel's file-handle limits are set
appropriately in /etc/sysctl.conf (on Mac OS X, kern.maxfiles and
kern.maxfilesperproc).
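
A quick way to check the limit on the executors themselves (a minimal sketch,
assuming a live SparkContext named sc and bash available on the workers):

    // Run `ulimit -n` on each executor host and report the distinct results,
    // to confirm the raised limit actually reached the worker processes.
    import scala.sys.process._
    val limits = sc.parallelize(1 to 1000, 100)
      .map { _ =>
        val host = java.net.InetAddress.getLocalHost.getHostName
        (host, Seq("bash", "-c", "ulimit -n").!!.trim)
      }
      .distinct()
      .collect()
    limits.foreach { case (host, n) => println(s"$host: max open files = $n") }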

On Tue, Oct 7, 2014 at 3:57 PM, Lisonbee, Todd <todd.lison...@intel.com>
wrote:

>
> Are you sure the new ulimit has taken effect?
>
> How many cores are you using?  How many reducers?
>
>         "In general if a node in your cluster has C assigned cores and you
> run
>         a job with X reducers then Spark will open C*X files in parallel
> and
>         start writing. Shuffle consolidation will help decrease the total
>         number of files created but the number of file handles open at any
>         time doesn't change so it won't help the ulimit problem."
>
> Quoted from Patrick at:
>
> http://apache-spark-user-list.1001560.n3.nabble.com/quot-Too-many-open-files-quot-exception-on-reduceByKey-td2462.html
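>
> To make the arithmetic concrete (illustrative numbers, not from this
> thread): a node with C = 16 assigned cores running a job with X = 1000
> reducers holds 16 * 1000 = 16,000 shuffle file handles open at once, far
> beyond the common default ulimit of 1024. Cutting the reducer count
> shrinks C*X directly; a minimal sketch, assuming a pair RDD named rdd:
>
>     // Ask for 200 reduce partitions explicitly: C*X drops from 16,000
>     // to 16 * 200 = 3,200 concurrently open files per node.
>     val sums = rdd.reduceByKey(_ + _, 200)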
>
> Thanks,
>
> Todd
>
> -----Original Message-----
> From: SK [mailto:skrishna...@gmail.com]
> Sent: Tuesday, October 7, 2014 2:12 PM
> To: u...@spark.incubator.apache.org
> Subject: Re: Shuffle files
>
> - We set ulimit to 500000, but I still get the same "too many open files"
> warning.
>
> - I tried setting spark.shuffle.consolidateFiles to true, but that did not
> help either.
>
> I am using a Mesos cluster. Does Mesos have any limit on the number of open
> files?
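>
> For reference, by "consolidateFiles" I mean the Spark property, set along
> these lines (a sketch; Spark 1.x property name):
>
>     // Shuffle consolidation: fewer shuffle files created overall, though
>     // the number of simultaneously open handles stays the same.
>     val conf = new org.apache.spark.SparkConf()
>       .set("spark.shuffle.consolidateFiles", "true")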
>
> thanks
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
