.. to force spills to disk ...

That is pretty smart, as at some point you are inevitably going to run out
of memory.
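
On the spill side, the scratch directories Spark spills to are set via `spark.local.dir` (or the `SPARK_LOCAL_DIRS` environment variable in standalone mode), which takes a comma-separated list so spill and shuffle I/O can be spread across disks. A minimal sketch, assuming hypothetical mount points `/disk1` and `/disk2`:

```
# spark-defaults.conf (mount points are hypothetical)
# Comma-separated list: Spark distributes shuffle/spill files across these
spark.local.dir        /disk1/spark-tmp,/disk2/spark-tmp
# Small executor heap to force spills to disk, as in Luca's test
spark.executor.memory  2g
```

Note that in standalone mode `SPARK_LOCAL_DIRS` in spark-env.sh overrides `spark.local.dir`, and whether a second disk actually helps depends on the relative speed of the disks involved.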

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 18 April 2016 at 08:17, Luca Guerra <lgue...@bitbang.com> wrote:

> Hi Mich,
>
> I only have 32 cores. I have tested with 2 GB of memory per worker to
> force spills to disk. My application had 12 cores in total, with 3 cores
> per executor.
>
>
>
> Thank you very much.
>
> Luca
>
>
>
> *From:* Mich Talebzadeh [mailto:mich.talebza...@gmail.com]
> *Sent:* Friday, 15 April 2016 18:56
> *To:* Luca Guerra <lgue...@bitbang.com>
> *Cc:* user @spark <user@spark.apache.org>
> *Subject:* Re: How many disks for spark_local_dirs?
>
>
>
> Is that 32 CPUs or 32 cores?
>
>
>
> So in this configuration, assuming 32 cores, you have 1 worker with how
> much memory (after deducting memory for the OS etc.) and 32 cores?
>
>
>
> What is the ratio of memory per core in this case?
>
>
>
> HTH
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
>
>
> On 15 April 2016 at 16:15, luca_guerra <lgue...@bitbang.com> wrote:
>
> Hi,
> I'm looking for a way to improve my Spark cluster's performance. I have
> read in http://spark.apache.org/docs/latest/hardware-provisioning.html:
> "We recommend having 4-8 disks per node". I have tried with both one and
> two disks, but I have seen that with 2 disks the execution time doubles.
> Any explanation for this?
>
> This is my configuration:
> 1 machine with 140 GB of RAM, 2 disks and 32 CPUs (I know that is an
> unusual configuration), and on this I have a standalone Spark cluster with
> 1 worker.
>
> Thank you very much for the help.
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/How-many-disks-for-spark-local-dirs-tp26790.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
>
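
To make the memory-per-core question above concrete: with the configuration Luca describes (140 GB of RAM, 32 cores, 1 worker), a rough back-of-the-envelope calculation might look like the following. The 8 GB reserved for the OS and daemons is an illustrative assumption, not a figure from the thread.

```python
# Rough memory-per-core estimate for the configuration in the thread.
# The 8 GB OS/daemon reservation is an assumed figure for illustration.
total_ram_gb = 140
os_reserved_gb = 8          # assumed headroom for OS and daemons
cores = 32

usable_gb = total_ram_gb - os_reserved_gb
gb_per_core = usable_gb / cores
print(f"~{gb_per_core:.1f} GB of memory per core")  # ~4.1 GB of memory per core
```

With roughly 4 GB per core available, capping each worker at 2 GB (as in Luca's test) deliberately starves the executors and forces spills to local disk.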
