Ideally you should use less: around 75% is a good target, leaving enough
scratch space for shuffle writes and system processes.
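
On an 8 GB worker that works out to roughly 6 GB for the executor. A minimal
sketch of what that looks like in a standalone Scala app (the app name is
just a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}

    // Leave ~25% of the 8 GB machine for shuffle scratch space and
    // system processes, so give the executor roughly 6 GB.
    val conf = new SparkConf()
      .setAppName("MemoryExample")  // placeholder name
      .set("spark.executor.memory", "6g")
    val sc = new SparkContext(conf)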

Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>



On Fri, May 23, 2014 at 1:41 AM, Andrew Or <and...@databricks.com> wrote:

> Hi Ibrahim,
>
> If your worker machines only have 8GB of memory, then launching executors
> with all of that memory will leave no room for system processes. There is
> no hard guideline, but I usually leave around 1GB free just to be safe, so
>
> conf.set("spark.executor.memory", "7g")
>
> Andrew
>
>
> 2014-05-22 7:23 GMT-07:00 İbrahim Rıza HALLAÇ <ibrahimrizahal...@live.com>:
>
>> In my situation each slave has 8 GB of memory. I want to use the maximum
>> memory that I can: .set("spark.executor.memory", "?g")
>> How can I determine the amount of memory I should set? It fails when I
>> set it to 8 GB.
>>
>
>
