Hi Gurus,

The parameter spark.yarn.executor.memoryOverhead is documented as follows:

spark.yarn.executor.memoryOverhead
Default: executorMemory * 0.10, with minimum of 384
Meaning: The amount of off-heap memory (in megabytes) to be allocated per
executor. This is memory that accounts for things like VM overheads, interned
strings, other native overheads, etc. This tends to grow with the executor
size (typically 6-10%).
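For reference, this is roughly how I am passing these values from application
code (just a minimal sketch in Scala; the app name and the 10g / 3072 figures
are placeholders matching the scenario in my questions below):

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical numbers: a 10 GB executor heap with ~30% (3072 MB) overhead.
    val conf = new SparkConf()
      .setAppName("memory-overhead-test")                // placeholder app name
      .set("spark.executor.memory", "10g")               // on-heap executor memory
      .set("spark.yarn.executor.memoryOverhead", "3072") // off-heap overhead, in MB

    val sc = new SparkContext(conf)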

So does that mean that for a 10 GB executor this should ideally be set to
~10%, i.e. about 1 GB?
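(If I read the default formula right, that case would already get
max(10240 * 0.10, 384) = 1024 MB, i.e. roughly 1 GB, out of the box.)
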
What would happen if we set it higher, say to 30% (~3 GB)?
What exactly is this memory used for (as opposed to the memory allocated to
the executor itself)?

Thanking you
