Hi folks,

I couldn't find much literature on this, so I figured I would ask here.

Does anyone have experience tuning the memory settings and update interval
of the Spark History Server?
Say I have 500 applications whose event logs are roughly 0.5 GB each, with a
*spark.history.fs.update.interval* of 400s.
Is there a direct correlation between those numbers and the memory the
History Server needs, i.e. a rule of thumb for picking an optimal heap size?
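
For reference, these are the knobs I've been looking at; the values below
are illustrative guesses rather than anything I've validated:

    # spark-env.sh -- heap size for the History Server daemon
    export SPARK_DAEMON_MEMORY=4g

    # spark-defaults.conf
    spark.history.fs.update.interval     400s
    # how many application UIs are kept in the in-memory cache
    spark.history.retainedApplications   50

In particular, I'm unsure how SPARK_DAEMON_MEMORY should scale with the
number and size of the event logs.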

I'd appreciate advice from anyone who has tuned the History Server to
render a large number of applications.

Thanks.
-- 
Regards,
Neelesh S. Salian
