Hello! 
My name is Valerii. I have noticed strange memory behaviour of Spark's
executors on my cluster. The cluster runs in standalone mode with 3 workers,
and the application runs in cluster mode.
From the topology configuration:
spark.executor.memory              1536m
I checked heap usage via JVisualVM:
http://joxi.ru/Q2KqBMdSvYpDrj
and via htop:
http://joxi.ru/Vm63RWeCvG6L2Z

I have 2 questions regarding Spark's executors memory usage:
1. Why does the Max Heap Size change while the executor is running?
2. Why is the memory usage reported by htop greater than the executor's heap size?
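For reference, here is a minimal sketch of how I understand the numbers involved (the class name `HeapProbe` is my own, and this is just a plain-JVM illustration, not Spark-specific): the JVM distinguishes the `-Xmx` ceiling from the heap it has actually committed, and the committed size can grow or shrink over time.

```java
// Minimal sketch: print the JVM heap figures that tools like JVisualVM display.
// Note: htop shows the whole process RSS, which also includes off-heap memory
// (metaspace, thread stacks, direct buffers), so it can exceed all of these.
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory(): the -Xmx ceiling (fixed for the life of the JVM).
        // totalMemory(): currently committed heap; the JVM resizes this
        // between -Xms and -Xmx as the workload changes.
        System.out.println("max heap (bytes):       " + rt.maxMemory());
        System.out.println("committed heap (bytes): " + rt.totalMemory());
        System.out.println("free in committed:      " + rt.freeMemory());
    }
}
```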

Thank you!




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Executor-Memory-Usage-tp23083.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
