What you said makes sense. However, even when I proportionately reduce the heap
size, the problem persists. For instance, if I use a 160 GB heap with 48 cores
and an 80 GB heap with 24 cores, the 24-core case still performs better than
the 48-core case (although worse than 24 cores with a 160 GB heap). It is only
when I use 40 GB with 24 cores that the 48-core case performs better than the
24-core case.
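Just for reference, here is the heap-per-core arithmetic behind those numbers;
this ratio is what I was trying to hold constant across the runs:

    // Back-of-the-envelope heap per core for each run (GB per core).
    val gbPerCore48      = 160.0 / 48  // ~3.33 GB/core
    val gbPerCore24Big   =  80.0 / 24  // ~3.33 GB/core, same ratio as the 48-core run
    val gbPerCore24Small =  40.0 / 24  // ~1.67 GB/core, only here does 48 cores win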

Does this mean that there is no relation between thread count and heap size?
If so, could you please suggest how I can calculate heap sizes for a fair
comparison?

My real trouble is when I compare the application's performance when run with
(1) a single node with 48 cores and a 160 GB heap, and (2) 8 nodes with 6 cores
and a 20 GB heap each. In this comparison, (2) performs far better than (1),
and I don't understand why.
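For clarity, the two setups correspond roughly to the following executor
settings (spark.executor.* are the standard property names; the exact launch
mechanics depend on the cluster manager, so please treat this as a sketch of
my configuration rather than the exact commands):

    import org.apache.spark.SparkConf

    // Setup (1): one large JVM per node -- 48 cores, 160 GB heap.
    val confSingle = new SparkConf()
      .set("spark.executor.cores", "48")
      .set("spark.executor.memory", "160g")

    // Setup (2): eight smaller JVMs -- 6 cores and a 20 GB heap each.
    val confMulti = new SparkConf()
      .set("spark.executor.cores", "6")
      .set("spark.executor.memory", "20g")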


Lokesh


