[ https://issues.apache.org/jira/browse/SPARK-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-10572:
---------------------------------
    Labels: bulk-closed  (was: )

> Investigate the contention between tasks in the same executor
> --------------------------------------------------------------
>
>                 Key: SPARK-10572
>                 URL: https://issues.apache.org/jira/browse/SPARK-10572
>             Project: Spark
>          Issue Type: Task
>          Components: Scheduler, Spark Core
>            Reporter: Davies Liu
>            Priority: Major
>              Labels: bulk-closed
>
> According to the benchmark results from Jesse F Chen, it is surprising to see such a large difference (4X) depending on the number of executors; we should investigate the reason.
> ```
> > Just be curious how the difference would be if you use 20 executors
> > and 20G memory for each executor..
> So I tried the following combinations:
>
> (GB x # executors)   (query response time in secs)
> 20 x 20              415
> 10 x 40              230
>  5 x 80              141
>  4 x 100             128
>  2 x 200             104
>
> CPU utilization is high, so spreading more JVMs onto more vCores helps in this case.
> For other workloads where memory utilization outweighs CPU, I can see larger JVM sizes being more beneficial. It's for sure case-by-case.
> The codegen and scheduler overheads seem to be negligible.
> ```
> https://www.mail-archive.com/user@spark.apache.org/msg36486.html

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
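A note on the benchmark above: all five configurations hold total cluster memory constant at 400 GB and vary only how it is split across executor JVMs, so the 4X speedup isolates the effect of JVM count (per-executor contention) from total memory. A minimal sketch in plain Python, using only the numbers copied from the quoted table, makes that invariant explicit:

```python
# Benchmark rows from the quoted mail:
# (GB per executor, number of executors, query response time in seconds).
configs = [
    (20, 20, 415),
    (10, 40, 230),
    (5, 80, 141),
    (4, 100, 128),
    (2, 200, 104),
]

# Every run uses the same 400 GB total, so only JVM count/size varies.
assert all(gb * n == 400 for gb, n, _ in configs)

for gb, n, secs in configs:
    speedup = configs[0][2] / secs  # relative to the 20G x 20 baseline
    print(f"{gb:>2}G x {n:>3} executors: {secs:>3}s ({speedup:.1f}x vs baseline)")
```

The last row (2G x 200) is the roughly 4X difference the issue refers to. The hypothetical `speedup` column is derived here for illustration; it does not appear in the original mail.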