[ https://issues.apache.org/jira/browse/SPARK-21408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Marcelo Vanzin resolved SPARK-21408.
------------------------------------
    Resolution: Fixed
      Assignee: Marcelo Vanzin
 Fix Version/s: 2.3.0

> Default RPC dispatcher thread pool size too large for small executors
> ---------------------------------------------------------------------
>
>                 Key: SPARK-21408
>                 URL: https://issues.apache.org/jira/browse/SPARK-21408
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Marcelo Vanzin
>            Assignee: Marcelo Vanzin
>            Priority: Minor
>             Fix For: 2.3.0
>
>
> This is the code that sizes the RPC dispatcher thread pool:
> {noformat}
>     private val threadpool: ThreadPoolExecutor = {
>       val numThreads = nettyEnv.conf.getInt("spark.rpc.netty.dispatcher.numThreads",
>         math.max(2, Runtime.getRuntime.availableProcessors()))
>       val pool = ThreadUtils.newDaemonFixedThreadPool(numThreads, "dispatcher-event-loop")
> {noformat}
> That is based on the number of available cores on the host, instead of the
> number of cores the executor was told to use. This means that if you start an
> executor with a single "core" on a host with 64 CPUs, you'll get 64 threads,
> which is overkill.
> Using the allocated cores plus a lower bound is probably a better approach.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
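The sizing change the issue proposes could be sketched roughly as follows. This is a hypothetical illustration, not the actual patch that fixed SPARK-21408: the method and parameter names (`sizeDispatcherPool`, `allocatedCores`) are made up for this sketch, and only the config key `spark.rpc.netty.dispatcher.numThreads` comes from the real code quoted above.

```scala
// Hypothetical sketch of the proposed sizing: honor an explicit
// spark.rpc.netty.dispatcher.numThreads setting if present, otherwise
// size the pool from the cores allocated to the executor (with a lower
// bound of 2) rather than from Runtime.getRuntime.availableProcessors().
object DispatcherSizing {
  def sizeDispatcherPool(allocatedCores: Int, configured: Option[Int]): Int =
    configured.getOrElse(math.max(2, allocatedCores))

  def main(args: Array[String]): Unit = {
    // Executor allocated 1 core on a 64-CPU host: 2 threads, not 64.
    println(sizeDispatcherPool(allocatedCores = 1, configured = None))
    // An explicit config value always wins.
    println(sizeDispatcherPool(allocatedCores = 1, configured = Some(8)))
    // Larger executors still get one thread per allocated core.
    println(sizeDispatcherPool(allocatedCores = 16, configured = None))
  }
}
```

With this approach the pool size tracks what the executor was actually granted, so a 1-core executor no longer spawns one dispatcher thread per physical CPU on the host.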