GitHub user NiharS opened a pull request:

    https://github.com/apache/spark/pull/21885

    [SPARK-24926] Ensure numCores is used consistently in all netty configurations

    ## What changes were proposed in this pull request?
    
    Netty could ignore user-provided configurations. In particular,
    spark.driver.cores was ignored when determining the number of cores
    available to netty, which usually just defaulted to
    Runtime.availableProcessors(). In the transport configurations, the
    number of threads is based directly on how many cores the system
    believes it has available, and in yarn cluster mode this generally
    overshoots the user-requested value. This change plumbs the
    user-specified core count through to the netty transport
    configurations so the thread pools are sized accordingly.
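
    To illustrate the intended behavior, here is a minimal, self-contained
    sketch (not the actual Spark source; the names NettyThreadSizing,
    MaxDefaultNettyThreads, and defaultNumThreads are illustrative) of how
    a transport layer can prefer a plumbed-through core count over what
    the JVM reports:

        object NettyThreadSizing {
          // Illustrative cap on the default netty thread pool size.
          private val MaxDefaultNettyThreads = 8

          // `userCores` stands in for a value plumbed down from
          // spark.driver.cores; 0 means the user did not set it.
          def defaultNumThreads(userCores: Int): Int = {
            // Prefer the user-configured core count; otherwise fall back to
            // what the JVM reports, which in yarn cluster mode can reflect
            // the whole host rather than the container the user asked for.
            val availableCores =
              if (userCores > 0) userCores else Runtime.getRuntime.availableProcessors()
            math.min(availableCores, MaxDefaultNettyThreads)
          }

          def main(args: Array[String]): Unit = {
            println(defaultNumThreads(0)) // falls back to Runtime.availableProcessors()
            println(defaultNumThreads(2)) // honors the user-provided core count
          }
        }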
    
    ## How was this patch tested?
    
    As this is mostly a configuration change, it was tested manually by
    setting spark-submit confs and verifying that netty started the
    expected number of threads (see the example below).
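
    For example (a hypothetical verification run, not the exact commands
    used; the application jar, driver pid, and thread-name pattern are
    placeholders), the core count can be pinned and the resulting netty
    threads counted on the driver:

        $ spark-submit --master yarn --deploy-mode cluster \
            --conf spark.driver.cores=2 <application-jar>
        $ jstack <driver-pid> | grep -c 'server'   # thread name patterns vary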
    
    The patch also passes the scalastyle checks from dev/run-tests.
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/NiharS/spark usableCores

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/21885.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #21885
    
----
commit 6967dc6cbf064cb3ee046706ef09605b64ddb584
Author: Nihar Sheth <niharrsheth@...>
Date:   2018-07-26T17:20:52Z

    Properly plumb numUsableCores from spark.driver.cores

----


---
