GitHub user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23055#discussion_r238048453
  
    --- Diff: docs/configuration.md ---
    @@ -190,6 +190,8 @@ of the most common options to set are:
     and it is up to the application to avoid exceeding the overhead memory space
     shared with other non-JVM processes. When PySpark is run in YARN or Kubernetes, this memory
     is added to executor resource requests.
    +
    +    NOTE: This configuration is not supported on Windows.
    --- End diff --
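
    For reference, the entry above appears to document the spark.executor.pyspark.memory setting (the key name is assumed here, since the hunk does not show it). Like any other executor config it would be supplied via spark-submit, e.g.:

        spark-submit --conf spark.executor.pyspark.memory=2g app.py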
    
    I would say it's simply supported or unsupported, rather than making the
    note more complicated. I'm not 100% sure how it behaves on Windows with
    other deploy modes. For instance, the extra memory would probably still be
    allocated via YARN, but it looks like no one has tested that before.
    Technically, it does not work in local mode either.
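
    To illustrate why the note matters: below is a minimal sketch of the
    enforcement mechanism, assuming (as in PySpark's worker) that the cap is
    applied through Python's POSIX-only resource module. Names and values are
    illustrative, not Spark's actual code:

        # Sketch: why a PySpark-style worker memory cap is a no-op on Windows.
        # The stdlib 'resource' module is POSIX-only, so the import fails there
        # and the address-space limit simply cannot be applied.
        try:
            import resource
            has_resource_module = True
        except ImportError:  # e.g. on Windows
            has_resource_module = False

        def try_set_memory_limit(limit_bytes):
            """Best-effort address-space cap; returns False where unsupported."""
            if not has_resource_module:
                return False
            _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
            resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))
            return True

        if __name__ == "__main__":
            # Hypothetical 2g cap, mirroring --conf spark.executor.pyspark.memory=2g
            print("limit applied:", try_set_memory_limit(2 * 1024 ** 3))

    On a POSIX executor the worker can apply the cap itself; on Windows the
    import fails, which is presumably why a blanket "not supported on Windows"
    note is the simplest accurate statement.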

