GitHub user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23055#discussion_r235082191
  
    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
    @@ -74,8 +74,13 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
       private val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
       // each python worker gets an equal part of the allocation. the worker pool will grow to the
       // number of concurrent tasks, which is determined by the number of cores in this executor.
    -  private val memoryMb = conf.get(PYSPARK_EXECUTOR_MEMORY)
    +  private val memoryMb = if (Utils.isWindows) {
    --- End diff ---
    
    There is no configuration change needed on the JVM side.
    
    The JVM should communicate to Python how much memory it has been allocated. If Python can limit itself to that amount, then that's fine. And if the JVM doesn't expect Python to be able to enforce the limit, why would it not still tell Python how much memory it was allocated?
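    
    For illustration, a minimal, self-contained sketch of the hand-off being argued for, using hypothetical names (MemoryHandoffSketch, configuredMemoryMb, workerEnv; the env var name mirrors what Spark passes to the worker for this purpose): the JVM side forwards whatever limit is configured, with no OS check, and the worker decides whether it can enforce it (e.g. resource.setrlimit is unavailable on Windows). In PythonRunner the equivalent target would be the environment handed to the Python worker process.
    
        // Hypothetical sketch; not the actual PythonRunner code.
        object MemoryHandoffSketch {
          def main(args: Array[String]): Unit = {
            // What spark.executor.pyspark.memory resolves to, in MiB, if set.
            val configuredMemoryMb: Option[Long] = Some(512L)
    
            // Stand-in for the env map handed to the Python worker process.
            val workerEnv = scala.collection.mutable.Map[String, String]()
    
            // Always tell Python how much memory it was allocated; whether the
            // worker can enforce the limit is Python's decision, not the JVM's.
            configuredMemoryMb.foreach { mb =>
              workerEnv.put("PYSPARK_EXECUTOR_MEMORY_MB", mb.toString)
            }
            println(workerEnv)
          }
        }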
    
    There is no benefit to making this change that I can see.

