GitHub user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23055#discussion_r234084002
  
    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
    @@ -74,8 +74,13 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
       private val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
       // each python worker gets an equal part of the allocation. the worker pool will grow to the
       // number of concurrent tasks, which is determined by the number of cores in this executor.
    -  private val memoryMb = conf.get(PYSPARK_EXECUTOR_MEMORY)
    +  private val memoryMb = if (Utils.isWindows) {
    --- End diff ---
    
    I mean that it is brittle to rely on `resource` when the JVM has set the property. You handle the `ImportError`, but the JVM could still set the memory request, and the Python worker would break again when it tries to apply the limit.
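    To make that concrete, here is a simplified sketch of how the JVM side forwards the setting (the `PYSPARK_EXECUTOR_MEMORY_MB` env var name and the `envVars` map are my assumptions about `PythonRunner`'s internals, so check the actual code): nothing is conditioned on the worker's platform, so the worker can receive a limit it cannot apply.

    ```scala
    // Simplified sketch, not the actual PythonRunner code: the configured
    // memory is forwarded to the worker's environment whenever it is set,
    // with no platform check. Handling ImportError at `import resource`
    // alone still leaves the limit-applying code in the worker to fail.
    if (memoryMb.isDefined) {
      envVars.put("PYSPARK_EXECUTOR_MEMORY_MB", memoryMb.get.toString)
    }
    ```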
    
    I think that this should not be entirely disabled on Windows. Resource requests to YARN or other schedulers should still include this memory. The only feature that should be disabled is the resource limiting on the Python side.
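    As a minimal sketch of that split against the lines in this diff (`applyPythonRlimit` is a hypothetical name, and gating the env var on the JVM side is just one way to do it; guarding inside the Python worker would work too):

    ```scala
    // Keep the configured memory on every platform so that resource
    // requests to YARN or other schedulers still account for the workers.
    private val memoryMb = conf.get(PYSPARK_EXECUTOR_MEMORY)

    // Hypothetical flag: only the worker-side limiting is platform-specific,
    // because the `resource` module does not exist on Windows.
    private val applyPythonRlimit = !Utils.isWindows

    // Later, when building the worker's environment, forward the limit
    // only where the worker can actually apply it:
    if (applyPythonRlimit && memoryMb.isDefined) {
      envVars.put("PYSPARK_EXECUTOR_MEMORY_MB", memoryMb.get.toString)
    }
    ```

    That way the scheduler still reserves the memory; Windows only loses the enforcement, not the accounting.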


---
