Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21977#discussion_r207692117
  
    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala ---
    @@ -51,6 +52,17 @@ private[spark] class PythonRDD(
       val bufferSize = conf.getInt("spark.buffer.size", 65536)
       val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
     
    +  val memoryMb = {
    --- End diff ---
    
I think there might be a misunderstanding about what `reuseWorker` means. The workers are reused either way; the decision about whether we fork in Python is based on whether we are on Windows. How about we both go read the code path there and see if we reach the same understanding? I could be off too.


---
