Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21977#discussion_r207636504

    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala ---
    @@ -51,6 +52,17 @@ private[spark] class PythonRDD(
       val bufferSize = conf.getInt("spark.buffer.size", 65536)
       val reuseWorker = conf.getBoolean("spark.python.worker.reuse", true)
    +  val memoryMb = {
    --- End diff --

    I thought the comments below were clear: if a single worker is reused, it gets the entire allocation. If each core starts its own worker, each one gets an equal share. If `reuseWorker` is actually ignored, then this needs to be updated.
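To illustrate the allocation rule described in the comment, here is a minimal sketch, assuming a total Python memory budget and a core count per executor (the names `totalPysparkMemoryMb` and `coresPerExecutor` are illustrative, not the PR's actual code):

```scala
// Hypothetical sketch of the per-worker memory split discussed above.
object WorkerMemorySketch {
  def perWorkerMemoryMb(totalPysparkMemoryMb: Int,
                        coresPerExecutor: Int,
                        reuseWorker: Boolean): Int = {
    if (reuseWorker) {
      // A single reused worker gets the entire allocation.
      totalPysparkMemoryMb
    } else {
      // Each core starts its own worker; each gets an equal share.
      totalPysparkMemoryMb / math.max(coresPerExecutor, 1)
    }
  }

  def main(args: Array[String]): Unit = {
    assert(perWorkerMemoryMb(4096, 4, reuseWorker = true) == 4096)
    assert(perWorkerMemoryMb(4096, 4, reuseWorker = false) == 1024)
    println("ok")
  }
}
```

Under this reading, whether `reuseWorker` is honored determines which branch applies, which is exactly the point of the review comment.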