This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.4 by this push:
     new df800d01679 [SPARK-41775][PYTHON][FOLLOW-UP] Updating docs for readability
df800d01679 is described below

commit df800d016794f51ee18ccb6bfbab4ee5e8cae796
Author: Rithwik Ediga Lakhamsani <rithwik.ed...@databricks.com>
AuthorDate: Wed Feb 22 11:30:43 2023 +0900

    [SPARK-41775][PYTHON][FOLLOW-UP] Updating docs for readability

    ### What changes were proposed in this pull request?

    Added minor UI fixes.

    <img width="732" alt="image" src="https://user-images.githubusercontent.com/81988348/220488925-eda62d80-d54d-41e9-a9ec-53d02b6fb94d.png">
    <img width="725" alt="image" src="https://user-images.githubusercontent.com/81988348/220488948-929b1c35-4da7-4317-9883-078c2a57896a.png">
    <img width="693" alt="image" src="https://user-images.githubusercontent.com/81988348/220488975-fdc34ae5-a539-4557-993c-d740232b29b5.png">

    ### Why are the changes needed?

    To make the documentation easier to read.

    ### Does this PR introduce _any_ user-facing change?

    No.

    ### How was this patch tested?

    N/A

    Closes #40110 from rithwik-db/docs-update-2.

    Authored-by: Rithwik Ediga Lakhamsani <rithwik.ed...@databricks.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
    (cherry picked from commit ba24dcec42bcd45caee5a4866137bc352cba02ef)
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 python/pyspark/ml/torch/distributor.py | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/python/pyspark/ml/torch/distributor.py b/python/pyspark/ml/torch/distributor.py
index b062d743646..a0a9c5aa932 100644
--- a/python/pyspark/ml/torch/distributor.py
+++ b/python/pyspark/ml/torch/distributor.py
@@ -263,9 +263,23 @@ class TorchDistributor(Distributor):

     .. versionadded:: 3.4.0

+    Parameters
+    ----------
+    num_processes : int, optional
+        An integer that determines how many different concurrent
+        tasks are allowed. We expect spark.task.gpus = 1 for GPU-enabled training. The
+        default is 1; we don't want to invoke multiple cores/GPUs without explicit mention.
+    local_mode : bool, optional
+        A boolean that determines whether we are using the driver
+        node for training. The default is false; we don't want to invoke executors without
+        explicit mention.
+    use_gpu : bool, optional
+        A boolean that indicates whether or not training is done
+        on the GPU. Note that GPU-enabled code differs from
+        CPU-specific code.
+
     Examples
     --------
-    Run PyTorch Training locally on GPU (using a PyTorch native function)
     >>> def train(learning_rate):

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
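Editorial note: the three parameters documented in the patch above (`num_processes`, `local_mode`, `use_gpu`) interact in a way the docstring only states in prose. The following is a minimal, hypothetical sketch (the function `plan_training` is NOT part of pyspark; it is invented here purely to illustrate the documented defaults and semantics):

```python
# Hypothetical sketch (NOT the real TorchDistributor implementation): models
# how the three documented parameters interact, based only on the docstring.

def plan_training(num_processes: int = 1,
                  local_mode: bool = False,
                  use_gpu: bool = False) -> str:
    """Describe where and how training processes would be placed."""
    if num_processes < 1:
        raise ValueError("num_processes must be >= 1")
    # local_mode defaults to False: run on executors unless explicitly asked
    placement = "driver node" if local_mode else "executors"
    # use_gpu defaults to False: CPU training unless explicitly asked
    device = "GPU" if use_gpu else "CPU"
    return f"{num_processes} {device} process(es) on the {placement}"

print(plan_training())  # → 1 CPU process(es) on the executors
print(plan_training(2, local_mode=True, use_gpu=True))  # → 2 GPU process(es) on the driver node
```

In real code these arguments are passed to the `TorchDistributor` constructor (e.g. `TorchDistributor(num_processes=2, local_mode=True, use_gpu=True).run(train, ...)`), whose full usage examples follow in the docstring's `Examples` section.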