[https://issues.apache.org/jira/browse/SPARK-24374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488646#comment-16488646]

Daniel Galvez commented on SPARK-24374:
---------------------------------------

Some comments:
 * GPU scheduling for YARN is well under way: 
[https://github.com/apache/spark/pull/20761]. I am not sure about the other 
resource managers, but I expect the equivalent work there to be fairly 
straightforward.
 * Did you choose version 3.0.0 because you anticipate that this will require 
backwards-incompatible changes?
 * Open MPI (the only MPI implementation I am familiar with) uses ssh by 
default to launch processes on other machines when you run mpirun. This means 
that you may have to think a little about credentials within Apache Spark, 
annoyingly. (I think there are alternatives to ssh, but I am not familiar with 
them.)
 * I don't consider "Barrier" a great name; I prefer "Resident". This is coming 
from my perspective as a GPU programmer. In CUDA, you use barriers to account 
for the fact that not all threads of execution are guaranteed to be "resident" 
(i.e., executing at the same time, in lock step). If they were all resident, 
you wouldn't actually need a thread barrier, since they would all reach the 
barrier at the same time; that is why the name "barrier" doesn't quite fit for 
me. If you wanted to take a term from CUDA, it uses the word "cooperative" to 
refer to GPU kernels where all threads of execution are guaranteed to be 
resident at the same time. See the CUDA sketch after this list.
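
To make the residency point concrete, here is a minimal CUDA sketch (my own 
illustration, not from the SPIP). A plain __syncthreads() barrier exists 
precisely because thread blocks are not all guaranteed to be resident on the 
GPU at once, whereas a cooperative launch guarantees residency and therefore 
makes a grid-wide sync legal:

{code}
// Minimal sketch (illustration only): block-level barrier vs. a cooperative
// ("resident") grid-wide barrier in CUDA.
// Compile with something like: nvcc -arch=sm_60 -rdc=true resident.cu
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// __syncthreads() is a barrier within one block only, because different
// blocks are NOT guaranteed to be resident on the GPU at the same time.
__global__ void blockBarrier(float *data) {
  __shared__ float tile[256];
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  tile[threadIdx.x] = data[i];
  __syncthreads();                      // barrier across this block only
  data[i] = tile[blockDim.x - 1 - threadIdx.x];
}

// A cooperative kernel is launched so that ALL blocks are resident at once;
// only then is a grid-wide barrier (grid.sync()) legal.
__global__ void cooperative(float *data) {
  cg::grid_group grid = cg::this_grid();
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  data[i] += 1.0f;                      // phase 1
  grid.sync();                          // grid-wide barrier; needs residency
  data[i] *= 2.0f;                      // phase 2 may assume phase 1 is done
}

int main() {
  float *d;
  cudaMalloc(&d, 4 * 256 * sizeof(float));
  blockBarrier<<<4, 256>>>(d);          // ordinary launch, residency not needed

  void *args[] = { &d };
  // Fails unless all 4 blocks can be resident on the device simultaneously.
  cudaLaunchCooperativeKernel((void *)cooperative, dim3(4), dim3(256),
                              args, 0, nullptr);
  cudaDeviceSynchronize();
  cudaFree(d);
  return 0;
}
{code}

If the Spark scheduler is going to guarantee that all tasks in the stage start 
and run at the same time, that is closer to the cooperative/resident model than 
to a barrier, hence my naming suggestion.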

> SPIP: Support Barrier Scheduling in Apache Spark
> ------------------------------------------------
>
>                 Key: SPARK-24374
>                 URL: https://issues.apache.org/jira/browse/SPARK-24374
>             Project: Spark
>          Issue Type: Epic
>          Components: ML, Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Major
>              Labels: SPIP
>         Attachments: SPIP_ Support Barrier Scheduling in Apache Spark.pdf
>
>
> (See details in the linked/attached SPIP doc.)
> {quote}
> The proposal here is to add a new scheduling model to Apache Spark so users 
> can properly embed distributed DL training as a Spark stage to simplify the 
> distributed training workflow. For example, Horovod uses MPI to implement 
> all-reduce to accelerate distributed TensorFlow training. The computation 
> model is different from MapReduce used by Spark. In Spark, a task in a stage 
> doesn’t depend on any other tasks in the same stage, and hence it can be 
> scheduled independently. In MPI, all workers start at the same time and pass 
> messages around. To embed this workload in Spark, we need to introduce a new 
> scheduling model, tentatively named “barrier scheduling”, which launches 
> tasks at the same time and provides users enough information and tooling to 
> embed distributed DL training. Spark can also provide an extra layer of fault 
> tolerance in case some tasks fail in the middle, in which case Spark would 
> abort all tasks and restart the stage.
> {quote}


