Thanks, this is good feedback.

I was worried that the dynamic nature of YARN containers would make it hard
to coordinate wireup, and you have confirmed that.

Thanks

Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985



> On Oct 27, 2014, at 11:25 AM, Ralph Castain <r...@open-mpi.org> wrote:
> 
> 
>> On Oct 26, 2014, at 9:56 PM, Brock Palen <bro...@umich.edu> wrote:
>> 
>> We are starting to look at supporting MPI on our Hadoop/Spark YARN-based
>> cluster.
> 
> You poor soul…
> 
>> I found a bunch of references to Hamster, but what I don't find is whether
>> it was ever merged into regular Open MPI, and if so, is it just another RM
>> integration?  Or does it need more setup?
> 
> When I left Pivotal, it was based on a copy of the OMPI trunk that sat 
> somewhere in the 1.7 series, I believe. Last contact I had indicated they 
> were trying to update, but I’m not sure they were successful.
> 
>> 
>> I found this:
>> http://pivotalhd.docs.pivotal.io/doc/2100/Hamster.html
> 
> Didn't know they had actually (finally) released it, so good to know. Just so
> you are aware, there are major problems running MPI under YARN, as it just
> isn't designed to support MPI. What we did back then was add a JNI layer so
> that ORTE could run underneath it, and then add a PMI-like service to provide
> the wireup support (since YARN couldn't be used to exchange the info itself).
> You also have the issue that YARN doesn't understand the need for all the
> procs to be launched together, so you have to modify YARN to ensure that the
> MPI procs are all running, or else you'll hang in MPI_Init.
> 
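(For the archives: the hang Ralph describes shows up in the very first MPI
call, since the wireup exchange happens inside MPI_Init. Below is a minimal
sketch in plain C (nothing Hamster- or YARN-specific assumed) that will sit
in MPI_Init forever if the resource manager launches only a subset of the
ranks:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* MPI_Init blocks until the runtime has launched and wired up
         * every rank; if the RM starts only some of the containers,
         * the ranks that did start wait here indefinitely. */
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d is up\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Under a gang-launching RM, "mpirun -n 4 ./a.out" prints one line per rank;
under an unmodified YARN a missing container means MPI_Init never returns.)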
>> 
>> Which appears to imply that extra setup is required.  Is this documented
>> anywhere for Open MPI?
> 
> I'm afraid you'll just have to stick with the Pivotal-provided version, as
> the integration is rather complicated. Don't expect much in the way of
> performance! This was purely intended as a way for "casual" MPI users to make
> use of "free" time on their Hadoop cluster, not for any serious technical
> programming.
> 
>> 
>> Brock Palen
>> www.umich.edu/~brockp
>> CAEN Advanced Computing
>> XSEDE Campus Champion
>> bro...@umich.edu
>> (734)936-1985
>> 
>> 
>> 
