You were perfectly clear. What we're trying to say here is that the
solution you described a few emails ago doesn't work. At least it
doesn't work for what we want to do (i.e. what Aurelien described in
his first email). We [really] need 2 separate MPI worlds, which we
will connect at a later moment.
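For context, the mechanism being discussed is the MPI-2 Accept/Connect
pair, which bridges two independently started MPI worlds with an
intercommunicator. A minimal sketch follows; it is not code from this
thread, and the file names and the port hand-off via stdout/argv are
assumptions (a real deployment would move the port string through a
name service or a file):

    /* server.c - launched as its own MPI job, with its own MPI_COMM_WORLD */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client;

        MPI_Init(&argc, &argv);
        MPI_Open_port(MPI_INFO_NULL, port);
        /* Hand this string to the other job out of band; quote it,
           since port strings may contain spaces or semicolons. */
        printf("port: %s\n", port);
        fflush(stdout);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        /* 'client' is now an intercommunicator to the other MPI world. */
        MPI_Comm_disconnect(&client);
        MPI_Close_port(port);
        MPI_Finalize();
        return 0;
    }

    /* client.c - a second, independently launched MPI job */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm server;

        MPI_Init(&argc, &argv);
        /* argv[1] carries the (quoted) port string printed by the server. */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
        MPI_Comm_disconnect(&server);
        MPI_Finalize();
        return 0;
    }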
Guess I was unclear, George - I don't know enough about Aurelien's app to
know if it is capable of (or trying to) run as one job, or not.
What has been described on this thread to date is, in fact, a corner case.
Hence the proposal of another way to possibly address a corner case without
disrupting [...]
It's not about the app. It's about the MPI standard. With one mpirun
you start one MPI application (SPMD or MPMD, but still only one). The
first consequence of this is that all processes started with one mpirun
command will belong to the same MPI_COMM_WORLD.
Our mpirun is in fact equivalent to the mp[...]
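George's point is easy to check: under an MPMD launch, every process of
every binary reports the same MPI_COMM_WORLD size. A minimal sketch
(compile it as either half of the colon-separated command line):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* Under "mpirun -n 10 a.out : -n 2 b.out" every process, from
           either binary, prints size == 12: one world, not two. */
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }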
On 7/26/07 4:22 PM, "Aurelien Bouteiller" wrote:
mpirun -hostfile big_pool -n 10 -host 1,2,3,4 application : -n 2 -host
99,100 ft_server

This will not work: this is a way to launch MIMD jobs that share the
same COMM_WORLD, not a way to launch two different applications that
interact through Accept/Connect.
Direct consequence on simple NA[...]
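The open question with Accept/Connect is how the second mpirun learns
the server's port string. MPI-2 also defines MPI_Publish_name /
MPI_Lookup_name for this rendezvous; whether a lookup resolves across
two separate mpirun invocations depends on the runtime's name service,
so treat the fragments below as a sketch only, with "ft_service" a
made-up service name. They slot into the server/client skeletons
sketched earlier in the thread:

    /* server side: publish the port under a well-known service name */
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("ft_service", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    /* ... later: MPI_Unpublish_name, MPI_Close_port ... */

    /* client side: look the port up instead of receiving it out of band */
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm server;
    MPI_Lookup_name("ft_service", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);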
On 7/26/07 2:24 PM, "Aurelien Bouteiller" wrote:
Ralph H Castain wrote:
After some investigation, I'm afraid that I have to report that this - as
far as I understand what you are doing - may no longer work in Open MPI in
the future (and I'm pretty sure isn't working in the trunk today except
[maybe] in the special case of hostfile - haven't verified [...])
Hi Aurelien
Perhaps some bad news on this subject - see below.
On 7/26/07 7:53 AM, "Ralph H Castain" wrote:
> On 7/26/07 7:33 AM, "rolf.vandeva...@sun.com" wrote:
>> Aurelien Bouteiller wrote:
>>> Currently I proceed with two different mpirun invocations, with a
>>> single orte seed holding [...]
Aurelien Bouteiller wrote:
Hi Ralph and everyone,
I just want to make sure the proposed use cases do not break one of the
current Open MPI features I require. For FT purposes, I need to get some
specific hosts (let's say ones with a better MTBF). Those hosts are not
part of the MPI_COMM_WORLD but are used to deploy FT services [...]