Hi Jeff, thanks for replying.
Does that mean you don't have it working properly yet? I read the thread
on the devel list where you discussed the problem and a possible solution,
but I was not able to find a conclusion. I'm in trouble without this
function. Probably I'll need
Unfortunately, I think that this is a known problem with INTERCOMM_MERGE and
COMM_SPAWN parents and children:
https://svn.open-mpi.org/trac/ompi/ticket/2904
On Jan 26, 2012, at 12:11 PM, Rodrigo Oliveira wrote:
Hi there, I tried to understand the behavior Thatyene described, and I
think it is a bug in the Open MPI implementation.
I do not know exactly what is happening because I am not an expert in the
ompi code, but I could see that when one process defines its color as
*MPI_UNDEFINED*, one of the processes on the
It seems the split blocks when it must return MPI_COMM_NULL, i.e., when I
have one process whose color does not exist in the other group or whose
color = MPI_UNDEFINED.
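For readers following along, here is a minimal sketch of the client side of
the scenario being described; the assumption that the clients were started
via MPI_Comm_spawn and the choice of which rank passes MPI_UNDEFINED are
mine, not details from the thread:

/* Client-side sketch: obtain the intercommunicator to the parent and
 * split it.  Per the MPI standard, the rank that passes MPI_UNDEFINED
 * should simply receive MPI_COMM_NULL; the report above is that the
 * call blocks instead. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, newcomm;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);          /* intercomm to the server */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int color = (rank == 1) ? MPI_UNDEFINED : 0;   /* isolate rank 1 */
    MPI_Comm_split(parent, color, rank, &newcomm);

    if (newcomm == MPI_COMM_NULL)
        printf("client %d: got MPI_COMM_NULL as expected\n", rank);
    else
        MPI_Comm_free(&newcomm);

    MPI_Comm_disconnect(&parent);
    MPI_Finalize();
    return 0;
}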
On Wed, Jan 25, 2012 at 4:28 PM, Rodrigo Oliveira wrote:
Hi Thatyene,
I took a look at your code and it seems to be logically correct. Maybe
there is some problem when you call the split function with one client
process having color = MPI_UNDEFINED. I understand you are trying to
isolate one of the client processes to do something applicable only to it,
Hi there!
I've been trying to use the MPI_Comm_split function on an
intercommunicator, but without success. My application is very simple
and consists of a server that spawns 2 clients. After that, I want to split
the intercommunicator between the server and the clients so that one client
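The message is cut off here in the archive. For context, a hedged sketch of
what the server side of such a program might look like; the executable name
"client" and the color choices are illustrative assumptions:

/* Server-side sketch: spawn 2 clients, then take part in the
 * collective split over the resulting intercommunicator. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm, newcomm;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Comm_spawn("client", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    /* The split is collective over the intercomm: the server and both
     * clients must all call it.  Color 0 here pairs the server with
     * the clients that also chose color 0. */
    MPI_Comm_split(intercomm, 0, rank, &newcomm);

    if (newcomm != MPI_COMM_NULL)
        MPI_Comm_free(&newcomm);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}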
On Dec 12, 2011, at 9:45 AM, Josh Hursey wrote:
For MPI_Comm_split, all processes in the input communicator (oldcomm
or MPI_COMM_WORLD in your case) must call the operation since it is
collective over the input communicator. In your program rank 0 is not
calling the operation, so MPI_Comm_split is waiting for it to
participate.
If you want
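To make Josh's point concrete, a small sketch (my own, not from the
thread) in which every rank calls the collective, and a rank that should
be left out passes MPI_UNDEFINED rather than skipping the call:

/* Every rank of the input communicator must call MPI_Comm_split.
 * A rank that should not join any new group passes MPI_UNDEFINED
 * and receives MPI_COMM_NULL instead of hanging the others. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm subcomm;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int color = (rank == 0) ? MPI_UNDEFINED : 1;  /* rank 0 opts out */
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);

    /* rank 0 now holds MPI_COMM_NULL; all other ranks share subcomm. */
    if (subcomm != MPI_COMM_NULL)
        MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}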
I am attempting to split my application into multiple master+workers
groups using MPI_Comm_split. My MPI version is reported as:
mpirun --tag-output ompi_info -v ompi full --parsable
[1,0]:package:Open MPI root@build-x86-64 Distribution
[1,0]:ompi:version:full:1.4.3
[1,0]:ompi:version:svn:r23834
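One common way to carve MPI_COMM_WORLD into master+workers groups, as a
sketch; the group size of 4 and the rank layout are assumptions, not
details from the post:

/* Split MPI_COMM_WORLD into fixed-size groups; rank 0 within each
 * resulting communicator acts as that group's master. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm group_comm;
    int world_rank, group_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    int color = world_rank / 4;            /* 4 ranks per group */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &group_comm);

    MPI_Comm_rank(group_comm, &group_rank);
    if (group_rank == 0)
        printf("world rank %d is the master of group %d\n",
               world_rank, color);

    MPI_Comm_free(&group_comm);
    MPI_Finalize();
    return 0;
}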
On Nov 24, 2010, at 4:55 PM, Hicham Mouline wrote:
> The tree is not symmetrical in that the valid values for the 10th parameter
> depend on the values selected for the 0th to 9th parameters (all the
> ancestry in the tree); e.g., we may have many more nodes on the left of the
> tree than on the right, see attachment (I hope they're allowed)
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Bill Rankin
> Sent: 24 November 2010 15:54
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Comm_split
>
> In this case, creating all those communicators
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Bill Rankin
> Sent: 23 November 2010 19:32
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Comm_split
>
> Hicham:
>
> > If I have 256 MPI
Hicham:
> If I have 256 MPI processes in 1 communicator, am I able to split
> that communicator, then again split the resulting 2 subgroups, then
> again the resulting 4 subgroups and so on, until potentially having 256
> subgroups?
You can. But as the old saying goes: "just because you *can*
Hello
If I have 256 MPI processes in 1 communicator, am I able to split that
communicator, then again split the resulting 2 subgroups, then again the
resulting 4 subgroups and so on, until potentially having 256 subgroups?
Is this insane in terms of performance?
regards,
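What the repeated halving would look like in code, as a sketch (the loop
structure is assumed): with 256 ranks this performs 8 rounds of splitting,
each collective over the current subcommunicator.

/* Repeatedly split the current communicator in half until every
 * process is alone in its own subcommunicator. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm comm = MPI_COMM_WORLD;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(comm, &size);

    while (size > 1) {
        MPI_Comm half;
        MPI_Comm_rank(comm, &rank);
        int color = (rank < size / 2) ? 0 : 1;   /* lower or upper half */
        MPI_Comm_split(comm, color, rank, &half);
        if (comm != MPI_COMM_WORLD)
            MPI_Comm_free(&comm);                /* release the old level */
        comm = half;
        MPI_Comm_size(comm, &size);
    }

    if (comm != MPI_COMM_WORLD)
        MPI_Comm_free(&comm);
    MPI_Finalize();
    return 0;
}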