One effect I've observed is related to cache use. If the problem
fits in cache it is much faster. With cores sharing cache it can even
be advantageous to *undersubscribe* the problem, i.e. schedule 2
processes on a quad core so each can have the full cache.
-- Mark Borgerding
Klymak Jody wrote
se lamboot is an explicit step.
I've almost got my head wrapped around the technique in
http://www.open-mpi.org/community/lists/users/2007/10/4327.php
Are there any shortcuts I could take for the case where all the clients
are already in a group?
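For reference, here is a minimal sketch of how I read the accept-and-merge
technique from that post (the port distribution, counts and variable names
below are placeholders, not the actual code from the thread):

/* Server-side loop: accept one client at a time and grow the
 * intracommunicator.  MPI_Comm_accept and MPI_Intercomm_merge are
 * collective over 'clients', so every already-connected client must run
 * the same loop on its side (passing high=0); only the newly connecting
 * process calls MPI_Comm_connect and merges with high=1. */
MPI_Comm clients = MPI_COMM_SELF;
char port[MPI_MAX_PORT_NAME];
int i, nclients = 4;                      /* placeholder count */

MPI_Open_port(MPI_INFO_NULL, port);
/* ...make 'port' known to the clients, e.g. via a file or name service... */

for (i = 0; i < nclients; i++) {
    MPI_Comm inter, merged;
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, clients, &inter);
    MPI_Intercomm_merge(inter, 0, &merged);
    MPI_Comm_free(&inter);
    if (clients != MPI_COMM_SELF)
        MPI_Comm_free(&clients);
    clients = merged;
}

If all the clients are already in one group, the obvious shortcut would be a
single accept/connect between the two groups followed by one merge, instead
of looping once per client.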
--
Mark Borgerding
3dB Labs, Inc
Innovate. Develop. Deliver.
Robert Kubrick wrote:
You should be able to merge each child communicator from each accept
thread into a global comm anyway.
Can you elaborate? I am struggling to see how to implement this. A
pointer to sample code would be helpful.
Specifically, I'd like to be able to have a single process
Hello boys and girls. I just wanted to drop a line and give you an update.
First of all, my simple question:
In what files can I find the source code for "mca_oob.oob_send" and
"mca_oob.oob_recv"? I'm having a hard time following the initialization
code that populates the struct of callbacks.
Much of this is pretty
similar to the daemon MPI_Publish_name+MPI_Lookup_name approach. The
main difference is which processes come first.
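For anyone following along, that publish/lookup pattern is the standard
MPI-2 name-service handshake; a minimal sketch (the service name
"my_service" is just a placeholder):

/* server side: open a port, publish it, wait for a connection */
void server(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("my_service", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
}

/* client side: look the port up and connect */
void client(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;
    MPI_Lookup_name("my_service", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
}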
Mark Borgerding wrote:
I'm afraid I can't dictate to the customer that they must upgrade.
The target platform is RHEL 5.2 ( uses openmpi 1.
I'm checking this morning to ensure that singletons properly spawn
on other nodes in the 1.3 release. I sincerely doubt we will backport
a fix to 1.2.
On Jul 30, 2008, at 6:49 AM, Mark Borgerding wrote:
I keep checking my email in hopes that someone will come up with
something that Matt or I might've missed.
I'm just having a hard time accepting that something so fundamental
would be so broken.
It appears to be having
a problem in the latest 1.2 release. So I don't think comm_spawn is
"useless". ;-)
I'm checking this morning to ensure that singletons properly spawn on
other nodes in the 1.3 release. I sincerely doubt we will backport a
fix to 1.2.
On Jul 30, 2008, at 6:49 AM, Mark Borgerding wrote:
I keep checking my email in hopes that someone will come up with
something that Matt or I might've missed.
I'm just having a hard time accepting that something so fundamental
would be so broken.
The MPI_Comm_spawn command is essentially useless without the ability to
spawn processes on other nodes.
http://www.open-mpi.org/faq/?category=running#simple-spmd-run
There are several explanations there pertaining to hostfiles.
On Jul 29, 2008, at 11:57 AM, Mark Borgerding wrote:
I listed the node names in the path named in ompi_info --param rds
hostfile -- no luck.
I also tried copying that fil
ago where the hostnames were assumed to
contain a numeric pattern?
-- Mark
Ralph Castain wrote:
For the 1.2 release, I believe you will find the enviro param is
OMPI_MCA_rds_hostfile_path - you can check that with "ompi_info".
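In other words (hostnames and the file path below are just placeholders),
something along these lines should let the singleton find the other nodes:

  # contents of /path/to/hostfile -- one host per line, slots optional
  node01 slots=4
  node02 slots=4

  export OMPI_MCA_rds_hostfile_path=/path/to/hostfile

with the spawning process started after that environment variable is set.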
On Jul 29, 2008, at 11:10 AM, Mark Borgerding wrote:
go in your -hostfile
file? All of the hosts you intend to use have to be in that file, even
if they don't get used until the comm_spawn.
On Jul 29, 2008, at 9:08 AM, Mark Borgerding wrote:
I've tried lots of different values for the "host" key in the info
handle.
I'
mpi 1.2.5)
-- Mark
Ralph Castain wrote:
The string "localhost" may not be recognized in the 1.2 series for
comm_spawn. Do a "hostname" and use that string instead - should work.
Ralph
On Jul 28, 2008, at 10:38 AM, Mark Borgerding wrote:
When I add the info parameter in
Check.
Parent has high=0
Children have high=1
Jeff Squyres wrote:
Ok, good.
One thing to check is that you have put different values for the
"high" value between the parent group and the children group.
On Jul 28, 2008, at 3:42 PM, Mark Borgerding wrote:
I should've b
ly do not guarantee
binary compatibility between any of our releases.
On Jul 28, 2008, at 10:16 AM, Mark Borgerding wrote:
I am using version 1.2.4 (Fedora 9) and 1.2.5 ( CentOS 5.2 )
A little clarification:
The children do not actually wake up when the parent *sends* data to
them, but only
Here's the code that causes the error:
MPI_Info info;
MPI_Info_create( &info );
MPI_Info_set(info,"host","localhost");
MPI_Comm_spawn( cmd , MPI_ARGV_NULL , nkids , info , 0 ,
MPI_COMM_SELF , &kid , errs );
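Following Ralph's suggestion above, one way to avoid hard-coding
"localhost" is to ask the OS for the real hostname (the buffer name here
is mine):

#include <unistd.h>   /* gethostname */

char myhost[256];
gethostname(myhost, sizeof(myhost));
MPI_Info_set(info, "host", myhost);   /* actual hostname instead of "localhost" */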
Mark Borgerding wrote:
Thanks, I don't know how
Hope that helps
Ralph
On Jul 28, 2008, at 8:54 AM, Mark Borgerding wrote:
How does openmpi decide which hosts are used with MPI_Comm_spawn? All
the docs I've found talk about specifying hosts on the mpiexec/mpirun
command and so are not applicable.
I am unable to spawn on anything but localhost (which makes for a pretty
uninteresting cluster).
How does openmpi decide which hosts are used with MPI_Comm_spawn? All
the docs I've found talk about specifying hosts on the mpiexec/mpirun
command and so are not applicable.
I am unable to spawn on anything but localhost (which makes for a pretty
uninteresting cluster).
When I run
ompi_info -
children's calls to MPI_Intercomm_merge return
-- Mark
Aurélien Bouteiller wrote:
Ok, I'll check to see what happens. Which version of Open MPI are you
using ?
Aurelien
On Jul 27, 2008, at 11:13 PM, Mark Borgerding wrote:
I got something working, but I'm not 100% sure why.
Th
I got something working, but I'm not 100% sure why.
The children woke up and returned from their calls to
MPI_Intercomm_merge only after
the parent used the intercomm to send some data to the children via
MPI_Send.
Mark Borgerding wrote:
Perhaps I am doing something wrong. The chil
< "parent call to MPI_Comm_spawn returns" << endl;
for (k=0;k
MPI_Intercomm_merge is what you are looking for.
Aurelien
On Jul 26, 2008, at 1:23 PM, Mark Borgerding wrote:
Okay, so I've gotten a little bit closer.
I'm using MPI_Comm_spawn to start several children.
I want to merge them into the parent group so I can efficiently Send/Recv
data between them.
Is this possible?
Plan B: I guess if there is no elegant way to merge all those processes
into one group, I can connect sockets and make intercomms to talk from
the parent directly to each child.
-- Mark
Mark Borgerding wrote:
I am writing a code module that plugs into a larger application
framework. That framework loads my code module as a shared object.
So I do not control how the first process gets started, but I still want
it to be able to start and participate in an MPI group.
Here's roughly what I want to happ
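Based on the rest of the thread, the flow being described is roughly: the
framework loads the module, the module initializes MPI as a singleton,
spawns workers on other nodes, and merges everything into one
intracommunicator. A rough sketch under those assumptions (the executable
name, host and count are placeholders):

#include <mpi.h>

void start_mpi_group(void)
{
    MPI_Comm intercomm, everyone;
    MPI_Info info;
    int nkids = 4;                            /* placeholder */

    MPI_Init(NULL, NULL);                     /* singleton init inside the plugin */

    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "node01");     /* placeholder host */

    MPI_Comm_spawn("worker", MPI_ARGV_NULL, nkids, info, 0,
                   MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);
    MPI_Intercomm_merge(intercomm, 0, &everyone);   /* parent: high = 0 */

    /* ...use 'everyone' for Send/Recv with the workers... */
}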