So, it appears that for a machine of this type (dual quad-core CPUs),
this approach would be correct for my tests...
[jpummill@n1 bin]$ more my-hosts
n1 slots=8 max_slots=8
and subsequently launch two jobs in this configuration...
/home/jpummill/openmpi-1.2.2/bin/mpirun --hostfile my-hosts -n
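The command above is cut off; a hypothetical pair of launches that splits the
eight slots between two jobs (the -n 4 counts and the job names ./job1 and
./job2 are placeholders, not from the original post):

  /home/jpummill/openmpi-1.2.2/bin/mpirun --hostfile my-hosts -n 4 ./job1 &
  /home/jpummill/openmpi-1.2.2/bin/mpirun --hostfile my-hosts -n 4 ./job2 &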
The cleaner way to define such an environment is by using the max_slots
and/or slots options in the hostfile. Here is a FAQ entry about how Open
MPI deals with these options:
http://www.open-mpi.org/faq/?category=running#mpirun-scheduling
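Briefly: slots is how many processes Open MPI schedules on a node by
default, while max_slots is the ceiling it will not exceed even when
oversubscription is requested. A hypothetical two-node hostfile that makes
the distinction visible:

  n1 slots=4 max_slots=8
  n2 slots=4 max_slots=8

With this file, mpirun places four processes per node by default but
allows up to eight per node when explicitly asked.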
george.
On Oct 26, 2007, at 10:52 AM, Jeff Pummill wrote:
Jeff,
A simple suggestion: put eight (or whatever the number of cores is)
identical entries for each node, such as
compute-0-0
compute-0-0
compute-0-0
compute-0-0
compute-0-0
compute-0-0
compute-0-0
compute-0-0
compute-0-1
compute-0-1
compute-0-1
compute-0-1
...
It seems to work for my tests.
I am doing some testing on a variety of 8-core nodes in which I just
want to execute a couple of executables and have them distributed to the
available cores without overlapping. Typically, this would be done with
a parameter like /-machinefile machines/, but I have no idea what names
to put in the machinefile.
Hi, we are facing a problem when calling MPI_Send over IB.
The problem looks similar to ticket
https://svn.open-mpi.org/trac/ompi/ticket/232, but this time it is for the
IB interface. When we force the program to run with --mca btl tcp,self it
runs fine. Over IB, it gives an error message.
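A run that forces the InfiniBand transport instead would look something
like the following (./a.out and the process count are placeholders; openib
is the IB BTL in Open MPI 1.2):

  mpirun --mca btl openib,self -np 2 ./a.out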
This is not an MPI problem.
Without looking at your code in detail, I'm guessing that you're
accessing memory without any regard to memory layout and/or caching.
Such an access pattern will therefore thrash your L1 and L2 caches
and access memory in a truly horrible pattern that guarantees poor
performance.
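As an illustration (not the poster's code), here is the classic case in C:
summing a 2-D array row by row versus column by column. The column-major
loop touches a different cache line on almost every access, which is
exactly the kind of pattern that thrashes L1/L2:

  #include <stdio.h>
  #include <time.h>

  #define N 2048
  static double a[N][N];   /* 32 MB, much larger than L2 */

  int main(void) {
      double sum = 0.0;
      clock_t t0 = clock();
      /* Row-major traversal: consecutive elements share cache lines. */
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
              sum += a[i][j];
      clock_t t1 = clock();
      /* Column-major traversal: stride of N doubles, a new line each time. */
      for (int j = 0; j < N; j++)
          for (int i = 0; i < N; i++)
              sum += a[i][j];
      clock_t t2 = clock();
      printf("row-major %.2fs, column-major %.2fs (sum=%g)\n",
             (double)(t1 - t0) / CLOCKS_PER_SEC,
             (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
      return 0;
  }

On most machines the second loop is several times slower, for the reason
described above.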
Thanks,
The array bounds are the same on all the nodes, and the compute nodes are
identical, i.e. SunFire V890 nodes. I have also changed the root process
to be on different nodes, but the problem remains the same. I still don't
understand the problem very well, and my progress is at a standstill.
Hi,
Please ensure that the following things are correct:
1) The array bounds are equal, i.e. "my_x" and "size_y" have the same
value on all nodes.
2) The nodes are homogeneous. To check that, you could choose the root to
be some different node and run the program again.
-Neeraj
On Fri, 26 Oct 2007 10:13:15 +0500 (
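For point 1, a minimal sketch of a cross-rank consistency check (my_x and
size_y stand in for the poster's variables and are assumed here to be
ints):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int rank, vals[2], lo[2], hi[2];
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      int my_x = 100, size_y = 200;   /* hypothetical per-rank values */
      vals[0] = my_x;
      vals[1] = size_y;

      /* If min == max for each value, every rank agrees. */
      MPI_Allreduce(vals, lo, 2, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
      MPI_Allreduce(vals, hi, 2, MPI_INT, MPI_MAX, MPI_COMM_WORLD);

      if (rank == 0) {
          if (lo[0] != hi[0] || lo[1] != hi[1])
              printf("mismatch: my_x in [%d,%d], size_y in [%d,%d]\n",
                     lo[0], hi[0], lo[1], hi[1]);
          else
              printf("my_x and size_y agree on all ranks\n");
      }
      MPI_Finalize();
      return 0;
  }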
Thanks for your reply,
I used MPI_Wtime for my application, but even then process 0 took longer
executing the mentioned code segment. I might be wrong, but what I see is
that process 0 takes more time to access the array elements than the
other processes. Now I don't see what to do, because the mentioned...
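For reference, the usual MPI_Wtime pattern for timing one segment on every
rank (a minimal sketch; the measured segment is a placeholder). The
barrier matters: without it, differences in when each rank reaches the
segment show up as apparent compute-time differences:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      MPI_Barrier(MPI_COMM_WORLD);   /* line all ranks up first */
      double t0 = MPI_Wtime();
      /* ... code segment being measured ... */
      double t1 = MPI_Wtime();

      printf("rank %d: %.6f s\n", rank, t1 - t0);
      MPI_Finalize();
      return 0;
  }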