Since I wrote and/or support most of the OMPI DRM interface code at one time or
another, I guess I'll add my $0.02 here. :-)
There is no simple or obvious "winning" answer here. There really aren't all
that many DRMs out there when you filter the list according to the number of
places that use
Hi Eugene
Your argument is correct.
I would cast the conclusion slightly differently, though:
the "pencil" decomposition scales better with the number of
processors than the "book" decomposition.
Your example shows this well for the extreme case
where the number of processes
is equal to the array
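As a rough illustration (the numbers here are made up, just to show the scaling):
for a 100 x 100 x 100 grid,
a "book" (slab) split along X alone gives at most 100 pieces, so P <= 100;
a "pencil" split along X and Y gives at most 100*100 = 10000 pieces, so P <= 10000.
At P = 1000 the book decomposition has already run out of planes to hand out,
while the pencil decomposition still gives each process a 10 x 10 x 100 block.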
Hi, All,
This may seem like an odd query (or not; perhaps it has been brought up
before). My recent work involves HPC usability, i.e. making things
easier for new users by abstracting away the scheduler. I've been
working with DRMAA for interfacing with DRMs, and it occurred to me: what
would be
Eugene Loh wrote:
I don't understand what "waste" or "redundant calculations" you're
referring to here. Let's say each cell is updated based on itself and
neighboring values. Each cell has a unique "owner" who computes the
updated value. That value is shared with neighbors if the cell is n
Gus Correa wrote:
The redundant calculations of overlap points on neighbor subdomains
in general cannot be avoided.
Exchanging the overlap data across neighbor subdomain processes
cannot be avoided either.
However, *full overlap slices* are exchanged after each computational
step (in our case
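A minimal sketch of what one such overlap (ghost) exchange can look like in C,
assuming a 1-D decomposition along X with one ghost plane per side; the layout,
names, and neighbour ranks are illustrative, not code from this thread:

#include <mpi.h>

/* Swap one ghost plane with each X neighbour after a computational step.  */
/* 'field' holds (nx+2)*ny*nz doubles, plane i at offset i*ny*nz; planes 0 */
/* and nx+1 are ghosts; 'left'/'right' are neighbour ranks, or             */
/* MPI_PROC_NULL at a physical boundary.                                   */
void exchange_overlap(double *field, int nx, int ny, int nz,
                      int left, int right, MPI_Comm comm)
{
    int plane = ny * nz;

    /* send first interior plane to the left, receive the right ghost plane */
    MPI_Sendrecv(field + 1 * plane,        plane, MPI_DOUBLE, left,  0,
                 field + (nx + 1) * plane, plane, MPI_DOUBLE, right, 0,
                 comm, MPI_STATUS_IGNORE);

    /* send last interior plane to the right, receive the left ghost plane */
    MPI_Sendrecv(field + nx * plane,       plane, MPI_DOUBLE, right, 1,
                 field + 0 * plane,        plane, MPI_DOUBLE, left,  1,
                 comm, MPI_STATUS_IGNORE);
}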
Gus Correa wrote:
Also, I wonder why you want to decompose on both X and Y ("pencils"),
and not only X ("books"),
which may give you a smaller/simpler domain decomposition
and communication footprint.
Whether or not you can do it this way depends on your
computation, which I don't know about.
Hi Derek
PS - The same book, "MPI: The Complete Reference", has a thorough
description of MPI types in Chapter 3.
You may want to create and use an MPI_TYPE_VECTOR with the
appropriate count, blocklength, and stride, to exchange all the
"0..Z" overlap slices in a single swoop.
(If I understood right,
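For what it's worth, a sketch of that idea in C (the array layout, sizes, and
destination rank below are my assumptions, not taken from this thread):

#include <mpi.h>

/* Send the whole Y-slice at index j of a contiguous nx*ny*nz double array  */
/* (C order: element [i][j][k] sits at i*ny*nz + j*nz + k) to rank 'dest'   */
/* in a single call, using a strided vector type.                           */
void send_y_slice(double *field, int nx, int ny, int nz,
                  int j, int dest, MPI_Comm comm)
{
    MPI_Datatype yslice;

    MPI_Type_vector(nx,        /* count: one block per i value       */
                    nz,        /* blocklength: the full 0..Z run     */
                    ny * nz,   /* stride between consecutive blocks  */
                    MPI_DOUBLE, &yslice);
    MPI_Type_commit(&yslice);

    MPI_Send(field + (long)j * nz, 1, yslice, dest, 0, comm);

    MPI_Type_free(&yslice);
}

The matching receive can use the same datatype, so the whole slice moves in
one send/receive pair instead of nx separate messages.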
Hi Justin
I think you are confusing OpenMP and OpenMPI.
You sound like you're using OpenMP. This mailing list is for OpenMPI, a
specific implementation of MPI. OpenMP and MPI, while having some
overlapping aims, are completely separate.
I suggest you post your query to an OpenMP mailing list.
Hi Derek
Typically in the domain decomposition codes we have here
(atmosphere, oceans, climate)
there is an overlap across the boundaries of subdomains.
Unless your computation is so "embarrassingly parallel" that
each process can operate from start to end totally independent from
the others, y
It already seems that you have a good idea of what challenges you're
facing. So, I'm unclear which part you're asking about.
Which cells do you need in order to update [x][y][z]? It sounds like you need
nearest neighbors. So, one technique is to allocate on each process
not just the subsection of da
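A sketch of that allocation, assuming a one-cell ghost layer on every face
(sizes and names are mine, not from this thread):

#include <stdlib.h>

/* Each process owns an lx*ly*lz sub-block but allocates it padded with one  */
/* extra "ghost" cell on every face, so neighbour values received from other */
/* ranks can be stored right next to the local data.                         */
double *alloc_with_ghosts(int lx, int ly, int lz)
{
    size_t n = (size_t)(lx + 2) * (ly + 2) * (lz + 2);
    return calloc(n, sizeof(double));
}

/* Element (i,j,k) of the padded block, with i in -1..lx, j in -1..ly, k in -1..lz: */
#define IDX(i, j, k, ly, lz) \
    (((size_t)((i) + 1) * ((ly) + 2) + (size_t)((j) + 1)) * ((lz) + 2) + (size_t)((k) + 1))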
Hi all. I am relatively new to MPI, and so this may be covered somewhere else,
but I can't seem to find any links to tutorials mentioning any specifics, so
perhaps someone here can help.
In C, I have a 3D array that I have dynamically allocated and access like
Array[x][y][z]. I was hoping to ca
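A common snag with an Array[x][y][z] built from nested malloc calls is that the
data is not contiguous, so it cannot be handed to MPI as a single buffer. One
workaround is to keep the [x][y][z] syntax but back it with one contiguous
block; a sketch (all names are illustrative):

#include <stdlib.h>

/* Build Array[x][y][z] index tables over ONE contiguous block of data,  */
/* so &arr[0][0][0] can be passed directly to MPI_Send / MPI_Recv.       */
double ***alloc3d(int nx, int ny, int nz)
{
    double   *block = malloc((size_t)nx * ny * nz * sizeof *block);
    double  **rows  = malloc((size_t)nx * ny * sizeof *rows);
    double ***arr   = malloc((size_t)nx * sizeof *arr);
    if (!block || !rows || !arr) { free(block); free(rows); free(arr); return NULL; }

    for (int i = 0; i < nx; i++) {
        arr[i] = rows + (size_t)i * ny;
        for (int j = 0; j < ny; j++)
            arr[i][j] = block + ((size_t)i * ny + j) * nz;
    }
    return arr;   /* the nx*ny*nz doubles live contiguously at arr[0][0] */
}

With this layout, MPI_Send(&arr[0][0][0], nx*ny*nz, MPI_DOUBLE, dest, tag, comm)
sends the whole array in one go.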
Hello everyone,
I have come across a situation where I am trying to make
private the variables that are passed to subroutines using modules. Here is the
situation: the main program calls two different routines. These routines are
functionally different but utilize the same variable n
Probably a bug - I don't recall if/when anyone actually tested that code path.
I'll have a look...probably in the hostfile parser.
What version are you using?
On Mar 10, 2010, at 8:26 AM, Olivier Riff wrote:
> Oops, sorry, I made the test too fast: it still does not work properly with
> several
Oops, sorry, I made the test too fast: it still does not work properly with
several logins:
I start on user1's machine:
mpirun -np 2 --machinefile machinefile.txt MyProgram
with machinefile:
user1@machine1 slots=1
user2@machine2 slots=1
and I got:
user1@machine2 password prompt ?! (there is no us
Since we don't have an obvious answer for this, I have filed ticket 2236 to
track this issue:
https://svn.open-mpi.org/trac/ompi/ticket/2336
On Mar 8, 2010, at 6:56 AM, TRINH Minh Hieu wrote:
>
> Hello,
>
> I changed the test code (hetero.c, in attach) so that the master (where data
> i
OK, it works now, thanks. I forgot to add the slots information in the
machinefile.
Cheers,
Olivier
2010/3/10 Ralph Castain
> It is the exact same syntax inside of the machinefile:
>
> user1@machine1 slots=4
> user2@machine2 slots=3
>
>
>
> On Mar 10, 2010, at 5:41 AM, Olivier Riff wrote
It is the exact same syntax inside of the machinefile:
user1@machine1 slots=4
user2@machine2 slots=3
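For illustration only (the file name and process count are made up; the
per-line syntax is the one shown above):

$ cat machinefile.txt
user1@machine1 slots=4
user2@machine2 slots=3

$ mpirun -np 7 --machinefile machinefile.txt MyProgram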
On Mar 10, 2010, at 5:41 AM, Olivier Riff wrote:
> Hello,
>
> I am using openmpi on several machines which have different user accounts and
> I cannot find a way to specify the login for
Hi,
On 10.03.2010 at 13:41, Olivier Riff wrote:
I am using openmpi on several machines which have different user
accounts and I cannot find a way to specify the login for each
machine in the machinefile passed to mpirun.
The only solution I found is to use the -host argument of mpirun,
su
Hello,
I am using openmpi on several machines which have different user accounts
and I cannot find a way to specify the login for each machine in the
machinefile passed to mpirun.
The only solution I found is to use the -host argument of mpirun, such as:
mpirun -np 2 --host user1@machine1,user2@ma
PLEASE UNSUBSCRIBE ME FROM THIS LIST!
2010/3/10 马少杰
>
>
>
> 2010-03-10
> --
> 马少杰
> --
> Dear Sir:
> I can use openmpi with blcr to save checkpoint and restart my
> mpi applications. Now , I want to torque also support ope
2010-03-10
马少杰
Dear Sir:
I can use Open MPI with BLCR to save checkpoints and restart my MPI
applications. Now I want Torque to also support Open MPI CR. I know that
Torque can support serial programs by means of BLCR. Can Torque also support
Open MPI CR? How should I do this?
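For reference, the Open MPI side of this usually looks like the commands below
(hedged: the exact options depend on the Open MPI version, and whether Torque
can wrap these is precisely the open question):

mpirun -np 4 -am ft-enable-cr ./my_mpi_app     # start with C/R enabled
ompi-checkpoint <pid_of_mpirun>                # take a checkpoint
ompi-restart <snapshot_handle>                 # restart from the snapshot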
Hi,
I set up a Linux cluster with different distributions (1x Debian Lenny,
4x OpenSUSE 11.2) and openmpi-1.4.1, and all my test applications ran perfectly.
Now I decided to create a Debian Live system (Lenny) with openmpi-1.4.1,
to include some more PCs in our student pool, and I always get the
folowi