10/29/2008 12:36 PM
Please respond to: Open MPI Users
To: Open MPI Users
Subject: Re: [OMPI users] Working with a CellBlade cluster
Thank you very much Mi and Lenny for your detailed replies.
I believe I can summarize the information for
'Working with a QS22 CellBlade cluster' like this:
- Yes, messages are efficiently handled with "-mca btl openib,sm,self"
- Better to go to the OMPI-1.3 version ASAP
- It is currently mor…
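As a concrete illustration of the first point, a launch line could look like this (the hostnames, process count, and application name are hypothetical; substitute your own blade list):

```shell
# Run 4 ranks across two QS22 blades: intra-blade traffic uses shared
# memory (sm), inter-blade traffic uses InfiniBand (openib), and each
# rank can also send to itself (self).
mpirun -np 4 -host qs22-a,qs22-b \
    -mca btl openib,sm,self ./my_app
```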
> Sent by: users-boun...@open-mpi.org
> 10/23/2008 01:52 PM
> Please respond to: Open MPI Users
> To: "Open MPI Users"
> Subject: Re: [OMPI users] Working with a CellBlade cluster
> According to https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3 …
> *"Lenny Verkhovsky" *
> Sent by: users-boun...@open-mpi.org
>
> 10/23/2008 05:48 AM Please respond to
> Open MPI Users
>
>
> To
>
> "Open MPI Users"
> cc
>
>
> Subject
>
> Re: [OMP
Hi, Lenny,
So the rank file map will be supported in Open MPI 1.3? I'm using
Open MPI 1.2.6 and did not find the parameter "rmaps_rank_file_".
Do you have an idea when Open MPI 1.3 will be available? Open MPI 1.3
has quite a few features I'm looking for.
Thanks,
Mi
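For reference, the rank-file mechanism discussed above can be used roughly like this in Open MPI 1.3 (hostnames and slot numbering are hypothetical; check the 1.3 mpirun man page for the exact rankfile syntax on your system):

```shell
# Rankfile pinning one rank to each socket slot of two QS22 blades
cat > my_rankfile <<'EOF'
rank 0=qs22-a slot=0
rank 1=qs22-a slot=1
rank 2=qs22-b slot=0
rank 3=qs22-b slot=1
EOF

# Launch with the rankfile (Open MPI 1.3 and later)
mpirun -np 4 -rf my_rankfile ./my_app
```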
1. MCA BTL parameters
With "-mca btl openib,self", both messages between two Cell processors on
one QS22 and messages between two QS22s go through IB.
With "-mca btl openib,sm,self", messages on one QS22 go through shared
memory, and messages between QS22s go through IB.
Depending on the message si…
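Since the truncated sentence above begins to discuss message size, note that each BTL's size-related tunables can be inspected with ompi_info (the parameter names shown in comments are what the 1.x series reports; verify against your own installation):

```shell
# List the tunable parameters of the shared-memory BTL,
# including size thresholds such as btl_sm_eager_limit.
ompi_info --param btl sm

# The same for the InfiniBand BTL (e.g. btl_openib_eager_limit).
ompi_info --param btl openib
```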
Hi,
If I understand you correctly, the most suitable way to do it is via the
processor affinity (paffinity) support that we have in Open MPI 1.3 and the
trunk. However, the OS usually distributes processes evenly between sockets
by itself.
There is still no formal FAQ, due to multiple reasons, but you can read how
to use it in the …
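A minimal way to switch on the affinity support mentioned above is the mpi_paffinity_alone MCA parameter, which binds each rank to a processor (exact per-socket placement still needs a rankfile or similar; process count and application name here are hypothetical):

```shell
# Bind each MPI rank to a processor (available in Open MPI 1.2/1.3)
mpirun -np 4 -mca mpi_paffinity_alone 1 ./my_app
```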
Working with a CellBlade cluster (QS22), the requirement is to have one
instance of the executable running on each socket of the blade (there are 2
sockets). The application is of the 'domain decomposition' type, and each
instance is often required to send/receive data to/from both the remote blades …