Hi,
Could someone let me know the status of multithreading support in
Open MPI and MVAPICH? The MVAPICH documentation says that it is
supported in MVAPICH2, but I am not sure of the same for
Open MPI.
Any updates would indeed be helpful.
Best regards,
-Chev
Hi,
mpirun internally uses ssh to launch a program on multiple nodes.
I would like to see the various parameters that are sent to each of
the nodes. How can I do this?
-chev
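One way to watch what mpirun actually does on each node is its debug options (a sketch; exact option names vary by Open MPI version, so check `mpirun --help` on your installation):

```shell
# General debug output from mpirun itself:
mpirun -d -np 2 hostname

# Keep the launch daemons attached and verbose, which shows how
# processes are started on each node:
mpirun --debug-daemons -np 2 hostname
```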
Hi,
Can we have a Fortran STOP statement before MPI_Finalize?
What is the expected behaviour?
-Chev
with regards to the multi-LID stuff in Open MPI. :-)
On Dec 4, 2006, at 1:27 PM, Chevchenkovic Chevchenkovic wrote:
> Thanks for that.
>
> Suppose, if there are multiple interconnects, say ethernet and
> infiniband, and a million bytes of data is to be sent, then in this
> ca
sends, do you mean to say that each send
will go through a different BTL in a round-robin (RR) manner if they
are connected to the same port?
-chev
On 12/4/06, Gleb Natapov wrote:
On Mon, Dec 04, 2006 at 10:53:26PM +0530, Chevchenkovic Chevchenkovic wrote:
> Hi,
> It is not clear from the code as mentio
, Chevchenkovic Chevchenkovic wrote:
> Also could you please tell me which part of the openMPI code needs to
> be touched so that I can do some modifications in it to incorporate
> changes regarding LID selection...
>
It depends on what you want to do. The part that does RR over all
availabl
Also, could you please tell me which part of the Open MPI code needs to
be touched so that I can make some modifications in it to incorporate
changes regarding LID selection...
On 12/4/06, Chevchenkovic Chevchenkovic wrote:
Is it possible to control the LID where the sends and recvs are
posted, on either end?
On 12/3/06, Gleb Natapov wrote:
On Sun, Dec 03, 2006 at 07:03:33PM +0530, Chevchenkovic Chevchenkovic wrote:
> Hi,
> I had this query. I hope some expert replies to it.
> I have 2 nodes connected
Hi,
I had this query. I hope some expert replies to it.
I have 2 nodes connected point-to-point using infiniband cable. There
are multiple LIDs for each of the end node ports.
When I give an MPI_Send, are the sends posted on different LIDs
on each of the end nodes, or are they posted on
Hi,
here is a sample code that I ran to allocate memory using MPI_Alloc_mem
call.
#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int err;
    int j, count = 100;
    char *ap;

    MPI_Init( &argc, &argv );
    MPI_Errhandler_set( MPI_COMM_WORLD, MPI_ERRORS_RETURN );
    /* the mail is truncated here; remainder of the sample reconstructed */
    err = MPI_Alloc_mem( count, MPI_INFO_NULL, &ap );
    if (err != MPI_SUCCESS) printf( "MPI_Alloc_mem failed\n" );
    for (j = 0; j < count; j++) ap[j] = (char)j;
    MPI_Free_mem( ap );
    MPI_Finalize();
    return 0;
}
Hi,
Thanks for the reply,
A few additional questions:
1. Does Open MPI have the optimisations required to ensure that when send/recv
is called between 2 ranks on the same node, shared-memory methods are
used?
2. If a programmer wants to implement such logic (optimisations for l
Hi,
I had the following setup:
The Rank 0 process on node 1 wants to send an array of a particular size to
the Rank 1 process on the same node.
1. What are the optimisations that can be done/invoked while running mpirun
to perform this memory to memory transfer efficiently?
2. Is there any performance gain if
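As a starting point (a sketch; BTL component names are as in Open MPI 1.x, check `ompi_info` on your version), same-node traffic can be steered onto the shared-memory BTL explicitly:

```shell
# Restrict Open MPI to the self and shared-memory transports; with both
# ranks on one node, the sm BTL handles the memory-to-memory copy:
mpirun --mca btl self,sm -np 2 ./a.out
```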