Re: [OMPI users] Open MPI collectives algorithm selection

2015-05-20 Thread George Bosilca
Khalid, Rule number zero is always selected by default. If the size you are looking for (message or communicator) is larger than zero, then another rule will be selected; otherwise zero is the best selection. The same applies to both communicator and message size, which is a consistent approach from my perspective. If you don'…
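As a rough illustration of the default-rule behaviour George describes, the message-size portion of one rule block in a coll_tuned dynamic rules file might look like this (the field layout — message size, algorithm, fan-in/out, segment size — is from memory of the coll_tuned dynamic-file format, and the algorithm numbers are made up, so treat it as a sketch rather than a reference):

    2                # two message-size rules in this block
    0      1 0 0     # rule zero: message size >= 0 -> algorithm 1 (always matches, hence the default)
    65536  2 0 0     # message size >= 65536 -> algorithm 2 (overrides rule zero for large messages)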

Re: [OMPI users] Open MPI collectives algorithm selection

2015-05-20 Thread Khalid Hasanov
George, Thank you for your answer. Another confusing thing is that if I use a communicator size which does not exist in the configuration file, some rule from the configuration file will be used anyway. For example, let's say I have a configuration file with two communicator sizes, 5 and 16. If…
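For context, a dynamic rules file like the one Khalid describes is normally wired up through the coll/tuned MCA parameters, roughly as below (parameter names are from memory, and the file name, process count and application are placeholders; ompi_info can confirm the exact parameters for a given installation):

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_dynamic_rules_filename ./my_rules.conf \
           -np 16 ./my_app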

Re: [OMPI users] Open MPI collectives algorithm selection

2015-05-20 Thread George Bosilca
Khalid, The way we designed these rules was to define intervals in a two-dimensional space (communicator size, message size). You should think of these rules as exclusive: you match them in the order defined by the configuration file, and you use the algorithm defined by the last matching rule. George…
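Putting the two replies together, a rules file for Khalid's example (communicator sizes 5 and 16) could be laid out roughly as follows. This is a hedged sketch: the collective ID, algorithm numbers and exact field meanings are illustrative rather than taken from the thread, and should be checked against the coll_tuned documentation for the Open MPI version in use.

    1              # number of collectives configured in this file
    2              # collective ID (illustrative; the real IDs come from coll_tuned's enumeration)
    2              # number of communicator-size rule blocks
    5              # block 1: applies from communicator size 5 upward
    1              #   one message-size rule
    0     1 0 0    #   message size >= 0 -> algorithm 1, fan-in/out 0, segment size 0
    16             # block 2: applies from communicator size 16 upward
    2              #   two message-size rules
    0     2 0 0    #   message size >= 0 -> algorithm 2
    8192  3 0 0    #   message size >= 8192 -> algorithm 3

Read this the way George describes: a communicator of size 20 with a 4 KB message falls into block 2 and its first message-size rule (algorithm 2), while a communicator of size 3, smaller than every listed size, falls back to the first block — which matches Khalid's observation that some rule is applied even for sizes that do not appear in the file.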

Re: [OMPI users] Performance differences using mpirun and SLURM

2015-05-20 Thread Ralph Castain
There are major differences when you launch via srun vs mpirun, as the OMPI daemons provide a lot of “helper” info to the procs that they don’t get when directly launched. We’ve tried to minimize those differences, but we can’t necessarily get them all. We’ve seen reports of this before, but the…

[OMPI users] Performance differences using mpirun and SLURM

2015-05-20 Thread Patrick LeGresley
Hi, I've noticed some performance differences when using mpirun and SLURM for job startup. Below I've included example output from the OSU bidirectional bandwidth benchmark that seems to show a significant difference in bandwidth for larger message sizes. I've looked at the Open MPI FAQ for running…
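For reference, the two launch paths being compared look something like this (the benchmark binary name and process count are placeholders, not taken from the original message):

    mpirun -np 2 ./osu_bibw    # started through Open MPI's mpirun and its daemons
    srun -n 2 ./osu_bibw       # started directly by SLURM's srun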

Re: [OMPI users] 'The MPI_Comm_rank() function was called before MPI_INIT was invoked'

2015-05-20 Thread Gilles Gouaillardet
Hi Mohammad, the error message is self-explanatory: you cannot invoke MPI functions before MPI_Init or after MPI_Finalize. The easiest way to solve your problem is to move the MPI_Init call to the beginning of your program. Cheers, Gilles On Wednesday, May 20, 2015, #MOHAMMAD ASIF KHAN…
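A minimal sketch of the ordering Gilles describes (not taken from the caffe-parallel code, just an illustration of where MPI_Init and MPI_Finalize belong):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);               /* must be the first MPI call */

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* safe: called after MPI_Init */
        printf("Hello from rank %d\n", rank);

        MPI_Finalize();                       /* must be the last MPI call */
        return 0;
    }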

Re: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service

2015-05-20 Thread Rolf vandeVaart
-----Original Message----- >From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Lev Givon >Sent: Tuesday, May 19, 2015 10:25 PM >To: Open MPI Users >Subject: Re: [OMPI users] cuIpcOpenMemHandle failure when using OpenMPI 1.8.5 with CUDA 7.0 and Multi-Process Service > >Received from Rolf…

[OMPI users] 'The MPI_Comm_rank() function was called before MPI_INIT was invoked'

2015-05-20 Thread #MOHAMMAD ASIF KHAN#
Hi, I am using the caffe-parallel toolbox for deep learning. The framework has been parallelized using MPI. For my implementation I am using Open MPI 1.6.5. The installation stage for Open MPI goes fine, but when I run the code the following error appears: *** The MPI_Comm_rank() function was c…

Re: [OMPI users] Openmpi 1.8.5 on Linux with threading support

2015-05-20 Thread Nilo Menezes
Thank you, that seems to solve the problem. Best Regards, Nilo Menezes On 5/19/2015 3:34 PM, Ralph Castain wrote: It looks like you have PSM-enabled cards on your system as well as Ethernet, and we are picking that up. Try adding "-mca pml ob1" to your cmd line and see if that helps. On Tu…
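For completeness, the workaround Ralph suggests amounts to forcing the ob1 point-to-point layer on the command line, which keeps the PSM-based path from being selected (application name and process count are placeholders):

    mpirun -mca pml ob1 -np 4 ./my_threaded_app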