Re: [OMPI users] MPI_ERR_IN_STATUS from MPI_Bcast?

2011-02-09 Thread Jeremiah Willcock
On Wed, 9 Feb 2011, Jeremiah Willcock wrote: I get the following Open MPI error from 1.4.1: *** An error occurred in MPI_Bcast *** on communicator MPI COMMUNICATOR 3 SPLIT FROM 0 *** MPI_ERR_IN_STATUS: error code in status *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) (hostname and

[OMPI users] MPI_ERR_IN_STATUS from MPI_Bcast?

2011-02-09 Thread Jeremiah Willcock
I get the following Open MPI error from 1.4.1: *** An error occurred in MPI_Bcast *** on communicator MPI COMMUNICATOR 3 SPLIT FROM 0 *** MPI_ERR_IN_STATUS: error code in status *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) (hostname and port removed from each line). There is no
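Since MPI_ERRORS_ARE_FATAL aborts the job before anything can be inspected, a common first debugging step (a sketch, not something suggested in this thread; the function and argument names are placeholders) is to switch the communicator's error handler to MPI_ERRORS_RETURN and print the error string that MPI_Bcast hands back:

```c
/* Hedged sketch: capture the MPI_Bcast error instead of aborting.
 * "comm", "buf", and "count" are placeholders; requires an MPI installation. */
#include <mpi.h>
#include <stdio.h>

void try_bcast(MPI_Comm comm, void *buf, int count)
{
    char msg[MPI_MAX_ERROR_STRING];
    int err, len;

    /* Return error codes to the caller instead of aborting the whole job */
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

    err = MPI_Bcast(buf, count, MPI_BYTE, 0, comm);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "MPI_Bcast failed: %s\n", msg);
    }
}
```

This only localizes the failure; it does not explain why a collective would report MPI_ERR_IN_STATUS, which is normally associated with point-to-point completions.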

Re: [OMPI users] Mpirun --app option not working

2011-02-09 Thread Ralph Castain
Gus is correct - the -host option needs to be in the appfile. On Feb 9, 2011, at 3:32 PM, Gus Correa wrote: > Sindhi, Waris PW wrote: >> Hi, >> I am having trouble using the --app option with OpenMPI's mpirun >> command. The MPI processes launched with the --app option get launched >> on the
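For readers hitting the same symptom: per Ralph's and Gus's advice, host placement belongs inside the appfile itself, not on the mpirun command line. A minimal sketch (node names and program name are hypothetical, not from the thread):

```shell
# appfile -- one application context per line; -host goes here
-np 2 -host node1 ./my_mpi_prog
-np 2 -host node2 ./my_mpi_prog
```

launched with `mpirun --app appfile` (no -host on the command line).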

Re: [OMPI users] Default hostfile not being used by mpirun

2011-02-09 Thread Jeff Squyres
You may have mentioned this in a prior mail, but what version are you using? I tested and am unable to replicate your problem -- my openmpi-mca-params.conf file is always read. Double check the value of your mca_param_files MCA parameter: shell$ ompi_info --param mca param_files Mine comes
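The check Jeff describes looks roughly like the following; the exact parameter listing varies by Open MPI version, so treat the expected paths as an assumption rather than guaranteed output:

```shell
# Ask Open MPI where it looks for MCA parameter files
ompi_info --param mca param_files
# The reported default search path normally includes
#   $HOME/.openmpi/mca-params.conf
#   <prefix>/etc/openmpi-mca-params.conf
```

If the default hostfile setting lives in a file outside that search path, mpirun will silently ignore it.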

Re: [OMPI users] Totalview not showing main program on startup with OpenMPI 1.3.x and 1.4.x

2011-02-09 Thread Dennis McRitchie
Thanks Terry. Unfortunately, -fno-omit-frame-pointer is the default for the Intel compiler when -g is used, which I am using since it is necessary for source level debugging. So the compiler kindly tells me that it is ignoring your suggested option when I specify it. :) Also, since I can

Re: [OMPI users] Mpirun --app option not working

2011-02-09 Thread Gus Correa
Sindhi, Waris PW wrote: Hi, I am having trouble using the --app option with OpenMPI's mpirun command. The MPI processes launched with the --app option get launched on the Linux node that the mpirun command is executed on. The same MPI executable works when specified on the command line using

Re: [OMPI users] Totalview not showing main program on startup with OpenMPI 1.3.x and 1.4.x

2011-02-09 Thread Terry Dontje
This sounds like something I ran into some time ago that involved the compiler omitting frame pointers. You may want to try to compile your code with -fno-omit-frame-pointer. I am unsure if you may need to do the same while building MPI though. --td On 02/09/2011 02:49 PM, Dennis McRitchie

[OMPI users] Totalview not showing main program on startup with OpenMPI 1.3.x and 1.4.x

2011-02-09 Thread Dennis McRitchie
Hi, I'm encountering a strange problem and can't find any prior discussion of it on this mailing list. When building and running my parallel program using any recent Intel compiler and OpenMPI 1.2.8, TotalView behaves entirely correctly, displaying the "Process mpirun is a parallel job. Do you

[OMPI users] Mpirun --app option not working

2011-02-09 Thread Sindhi, Waris PW
Hi, I am having trouble using the --app option with OpenMPI's mpirun command. The MPI processes launched with the --app option get launched on the Linux node that the mpirun command is executed on. The same MPI executable works when specified on the command line using the -np option. Please let

Re: [hwloc-users] Problem getting cpuset of MPI task

2011-02-09 Thread Brice Goglin
On 09/02/2011 16:53, Hendryk Bockelmann wrote: > Since I am new to hwloc there might be a misunderstanding from my > side, but I have a problem getting the cpuset of MPI tasks. I just > want to run a simple MPI program to see on which cores (or CPUs in > case of hyperthreading or SMT) the tasks

Re: [hwloc-users] Problem getting cpuset of MPI task

2011-02-09 Thread Samuel Thibault
Hendryk Bockelmann, on Wed 09 Feb 2011 16:57:43 +0100, wrote: > Since I am new to hwloc there might be a misunderstanding from my side, > but I have a problem getting the cpuset of MPI tasks. >/* get native cpuset of this process */ >cpuset = hwloc_bitmap_alloc(); >
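The snippet quoted above can be completed into a small standalone program. This is a sketch assuming the hwloc 1.x bitmap API (hwloc_bitmap_*); error handling is trimmed and the program must be linked against libhwloc:

```c
/* Sketch: print the cpuset the current process is bound to.
 * Assumes hwloc >= 1.1; compile with: cc query.c -lhwloc */
#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_bitmap_t cpuset;
    char *str;

    /* Build the topology of the local machine */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* get native cpuset of this process */
    cpuset = hwloc_bitmap_alloc();
    hwloc_get_cpubind(topology, cpuset, HWLOC_CPUBIND_PROCESS);

    /* Render the bitmap as a printable string, e.g. "0x0000000f" */
    hwloc_bitmap_asprintf(&str, cpuset);
    printf("process bound to cpuset %s\n", str);

    free(str);
    hwloc_bitmap_free(cpuset);
    hwloc_topology_destroy(topology);
    return 0;
}
```

Note that if the MPI launcher has not bound the task, the returned cpuset may simply cover the whole machine.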

Re: [OMPI users] Segmentation fault with SLURM and non-local nodes

2011-02-09 Thread Samuel K. Gutierrez
On Feb 8, 2011, at 8:21 PM, Ralph Castain wrote: I would personally suggest not reconfiguring your system simply to support a particular version of OMPI. The only difference between the 1.4 and 1.5 series wrt slurm is that we changed a few things to support a more recent version of slurm.

Re: [OMPI users] Unknown overhead in "mpirun -am ft-enable-cr"

2011-02-09 Thread Joshua Hursey
It looks like the logic in the configure script is turning on the FT thread for you when you specify both '--with-ft=cr' and '--enable-mpi-threads'. Can you send me the output of 'ompi_info'? Can you also try the MCA parameter that I mentioned earlier to see if that changes the performance? I

Re: [OMPI users] Unknown overhead in "mpirun -am ft-enable-cr"

2011-02-09 Thread Nguyen Toan
Hi Josh, Thanks for the reply. I did not use the '--enable-ft-thread' option. Here is my build options: CFLAGS=-g \ ./configure \ --with-ft=cr \ --enable-mpi-threads \ --with-blcr=/home/nguyen/opt/blcr \ --with-blcr-libdir=/home/nguyen/opt/blcr/lib \ --prefix=/home/nguyen/opt/openmpi \
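Reformatted for readability, the configure invocation from the message looks like this (paths are the poster's own; the tail of the command was cut off in the archive and is left elided):

```shell
CFLAGS=-g ./configure \
    --with-ft=cr \
    --enable-mpi-threads \
    --with-blcr=/home/nguyen/opt/blcr \
    --with-blcr-libdir=/home/nguyen/opt/blcr/lib \
    --prefix=/home/nguyen/opt/openmpi \
    # ... (remaining options truncated in the archive)
```

Per Josh's reply earlier in the thread, combining --with-ft=cr with --enable-mpi-threads causes configure to enable the FT thread implicitly, which would explain the overhead even though --enable-ft-thread was never passed explicitly.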