[OMPI users] OpenMPI exits when subsequent tail -f in script is interrupted

2011-04-22 Thread Pablo Lopez Rios
Hi, I'm having a bit of a problem with wrapping mpirun in a script. The script needs to run an MPI job in the background and tail -f the output. Pressing Ctrl+C should stop tail -f, and the MPI job should continue. However, mpirun seems to detect the SIGINT that was meant for tail, and kills t…
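[A common workaround for this kind of problem is to detach mpirun from the terminal's process group so the Ctrl+C aimed at tail never reaches it. A minimal sketch, assuming setsid(1) is available; the application name, process count, and log file are placeholders:]

```shell
#!/bin/sh
# Start the MPI job in its own session: setsid puts mpirun in a new
# process group, so the terminal's SIGINT (Ctrl+C) is not delivered to it.
setsid mpirun -np 4 ./my_app > job.log 2>&1 &

# Follow the output. Ctrl+C here stops only this tail -f;
# the detached mpirun keeps running.
tail -f job.log
```

Without setsid, a background job started by a non-interactive script stays in the script's process group, so the terminal delivers SIGINT to mpirun as well, and mpirun treats that as a request to kill the job.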

Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3

2011-04-22 Thread Ralph Castain
On Apr 22, 2011, at 1:42 PM, ya...@adina.com wrote: > Open MPI 1.4.3 + Intel Compilers V8.1 summary: > (in case someone likes to refer to it later) > > (1) To make all Open MPI executables statically linked and > independent of any dynamic libraries, > "--disable-shared" and "--enable-static" o…

Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3

2011-04-22 Thread yanyg
Open MPI 1.4.3 + Intel Compilers V8.1 summary: (in case someone likes to refer to it later) (1) To make all Open MPI executables statically linked and independent of any dynamic libraries, "--disable-shared" and "--enable-static" options should BOTH be forwarded to configure, and "-i-static" opti…
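[The flags described above can be sketched as a configure invocation. This is a configuration fragment, not a tested build recipe: the compiler names match the Intel toolchain of that era, and "-i-static" is the Intel V8.x flag for linking the Intel runtime libraries statically; install prefix and paths are placeholders:]

```shell
# Build Open MPI 1.4.3 as static-only, with Intel compilers.
# Both --disable-shared AND --enable-static are needed.
./configure --prefix=/opt/openmpi-1.4.3-static \
    --disable-shared --enable-static \
    CC=icc CXX=icpc F77=ifort FC=ifort \
    LDFLAGS="-i-static"
make all install
```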

Re: [OMPI users] btl_openib_cpc_include rdmacm questions

2011-04-22 Thread Brock Palen
On Apr 21, 2011, at 6:49 PM, Ralph Castain wrote: > > On Apr 21, 2011, at 4:41 PM, Brock Palen wrote: > >> Given that part of our cluster is TCP only, openib wouldn't even startup on >> those hosts > > That is correct - it would have no impact on those hosts > >> and this would be ignored on…
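[For context, the MCA parameter under discussion is set on the mpirun command line. A minimal sketch; the application name and process count are placeholders, and whether rdmacm is appropriate depends on the fabric:]

```shell
# Restrict the openib BTL's connection manager to rdmacm.
# On TCP-only hosts the openib BTL never initializes, so (per the
# thread) this setting is simply ignored there.
mpirun --mca btl_openib_cpc_include rdmacm -np 4 ./my_app
```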

Re: [OMPI users] Bug in MPI_scatterv Fortran-90 implementation

2011-04-22 Thread Jeff Squyres
Oops! Missed that; thanks. I've committed the change to the trunk and filed CMRs to bring the fix to v1.4 and v1.5. Thanks for reporting the issue. On Apr 22, 2011, at 1:03 AM, Stanislav Sazykin wrote: > Jeff, > > No, the patch did not solve the problem. Looking more, > there is another p…

Re: [OMPI users] MPI_Gatherv error

2011-04-22 Thread David Zhang
I wonder if this is related to the problem reported in [OMPI users] Bug in MPI_scatterv Fortran-90 implementation. On Thu, Apr 21, 2011 at 7:19 PM, Zhangping Wei wrote: > Dear all, > > I am a beginner of MPI, right now I try to use MPI_GATHERV in my code, the > test code just gathers the value of a…

Re: [OMPI users] huge VmRSS on rank 0 after MPI_Init when using "btl_openib_receive_queues" option

2011-04-22 Thread Eloi Gaudry
it varies with the receive_queues specification *and* with the number of mpi processes: memory_consumed = nb_mpi_process * nb_buffers * (buffer_size + low_buffer_count_watermark + credit_window_size) éloi On 04/22/2011 12:26 AM, Jeff Squyres wrote: Does it vary exactly according to your re…
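[The formula quoted above is easy to plug numbers into. A worked example with purely hypothetical values (the real ones come from the btl_openib_receive_queues specification in use):]

```shell
# Hypothetical inputs; the formula is the one stated in the thread.
nb_mpi_process=64
nb_buffers=256
buffer_size=65536
low_buffer_count_watermark=128
credit_window_size=32

# memory_consumed = nb_mpi_process * nb_buffers *
#   (buffer_size + low_buffer_count_watermark + credit_window_size)
echo $(( nb_mpi_process * nb_buffers * \
    (buffer_size + low_buffer_count_watermark + credit_window_size) ))
# prints 1076363264 (about 1 GB), illustrating why rank 0's VmRSS
# can balloon with many processes and large per-peer buffering
```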

Re: [OMPI users] Bug in MPI_scatterv Fortran-90 implementation

2011-04-22 Thread Stanislav Sazykin
Jeff, No, the patch did not solve the problem. Looking more, there is another place where the interfaces come up: in mpi-f90-interfaces.h.sh in ompi/mpi/f90/scripts. If I manually change the two arguments to arrays from scalars in both scripts after running configure but before "make", then it wo…