[OMPI users] About the Open-MPI point-to-point messaging layers

2012-06-30 Thread Sébastien Boisvert
Hello, I really like Open-MPI and its Modular Component Architecture. The --mca parameters are so useful for learning and testing things! So here are my questions. I know that the default point-to-point messaging layer is ob1 (the Obi-Wan Kenobi PML). I know that there is also the PML cm (the
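For context, a PML component can be selected explicitly at run time with the --mca flag the post is asking about. A minimal sketch (the binary name ./app and the process count are placeholders):

```shell
# List the point-to-point messaging layer (PML) components this build knows about
ompi_info | grep "MCA pml"

# Force the default BTL-based PML, ob1
mpirun --mca pml ob1 -np 4 ./app

# Force the cm PML, which drives matching transport layers such as PSM
mpirun --mca pml cm -np 4 ./app
```

Running the same job under each PML is a quick way to see which layer a problem depends on.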

Re: [OMPI users] Performance scaled messaging and random crashes

2012-06-30 Thread Sébastien Boisvert
Hello, Just to give an update on the list: Today, I implemented message data reliability verification in my code using the CRC32 algorithm. Without PSM, everything runs fine. With PSM, I get these errors: Error: RayPlatform detected a message corruption! Tag: RAY_MPI_TAG_REQUEST_VERTEX_COVE
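The verification scheme the post describes can be sketched as follows: append a CRC32 of each outgoing payload and recompute it on receipt. This is a minimal illustration, not RayPlatform's actual API; the function names are hypothetical.

```python
# Sketch of CRC32-based message verification: the sender appends a 4-byte
# checksum, the receiver recomputes it and rejects corrupted messages.
import struct
import zlib

def attach_crc32(payload: bytes) -> bytes:
    """Append a little-endian CRC32 of the payload to the message."""
    return payload + struct.pack("<I", zlib.crc32(payload) & 0xFFFFFFFF)

def verify_crc32(message: bytes) -> bytes:
    """Check the trailing CRC32; raise if the payload was corrupted in flight."""
    payload, received = message[:-4], struct.unpack("<I", message[-4:])[0]
    if zlib.crc32(payload) & 0xFFFFFFFF != received:
        raise ValueError("detected a message corruption")
    return payload
```

A single flipped bit anywhere in the payload or checksum makes `verify_crc32` raise, which is exactly the kind of transport-level corruption the post is diagnosing.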

Re: [OMPI users] fortran program with integer kind=8 using openmpi

2012-06-30 Thread Secretan Yves
Well, with openmpi compiled with Fortran default integer*8, MPI_TYPE_2INTEGER seems to have an incorrect size. The attached Fortran program shows it. When run on openmpi with integer*8: Size of MPI_INTEGER is 8, Size of MPI_INTEGER4 is 4, Size of MPI_INTE
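The expected size is simple arithmetic: MPI_2INTEGER describes a pair of default Fortran integers, so with the default integer widened to integer*8 (8 bytes) the pair should occupy 16 bytes, not 8. A quick illustration of that layout, assuming C's `long long` stands in for integer*8:

```python
# Two 8-byte integers packed back to back, as a 2INTEGER pair would be
# with Fortran default integer*8.
import struct

default_integer = struct.calcsize("q")   # one 8-byte integer
pair_size = struct.calcsize("qq")        # a pair of them
print(default_integer, pair_size)        # 8 16
```

A build that still reports 8 bytes for the pair type is evidently pairing 4-byte integers, which is the mismatch the attached program demonstrates.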

Re: [OMPI users] Cannot build openmpi-1.6 on

2012-06-30 Thread Ralph Castain
Add --disable-vt to your configure line - if you don't need VampirTrace, just bypass the problem. On Jun 30, 2012, at 8:32 AM, John R. Cary wrote: > My system: > > $ uname -a > Linux multipole.txcorp.com 2.6.32-220.17.1.el6.x86_64 #1 SMP Wed May 16 > 00:01:37 BST 2012 x86_64 x86_64 x86_64 GNU/L
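The suggested workaround amounts to reconfiguring with the extra flag; a sketch, assuming the same source tree and an illustrative prefix (the original prefix in the report is truncated):

```shell
# Reconfigure without VampirTrace to bypass the build failure,
# then rebuild and reinstall. The prefix path here is a placeholder.
./configure --disable-vt --prefix=$HOME/openmpi-1.6-install
make -j4
make install
```

--disable-vt simply skips building the VampirTrace tracing support, so nothing else in the installation changes.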

[OMPI users] Cannot build openmpi-1.6 on

2012-06-30 Thread John R. Cary
My system: $ uname -a Linux multipole.txcorp.com 2.6.32-220.17.1.el6.x86_64 #1 SMP Wed May 16 00:01:37 BST 2012 x86_64 x86_64 x86_64 GNU/Linux $ gcc --version gcc (GCC) 4.6.3 Copyright (C) 2011 Configured with '/scr_multipole/cary/vorpalall/builds/openmpi-1.6/configure' \ --prefix=/scr_multi