Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-09-04 Thread Ralph Castain
Jeff and I were looking at a similar issue today and suddenly realized that the mappings were different - i.e., which ranks land on which nodes differs depending on how you launch. You might want to check if that's the issue here as well. Just launch the attached program using mpirun vs srun and check …
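A minimal mapping-check program along those lines (a sketch, not the attached program) just prints each rank's host so the two launchers' placements can be compared:

    /* map_check.c -- print which node each rank landed on.
     * A stand-in sketch for the attached program: compile with mpicc,
     * launch once with mpirun and once with srun, and diff the output. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        printf("rank %d of %d is on node %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Sorting both outputs by rank shows immediately whether one launcher placed ranks round-robin across nodes while the other filled each node in turn, which changes communication locality for NAMD.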

Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-09-04 Thread Christopher Samuel
On 04/09/13 18:33, George Bosilca wrote: > You can confirm that the slowdown happens during the MPI initialization stages by profiling the application (especially the MPI_Init call). NAMD helpfully prints benchmark and timing numbers during the initialization …

[OMPI devel] [bugs] OSC-related datatype bugs

2013-09-04 Thread Kawashima, Takahiro
Hi, my colleague and I found 3 OSC-related bugs in the OMPI datatype code: one for the trunk and the v1.6/v1.7 branches, and two for the v1.6 branch only. (1) OMPI_DATATYPE_ALIGN_PTR should be placed after memcpy. Last year I reported a bug in the OMPI datatype code and it was fixed in r25721. But the fix was not …
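For readers unfamiliar with this class of bug, here is an illustrative sketch (plain C, not the OMPI code or its macros) of why the alignment step belongs after the preceding memcpy when walking a naturally aligned layout:

    /* Illustrative only: with a natural layout such as
     * struct { char c; double d; }, the walking pointer must be re-aligned
     * AFTER copying the char and BEFORE reading the double -- i.e. the
     * align step follows the memcpy of the previous element. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ALIGN_PTR(p, a) \
        ((unsigned char *)(((uintptr_t)(p) + ((a) - 1)) & ~(uintptr_t)((a) - 1)))

    struct elem { char c; double d; };

    int main(void)
    {
        struct elem src = { 'x', 3.14 };
        const unsigned char *p = (const unsigned char *)&src;
        char c;
        double d;

        memcpy(&c, p, sizeof(char));
        p += sizeof(char);
        p = ALIGN_PTR(p, _Alignof(double));   /* align after the memcpy */
        memcpy(&d, p, sizeof(double));

        printf("c=%c d=%g\n", c, d);
        return 0;
    }

The actual report concerns the ordering of OMPI's own OMPI_DATATYPE_ALIGN_PTR relative to a memcpy in the one-sided (OSC) datatype path; the snippet above only shows the general principle.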

Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-09-04 Thread Jeff Squyres (jsquyres)
On Sep 4, 2013, at 4:33 AM, George Bosilca wrote: > You can confirm that the slowdown happens during the MPI initialization stages by profiling the application (especially the MPI_Init call). You can also try just launching "MPI hello world" (i.e., examples/hello_c.c). It just calls MPI_INIT …
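An "MPI hello world" in the spirit of examples/hello_c.c looks roughly like this (a sketch, not the verbatim Open MPI example); because it does nothing but initialize, print, and finalize, comparing its runtime under mpirun and srun isolates the startup cost:

    /* Sketch of a minimal MPI hello world (not the verbatim hello_c.c). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello, world, I am %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }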

Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-09-04 Thread Ralph Castain
This is 1.7.3 - there is no comm thread in ORTE in that version. On Sep 4, 2013, at 1:33 AM, George Bosilca wrote: > You can confirm that the slowdown happens during the MPI initialization stages by profiling the application (especially the MPI_Init call). > Another possible cause of slowdown …

Re: [OMPI devel] MPI_Is_thread_main() with provided=MPI_THREAD_SERIALIZED

2013-09-04 Thread George Bosilca
OK, I take that back. Based on the MPI standard (page 488), MPI_Is_thread_main must return true only in the thread that called MPI_Init or MPI_Init_thread in this case. The logic I described in my previous email is left to the user. George. On Sep 4, 2013, at 12:11, George Bosilca wrote: > You're in the SERIALIZED …
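A small sketch of what that wording implies (not code from the thread): the thread that called MPI_Init_thread should see true from MPI_Is_thread_main, and another thread should see false. The calls below never overlap, so the pattern is legal even at MPI_THREAD_SERIALIZED:

    /* Sketch: per the page of the standard cited above, only the thread
     * that called MPI_Init_thread should get flag == 1 here. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        int flag;
        MPI_Is_thread_main(&flag);
        printf("worker thread:   MPI_Is_thread_main = %d (expect 0)\n", flag);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, flag;
        pthread_t t;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);

        MPI_Is_thread_main(&flag);
        printf("init'ing thread: MPI_Is_thread_main = %d (expect 1)\n", flag);

        pthread_create(&t, NULL, worker, NULL);
        pthread_join(&t, NULL);   /* main makes no MPI call while the worker runs */

        MPI_Finalize();
        return 0;
    }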

Re: [OMPI devel] MPI_Is_thread_main() with provided=MPI_THREAD_SERIALIZED

2013-09-04 Thread George Bosilca
You're in the SERIALIZED mode, so any thread can make MPI calls. Since in that mode there is no notion of a main thread, consistently returning true out of MPI_Is_thread_main seems like a reasonable approach. This function will behave differently in the FUNNELED mode. George. On Sep 4, 2013 …
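For context, the usage pattern MPI_THREAD_SERIALIZED permits looks roughly like this (a hypothetical sketch, not code from the thread): any thread may enter MPI, as long as the application guarantees that only one thread is inside MPI at a time, for example with a mutex:

    /* Sketch: serializing MPI calls from multiple threads with a lock.
     * A real program would also check 'provided' after MPI_Init_thread. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t mpi_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int rank;
        pthread_mutex_lock(&mpi_lock);      /* only one thread inside MPI at a time */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("worker thread made an MPI call on rank %d\n", rank);
        pthread_mutex_unlock(&mpi_lock);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        pthread_t t;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(&t, NULL);
        MPI_Finalize();
        return 0;
    }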

[OMPI devel] MPI_Is_thread_main() with provided=MPI_THREAD_SERIALIZED

2013-09-04 Thread Lisandro Dalcin
I'm using Open MPI 1.6.5 as packaged in Fedora 19. This build does not enable THREAD_MULTIPLE support: "$ ompi_info | grep Thread" reports "Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)". In my code I call MPI_Init_thread(required=MPI_THREAD_MULTIPLE). After that, MPI_Query_thread() …
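The sequence being described is roughly the following (a reconstruction of the reported scenario, not the original code):

    /* Sketch: request MPI_THREAD_MULTIPLE from a build without
     * multiple-thread support, then inspect what was actually granted. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, level, flag;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Query_thread(&level);    /* should report the same level as 'provided' */
        MPI_Is_thread_main(&flag);

        printf("provided=%d query=%d is_thread_main=%d\n", provided, level, flag);

        MPI_Finalize();
        return 0;
    }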

Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-09-04 Thread George Bosilca
You can confirm that the slowdown happens during the MPI initialization stages by profiling the application (especially the MPI_Init call). Another possible cause of slowdown might be the communication thread in ORTE. If it remains active outside the initialization it will definitely disturb …
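One simple way to time just the MPI_Init call is shown below (a sketch; a PMPI-based profiler that intercepts MPI_Init would work as well). Comparing the number under mpirun and srun shows whether the 20% difference is paid at startup or during the run itself:

    /* Sketch: time MPI_Init with a clock that works before MPI is up. */
    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(int argc, char **argv)
    {
        double t0 = now();
        MPI_Init(&argc, &argv);
        double t1 = now();

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("MPI_Init took %.3f s\n", t1 - t0);

        MPI_Finalize();
        return 0;
    }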