Re: [OMPI devel] Collective communications may be abend when it use over 2GiB buffer

2012-03-16 Thread Tomoya Adachi
Hi George, I'm a member of the Fujitsu MPI development team. Thank you for picking up the issue. We checked the changesets and unfortunately found they are incomplete. Our testing method is as follows: using LLVM clang to compile trunk with -ftrapv (integer overflow detection), because GCC's -ftr…

Re: [OMPI devel] poor btl sm latency

2012-03-16 Thread Matthias Jurenz
"Unfortunately" also Platform MPI benefits from disabled ASLR. Shared L2/L1I caches (cores 0 and 1), enabled ASLR:
$ mpirun -np 2 taskset -c 0,1 ./NPmpi_pcmpi -u 1 -n 100
Now starting the main loop
0: 1 bytes 100 times --> 17.07 Mbps in 0.45 usec
disabled ASLR: $ mpiru…

Re: [OMPI devel] v1.5 r26132 broken on multiple nodes?

2012-03-16 Thread Eugene Loh
I updated trac 3047. Thanks for the additional patch: "mpirun -H hostname" now works. On 3/15/2012 5:15 PM, Ralph Castain wrote: Let me know what you find - I took a look at the code and it looks correct. All required changes were included in the patch that was applied to the branch. On Ma…

[OMPI devel] Fwd: [hwloc-devel] possible membind changes coming in the Linux kernel

2012-03-16 Thread Jeffrey Squyres
This isn't strictly related to Open MPI, but all of us here care about NUMA, locality, and performance, so I thought I'd pass along something that Brice forwarded to the hwloc-devel list. See Brice's note below, and the original mail to the LKML below that. Begin forwarded message: > From: Br…