Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-07-23 Thread Ralph Castain
Not to the 1.6 series, but it is in the about-to-be-released 1.7.3, and will be there from that point onwards. Still waiting to see if it resolves the difference. On Jul 23, 2013, at 4:28 PM, Christopher Samuel wrote: > On 23/07/13 19:34, Joshua Ladd wrote: ...

Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-07-23 Thread Christopher Samuel
On 23/07/13 19:34, Joshua Ladd wrote: > Hi, Chris  Hi Joshua, I've quoted you in full as I don't think your message made it through to the slurm-dev list (at least I've not received it from there yet). > Funny you should mention this now. We identified ...

[OMPI devel] OpenSHMEM up on bitbucket

2013-07-23 Thread Joshua Ladd
Dear OMPI Developers, I have put Mellanox OpenSHMEM up for review on my Bitbucket. Please "git" and test at your leisure. Questions, comments, and critiques are most welcome. git clone https://jladd_m...@bitbucket.org/jladd_math/mlnx-oshmem.git
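For anyone who wants to try it, the cycle I have in mind looks roughly like the sketch below. The autogen/configure/make steps and the oshcc/oshrun wrapper names are assumptions based on the usual Open MPI-style source tree, not a statement of what this repository actually requires, and hello_oshmem.c stands in for whatever test program you care to compile:

    # clone (the truncated account name in the URL is as posted above)
    git clone https://jladd_m...@bitbucket.org/jladd_math/mlnx-oshmem.git
    cd mlnx-oshmem

    # assumed Open MPI-style build
    ./autogen.sh && ./configure --prefix=$HOME/oshmem && make -j8 install

    # assumed wrapper names: compile and run a small test across 4 PEs
    $HOME/oshmem/bin/oshcc hello_oshmem.c -o hello_oshmem
    $HOME/oshmem/bin/oshrun -np 4 ./hello_oshmem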

Re: [OMPI devel] 'make re-install' : remove 'ortecc' symlink also

2013-07-23 Thread Jeff Squyres (jsquyres)
Hmm, I think we do, but it looks like we might have done it wrong for OSs that have an $(EXEEXT), namely Windows. Can you test this trunk patch and see if it fixes the issue? On Jul 14, 2013, at 5:35 PM, Vasiliy wrote: > Makefile: please, remove/check for 'ortecc' symlink before proceeding
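The trunk patch Jeff refers to is not reproduced in this digest preview, but the idea under discussion is roughly the step below: force-remove any stale 'ortecc' link before re-creating it, including the $(EXEEXT) variant used on Windows. This is a hypothetical sketch written as plain shell with ${...} placeholders (in the real Makefile these would be Make variables, and the opal_wrapper link target is an assumption), not the committed change:

    # effective behaviour wanted from the install step for each wrapper link
    cd "${DESTDIR}${bindir}" && \
        rm -f "ortecc${EXEEXT}" && \
        ln -s "opal_wrapper${EXEEXT}" "ortecc${EXEEXT}"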

Re: [OMPI devel] basename: a faulty warning 'extra operand --test-name' in tests causes test-driver to fail

2013-07-23 Thread Jeff Squyres (jsquyres)
Sorry for the delay in replying... Great! In run_tests, does changing progname="`basename $*`" to progname="`basename $1`" fix the problem for you? On Jul 14, 2013, at 3:51 AM, Vasiliy wrote: > I'm happy to provide you with an update on 'extra operand --test-name' > occasionally being fed ...
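To make the failure mode concrete, here is a small shell demonstration of why the two forms differ once the test driver starts appending options. The exact arguments run_tests receives and the exact basename error text depend on the setup, so take this as an illustration rather than a reproduction:

    # with a single argument the two forms agree
    set -- ./run_tests
    progname="`basename $*`"   # -> run_tests
    progname="`basename $1`"   # -> run_tests

    # once extra arguments such as --test-name are passed along, $* hands
    # basename several words instead of one path, so it warns ("extra
    # operand ..." or similar, depending on the implementation) and no
    # usable program name is produced
    set -- ./run_tests --test-name sometest
    progname="`basename $*`"   # fails: basename is given multiple operands
    progname="`basename $1`"   # -> run_tests, as intended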

Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-07-23 Thread Joshua Ladd
Hi, Chris  Funny you should mention this now. We identified and diagnosed the issue some time ago as a combination of SLURM's PMI1 implementation and some of, what I'll call, OMPI's topology requirements (probably not the right word). Here's what is happening, in a nutshell, when you launch with ...

[OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun

2013-07-23 Thread Christopher Samuel
Hi there slurm-dev and OMPI devel lists, Bringing up a new IBM SandyBridge cluster, I'm running a NAMD test case and noticed that if I run it with srun rather than mpirun it goes over 20% slower. These are all launched from an sbatch script too. Slurm ...
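For reference, the comparison is between the two launch lines in the sketch below, run inside the same kind of allocation (in practice one launch method per job, timed separately). The node/task counts, the NAMD input file, and the output names are placeholders for illustration, not the actual job script:

    #!/bin/bash
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=16

    # launch A: direct srun, processes wired up through SLURM's PMI
    srun ./namd2 benchmark.namd > namd_srun.log

    # launch B: mpirun inside the same sbatch allocation, wired up by ORTE
    mpirun -np $SLURM_NTASKS ./namd2 benchmark.namd > namd_mpirun.log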