Thanks Mark,

HP-MPI is configured correctly on the system - an HP XC 3000 with >800 cores.
It works for all the other users (none of them running GROMACS), and now I've
tested it: I can launch an MPI job which runs fine on the login node (two
quad-core Xeons).
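
For reference, the login-node check was nothing fancier than a trivial launch
along these lines (hostname is just standing in for a proper MPI hello-world
binary):

mpirun -np 4 hostname

which should simply print the node name four times if the HP-MPI launch
machinery itself is working.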

It seems to be an issue with the number of processors passed from the BSUB
settings in the LSF script, via srun, to GROMACS. I just wondered whether
anyone else had 4.5.3 working in a similar setup?
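
To see what the allocation actually hands to srun, I was going to try a
throwaway script along these lines - just a sketch, and the exact environment
variable names (LSB_HOSTS, SLURM_NPROCS) are a guess at what our LSF/SLURM
combination sets:

#BSUB -n 4
#BSUB -q short
#BSUB -o %J.log
echo "LSB_HOSTS: $LSB_HOSTS"
echo "SLURM_NPROCS: $SLURM_NPROCS"
mpirun -srun hostname

If that last line prints four hostnames from a single compute node, then srun
is getting the right task count and the problem is further along, in how
mdrun_mpi picks it up.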

Most of the LSF/SLURM stuff I can google (not much) seems to be pre-4.5,
where mdrun_mpi would take the now-deprecated command-line switch to tell it
the number of processors.
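
For completeness, the shape of script I believe 4.5.3 expects is the one
below, with the core count given only to the scheduler/launcher rather than
to mdrun; the span[hosts=1] resource string is my guess at how to keep all
four slots on a single node in our LSF setup:

#BSUB -n 4
#BSUB -R "span[hosts=1]"
#BSUB -q short
#BSUB -o %J.log
# no processor count here: mdrun_mpi should take its rank count from the MPI environment
mpirun -srun mdrun_mpi -v -s xxx.tpr -o xxx.trr

If that still spawns multiple independent runs, at least it narrows the
problem to the srun/HP-MPI layer rather than the mdrun flags.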

Lee

==================================
Dr Lee Larcombe
- Lecturer in Genetics & Computational Biology
- London Technology Network Business Fellow
- Course Director of MSc Applied Bioinformatics

Course website: http://bit.ly/cM6SkT
Group Research: http://bit.ly/bfqtyo

Bioinformatics Group, Cranfield University
Bedfordshire MK43 0AL.   T: +44 (0)1234 758320
==================================

On 16/04/2011 22:34, "Mark Abraham" <mark.abra...@anu.edu.au> wrote:

>On 16/04/2011 12:13 AM, Larcombe, Lee wrote:
>> Hi gmx-users
>>
>> We have an HPC setup running HP-MPI and LSF/SLURM. Gromacs 4.5.3 has
>>been compiled with MPI support.
>> The compute nodes on the system contain 2 x dual core Xeons which the
>>system sees as 4 processors
>>
>> An LSF script called gromacs_run.lsf is as shown below
>>
>> #BSUB -N
>> #BSUB -J "gromacsTest5"
>> #BSUB -u l.larco...@cranfield.ac.uk
>> #BSUB -n 4
>> #BSUB -q short
>> #BSUB -o %J.log
>> mpirun -srun mdrun_mpi -v -s xxx.tpr -o xxx.trr
>>
>> Queued with:
>>
>> bsub < gromacs_run.lsf
>>
>> This is intended to run 1 mdrun on a single node using all four cores
>>of the two Xeons. The result is that although the job is only submitted
>>to one compute node, 4 mdruns are launched on each of the 4 cores, i.e. 16
>>processes in total. These all behave as if mdrun had not been compiled with
>>MPI support.
>
>mdrun_mpi will run one process on each core that the MPI configuration
>declares is available. Spawning four separate runs of four processes
>indicates that MPI is not configured to reflect the hardware (since each
>run thinks it can have four processes), or that the submission script is
>inappropriate (since four runs get spawned), or both. We can't help
>there. Trouble-shoot with a trivial MPI test program.
>
>> If I tell srun to start just one task with "mpirun -srun -n1 mdrun_mpi
>>-v -s xxx.tpr -o xxx.trr" it starts one job on each core instead of 4:
>>
>> NNODES=1, MYRANK=0, HOSTNAME=comp195
>> NNODES=1, MYRANK=0, HOSTNAME=comp195
>> NNODES=1, MYRANK=0, HOSTNAME=comp195
>> NNODES=1, MYRANK=0, HOSTNAME=comp195
>>
>> Logs show 4 mdrun_mpi starts, 4 file read-ins, and I get 4 copies of all
>>run files in the CWD. I am sure
>> that mdrun_mpi is indeed compiled with MPI support - although our
>>sysadmin did that, not me. For example, if I try to execute "mdrun_mpi
>>-h" I get a message from HP-MPI and have to execute "mpirun mdrun_mpi
>>-h" to see the help text.
>>
>> Does anyone have any experience of running with this setup  - any ideas?
>>
>> Thanks
>> Lee
>

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
