Re: [gmx-users] problems with non pbc simulations in parallel

2010-03-10 Thread Gavin Melaugh
Hi Berk

Cheers for your help

Gavin

Berk Hess wrote:
> Hi,
>
> This is a silly bug with Nose-Hoover and pbc=no.
> I fixed it for 4.0.8 (if we ever release that).
>
> To fix it you only need to move a brace up 4 lines in src/mdlib/init.c.
> Or you can use the v-rescale thermostat.
>
> Berk
>
> --- a/src/mdlib/init.c
> +++ b/src/mdlib/init.c
> @@ -119,9 +119,9 @@ static void set_state_entries(t_state *state,t_inputrec *ir, int nnodes)
>      if (ir->epc != epcNO) {
>        state->flags |= (1<<estPRES_PREV);
>      }
> -    if (ir->etc == etcNOSEHOOVER) {
> -      state->flags |= (1<<estNH_XI);
> -    }
> +  }
> +  if (ir->etc == etcNOSEHOOVER) {
> +    state->flags |= (1<<estNH_XI);
>    }
>    if (ir->etc == etcNOSEHOOVER || ir->etc == etcVRESCALE) {
>      state->flags |= (1<<estTC_INT);
>
> > Date: Wed, 10 Mar 2010 14:16:38 +
> > From: gmelaug...@qub.ac.uk
> > To: gmx-users@gromacs.org
> > Subject: [gmx-users] problems with non pbc simulations in parallel
> >
> > Hi all
> >
> > I have installed gromacs-4.0.7-parallel with Open MPI. I have
> > successfully run a few short simulations on 2, 3 and 4 nodes using pbc. I
> > am now interested in simulating a cluster of 32 molecules with no pbc in
> > parallel, but the simulation does not proceed. I have set my box vectors
> > to 0 0 0 in the conf.gro file, set pbc = no in the mdp file, and used
> > particle decomposition. The feedback I get from the following command
> >
> > nohup mpirun -np 2 /local1/gromacs-4.0.7-parallel/bin/mdrun -pd -s &
> >
> > is
> >
> > Back Off! I just backed up md.log to ./#md.log.1#
> > Reading file topol.tpr, VERSION 4.0.7 (single precision)
> > starting mdrun 'test of 32 hexylcage molecules'
> > 1000 steps, 0.0 ps.
> > [emerald:22662] *** Process received signal ***
> > [emerald:22662] Signal: Segmentation fault (11)
> > [emerald:22662] Signal code: Address not mapped (1)
> > [emerald:22662] Failing at address: (nil)
> > [emerald:22662] [ 0] /lib64/libpthread.so.0 [0x7fbc17eefa90]
> > [emerald:22662] [ 1]
> > /local1/gromacs-4.0.7-parallel/bin/mdrun(nosehoover_tcoupl+0x74) [0x436874]
> > [emerald:22662] [ 2]
> > /local1/gromacs-4.0.7-parallel/bin/mdrun(update+0x171) [0x4b2311]
> > [emerald:22662] [ 3]
> > /local1/gromacs-4.0.7-parallel/bin/mdrun(do_md+0x2608) [0x42dd38]
> > [emerald:22662] [ 4]
> > /local1/gromacs-4.0.7-parallel/bin/mdrun(mdrunner+0xe33) [0x430973]
> > [emerald:22662] [ 5]
> > /local1/gromacs-4.0.7-parallel/bin/mdrun(main+0x5b8) [0x431128]
> > [emerald:22662] [ 6] /lib64/libc.so.6(__libc_start_main+0xe6)
> > [0x7fbc17ba6586]
> > [emerald:22662] [ 7] /local1/gromacs-4.0.7-parallel/bin/mdrun [0x41e1e9]
> > [emerald:22662] *** End of error message ***
> >
> > --
> > mpirun noticed that process rank 1 with PID 22662 on node emerald exited
> > on signal 11 (Segmentation fault).
> >
> > P.S. I have run several of these non-pbc simulations with the same system
> > in serial and have never experienced a problem. Has anyone ever come
> > across this sort of problem before? If so, could you please provide
> > some advice.
> >
> > Many Thanks
> >
> > Gavin
> >



RE: [gmx-users] problems with non pbc simulations in parallel

2010-03-10 Thread Berk Hess

Hi,

This is a silly bug with Nose-Hoover and pbc=no.
I fixed it for 4.0.8 (if we ever release that).

To fix it you only need to move a brace up 4 lines in src/mdlib/init.c.
Or you can use the v-rescale thermostat.
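If you switch thermostats instead of patching, the .mdp change amounts to roughly
the following (a sketch only; the coupling group and the tau_t/ref_t values are
illustrative, keep whatever your run already uses):

tcoupl  = V-rescale
tc-grps = System
tau_t   = 0.1
ref_t   = 300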

Berk
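For anyone wondering why it segfaults: as far as I can tell from the patch below,
the Nose-Hoover state flag was only being set inside the pbc branch of
set_state_entries(), so with pbc=no the xi array is never allocated before the
thermostat touches it. The little stand-alone mock below (plain C, not GROMACS
source; the identifiers only mimic the real ones) shows that control flow:

/* Stand-alone mock of the pre-patch control flow; not GROMACS source. */
#include <stdio.h>
#include <stdlib.h>

enum { epbcXYZ, epbcNONE };           /* stand-in for the pbc setting      */
enum { etcNO, etcNOSEHOOVER };        /* stand-in for the tcoupl setting   */
#define estNH_XI (1 << 0)             /* stand-in for the state-flag bit   */

typedef struct { int flags; double *nosehoover_xi; } mock_state;

static void mock_set_state_entries(mock_state *s, int ePBC, int etc)
{
    s->flags = 0;
    if (ePBC != epbcNONE) {           /* pre-patch: the Nose-Hoover check      */
        if (etc == etcNOSEHOOVER) {   /* only runs inside the pbc branch       */
            s->flags |= estNH_XI;
        }
    }
    /* the xi array is only allocated when the flag is set */
    s->nosehoover_xi = (s->flags & estNH_XI) ? calloc(1, sizeof(double)) : NULL;
}

int main(void)
{
    mock_state s;
    mock_set_state_entries(&s, epbcNONE, etcNOSEHOOVER);  /* pbc = no + Nose-Hoover */
    printf("nosehoover_xi allocated: %s\n",
           s.nosehoover_xi ? "yes" : "no -> NULL dereference in the thermostat");
    free(s.nosehoover_xi);
    return 0;
}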

--- a/src/mdlib/init.c
+++ b/src/mdlib/init.c
@@ -119,9 +119,9 @@ static void set_state_entries(t_state *state,t_inputrec *ir, int nnodes)
     if (ir->epc != epcNO) {
       state->flags |= (1<<estPRES_PREV);
     }
-    if (ir->etc == etcNOSEHOOVER) {
-      state->flags |= (1<<estNH_XI);
-    }
+  }
+  if (ir->etc == etcNOSEHOOVER) {
+    state->flags |= (1<<estNH_XI);
   }
   if (ir->etc == etcNOSEHOOVER || ir->etc == etcVRESCALE) {
     state->flags |= (1<<estTC_INT);

> Date: Wed, 10 Mar 2010 14:16:38 +
> From: gmelaug...@qub.ac.uk
> To: gmx-users@gromacs.org
> Subject: [gmx-users] problems with non pbc simulations in parallel
> 
> Hi all
> 
> I have installed gromacs-4.0.7-parallel with Open MPI. I have
> successfully run a few short simulations on 2, 3 and 4 nodes using pbc. I
> am now interested in simulating a cluster of 32 molecules with no pbc in
> parallel, but the simulation does not proceed. I have set my box vectors
> to 0 0 0 in the conf.gro file, set pbc = no in the mdp file, and used
> particle decomposition. The feedback I get from the following command
> 
> nohup mpirun -np 2 /local1/gromacs-4.0.7-parallel/bin/mdrun -pd -s &
> 
> is
> 
> Back Off! I just backed up md.log to ./#md.log.1#
> Reading file topol.tpr, VERSION 4.0.7 (single precision)
> starting mdrun 'test of 32 hexylcage molecules'
> 1000 steps,  0.0 ps.
> [emerald:22662] *** Process received signal ***
> [emerald:22662] Signal: Segmentation fault (11)
> [emerald:22662] Signal code: Address not mapped (1)
> [emerald:22662] Failing at address: (nil)
> [emerald:22662] [ 0] /lib64/libpthread.so.0 [0x7fbc17eefa90]
> [emerald:22662] [ 1]
> /local1/gromacs-4.0.7-parallel/bin/mdrun(nosehoover_tcoupl+0x74) [0x436874]
> [emerald:22662] [ 2]
> /local1/gromacs-4.0.7-parallel/bin/mdrun(update+0x171) [0x4b2311]
> [emerald:22662] [ 3]
> /local1/gromacs-4.0.7-parallel/bin/mdrun(do_md+0x2608) [0x42dd38]
> [emerald:22662] [ 4]
> /local1/gromacs-4.0.7-parallel/bin/mdrun(mdrunner+0xe33) [0x430973]
> [emerald:22662] [ 5]
> /local1/gromacs-4.0.7-parallel/bin/mdrun(main+0x5b8) [0x431128]
> [emerald:22662] [ 6] /lib64/libc.so.6(__libc_start_main+0xe6)
> [0x7fbc17ba6586]
> [emerald:22662] [ 7] /local1/gromacs-4.0.7-parallel/bin/mdrun [0x41e1e9]
> [emerald:22662] *** End of error message ***
> --
> mpirun noticed that process rank 1 with PID 22662 on node emerald exited
> on signal 11 (Segmentation fault).
> 
> P.S. I have run several of these non-pbc simulations with the same system
> in serial and have never experienced a problem. Has anyone ever come
> across this sort of problem before? If so, could you please provide
> some advice.
> 
> Many Thanks
> 
> Gavin
> 

Re: [gmx-users] problems with non pbc simulations in parallel

2010-03-10 Thread Maurício Menegatti Rigo
Sorry Gavin,

I haven't resolved the problem yet. I am just using your question as mine too. The
only thing I would add is that my processor is an i7.

Re: [gmx-users] problems with non pbc simulations in parallel

2010-03-10 Thread Gavin Melaugh
Maurício Menegatti Rigo wrote:
> I'm facing the same problem, which started just after I began running the
> molecular dynamics on an i7 processor.
>
Hi Mauricio

Was that a response to my query? If so, did you resolve the problem?

Gavin


Re: [gmx-users] problems with non pbc simulations in parallel

2010-03-10 Thread Maurício Menegatti Rigo
I'm facing the same problem, which started just after I began running the
molecular dynamics on an i7 processor.

[gmx-users] problems with non pbc simulations in parallel

2010-03-10 Thread Gavin Melaugh
Hi all

I have installed gromacs-4.0.7-parallel with Open MPI. I have
successfully run a few short simulations on 2, 3 and 4 nodes using pbc. I
am now interested in simulating a cluster of 32 molecules with no pbc in
parallel, but the simulation does not proceed. I have set my box vectors
to 0 0 0 in the conf.gro file, set pbc = no in the mdp file, and used
particle decomposition. The feedback I get from the following command

nohup mpirun -np 2 /local1/gromacs-4.0.7-parallel/bin/mdrun -pd -s &

is

Back Off! I just backed up md.log to ./#md.log.1#
Reading file topol.tpr, VERSION 4.0.7 (single precision)
starting mdrun 'test of 32 hexylcage molecules'
1000 steps,  0.0 ps.
[emerald:22662] *** Process received signal ***
[emerald:22662] Signal: Segmentation fault (11)
[emerald:22662] Signal code: Address not mapped (1)
[emerald:22662] Failing at address: (nil)
[emerald:22662] [ 0] /lib64/libpthread.so.0 [0x7fbc17eefa90]
[emerald:22662] [ 1]
/local1/gromacs-4.0.7-parallel/bin/mdrun(nosehoover_tcoupl+0x74) [0x436874]
[emerald:22662] [ 2]
/local1/gromacs-4.0.7-parallel/bin/mdrun(update+0x171) [0x4b2311]
[emerald:22662] [ 3]
/local1/gromacs-4.0.7-parallel/bin/mdrun(do_md+0x2608) [0x42dd38]
[emerald:22662] [ 4]
/local1/gromacs-4.0.7-parallel/bin/mdrun(mdrunner+0xe33) [0x430973]
[emerald:22662] [ 5]
/local1/gromacs-4.0.7-parallel/bin/mdrun(main+0x5b8) [0x431128]
[emerald:22662] [ 6] /lib64/libc.so.6(__libc_start_main+0xe6)
[0x7fbc17ba6586]
[emerald:22662] [ 7] /local1/gromacs-4.0.7-parallel/bin/mdrun [0x41e1e9]
[emerald:22662] *** End of error message ***
--
mpirun noticed that process rank 1 with PID 22662 on node emerald exited
on signal 11 (Segmentation fault).
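
For reference, the setup described above boils down to something like this in the
.mdp (only a sketch of the lines relevant here; everything else is left at the
values I normally use):

pbc     = no            ; box vectors in conf.gro set to 0 0 0
tcoupl  = nose-hoover   ; the thermostat implicated by the nosehoover_tcoupl frame above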

P.S. I have run several of these non-pbc simulations with the same system
in serial and have never experienced a problem. Has anyone ever come
across this sort of problem before? If so, could you please provide
some advice.

Many Thanks

Gavin
