Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-25 Thread jayant james
The energy minimization went on without any problem on 4 processors, but the
problem occurs when I perform the MD run. Also, I did not get any error
message related to LINCS, etc.
JJ

On Wed, Jun 24, 2009 at 6:53 PM, Justin A. Lemkul jalem...@vt.edu wrote:



 jayant james wrote:

 Yes my distance restraints are long because I am using FRET distances as
 distance restraints while performing MD simulations. Upon usage of this
 command

 mpirun -np 4  mdrun_mpi  -s pr -e pr -g md -o traj.trr -c pr.gro  -pd
   

 I get the following error!!   I did try giving yes after -pd but even then
 the same error message is repeated.


 Anything else printed to the screen or log file?  LINCS warnings or
 anything else?  Did energy minimization complete successfully?

 -Justin


[gmx-users] handling particle decomposition with distance restraints

2009-06-25 Thread chris.neale
Why not use the pull code? If you have to use distance restraints,
then try LAM MPI with your -pd run. We had similar error messages with
vanilla .mdp files using OpenMPI with large and complex systems that
went away when we switched to LAM MPI. Our problems disappeared in gmx
4, so we went back to OpenMPI for all systems, as that mdrun_mpi version
is faster in our hands.


I admit there is no good reason why LAM would work and OpenMPI would
not, but I have seen it happen before, so it's worth a shot.
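
For reference, a minimal sketch of what a pull-code alternative could look
like in the .mdp file for GROMACS 4.0.x. The group names, target distance,
and force constant below are illustrative placeholders (not values from this
thread), and the two groups would have to be defined in an index file passed
to grompp with -n:

pull            = umbrella     ; harmonic bias between two groups
pull_geometry   = distance
pull_ngroups    = 1
pull_group0     = TnT_site     ; assumed reference index group
pull_group1     = TnI_site     ; assumed restrained index group
pull_start      = no
pull_init1      = 3.9          ; reference distance in nm (example value)
pull_rate1      = 0.0          ; zero rate: hold the distance, do not pull
pull_k1         = 1000         ; force constant in kJ mol^-1 nm^-2 (example value)
pull_nstxout    = 1000
pull_nstfout    = 1000

As far as I remember, the 4.0 pull code uses a single reference group
(pull_group0), so it cannot reproduce an arbitrary network of pair restraints
the way [ distance_restraints ] can; for many FRET pairs the restraints are
the more natural fit.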



___
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.

Can't post? Read http://www.gromacs.org/mailing_lists/users.php


Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-25 Thread Chris Neale

Let me re-emphasize that the pull code may be a good solution for you.

As per your request, I currently use the following without any problems:

fftw 3.1.2
gromacs 3.3.1 or 4.0.4
openmpi 1.2.6

Be especially aware that openmpi 1.3.0 and 1.3.1 are broken, as I posted 
here:


http://lists.gromacs.org/pipermail/gmx-users/2009-March/040844.html


To be clear, I have never experienced any openmpi-based problems with 
any version of gromacs 4 and openmpi 1.2.6.


I posted the original notice of our problems with openmpi (1.2.1) that
were solved by using LAM here:

http://www.mail-archive.com/gmx-users@gromacs.org/msg08257.html

Chris

jayant james wrote:

Hi!
thanks for your mail. Could you please share what OS and versions of
fftw, openmpi, and gmx you are currently using?

Thank you
JJ



Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-25 Thread Chris Neale

We are running SUSE Linux

It's best to keep all of this on the mailing list in case it becomes 
useful to somebody else.



jayant james wrote:

Hi!
thanks for your mail. I have never used the pull code before, so I am a
bit apprehensive, but I do accept your suggestion and I am working on that.
By the way, what OS are you using? Is it SUSE or Fedora?

Thanks
Jayant James



[gmx-users] handling particle decomposition with distance restraints

2009-06-24 Thread jayant james
Hi!
I am performing an MPI MD run (on a quad-core system) with distance
restraints. When I execute the command below without the restraints, the MD
run is distributed over 4 nodes perfectly well, but when I incorporate the
distance restraints I hit a road block:

mpirun -np 4  mdrun_mpi  -s pr -e pr -g md -o traj.trr -c pr.gro  

I get this error message (below). My pr.mdp and distance restraints files
are given below the error message.

Question: How do I handle this situation? Do I increase the long-range
cut-off in the pr.mdp file? If you look at my distance restraints file, the
upper range of my distances is close to 9 nm!
Please guide.
Thanks
JJ
--
Back Off! I just backed up md.log to ./#md.log.6#
Reading file pr.tpr, VERSION 4.0 (single precision)

NOTE: atoms involved in distance restraints should be within the longest
cut-off distance, if this is not the case mdrun generates a fatal error, in
that case use particle decomposition (mdrun option
-pd)



WARNING: Can not write distance restraint data to energy file with domain
decomposition

---
Program mdrun_mpi, VERSION 4.0.2
Source code file: domdec.c, line: 5842

Fatal error:
There is no domain decomposition for 4 nodes that is compatible with the
given box and a minimum cell size of 9.85926 nm
Change the number of nodes or mdrun option -rdd or
-dds
Look in the log file for details on the domain
decomposition
---
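
(A back-of-the-envelope check, not from the thread: with a minimum cell size
of 9.85926 nm, a 4x1x1 domain grid would need a box edge of at least
4 x 9.86 = 39.4 nm, and a 2x2x1 grid about 2 x 9.86 = 19.7 nm along each of
two edges, which is why no 4-node domain decomposition fits a typical
solvated-protein box once the longest restraint forces cells this large.)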

*pr.mdp*

;   User spoel (236)
;   Wed Nov  3 17:12:44 1993
;   Input file
;
title   =  Yo
cpp =  /usr/bin/cpp
define  =  -DDISRES
constraints =  none
;constraint_algorithm =  lincs
;lincs_order =  4
integrator  =  md
dt  =  0.001; ps !
nsteps  =  400  ; total 2.0ns.
nstcomm =  1
nstxout =  5
nstvout =  5
nstfout =  5
nstlog  =  5
nstenergy   =  500
nstlist =  10
ns_type =  grid
rlist   =  1.0
coulombtype =  PME
rcoulomb=  1.0
vdwtype =  cut-off
rvdw=  1.4
fourierspacing  = 0.12
fourier_nx  = 0
fourier_ny  = 0
fourier_nz  = 0
pme_order   = 4
ewald_rtol  = 1e-5
optimize_fft= yes
disre   =  simple
disre_weighting =  equal
; Berendsen temperature coupling is on in two groups
Tcoupl  =  V-rescale
tc-grps =  Protein Non-Protein
tau_t   =  0.1 0.1
ref_t   =  300 300
; Energy monitoring
energygrps  = Protein Non-Protein
;tnc Non-Protein tnt NMR tni
; Pressure coupling is not on
Pcoupl  =  parrinello-rahman
tau_p   =  0.5
compressibility =  4.5e-5
ref_p   =  1.0
;simulated annealing
;Type of annealing for each temperature group (no/single/periodic)
;annealing  =   no, no, no, single, no
;
;Number of annealing points to use for specifying annealing in each group
;annealing_npoints   =  0, 0, 0, 9, 0
;
; List of times at the annealing points for each group
;annealing_time   =  0 25 50 75 100 125 150 175 200
; Temp. at each annealing point, for each group.
;annealing_temp  =  300 350 400 450 500 450 400 350 300

*distance restraints file*

[ distance_restraints ]
;  ai    aj    type  index  type'  low     up1     up2     fac
; TnT240-TnI 131, 145, 151, 160, 167  (ca+-7)
   2019  3889  1     1      1      3.91    3.91    5.31    0.574679
   2019  4056  1     2      1      4.86    4.86    6.26    0.409911
   2019  4133  1     3      1      5.69    5.69    7.09    0.457947
   2019  4207  1     4      1      6.63    6.63    8.03    0.323852
   2019  4273  1     5      1      7.14    7.14    8.54    0.294559
; TnT276-TnI 131, 145, 151, 160, 167, 5, 17, 27, 40
   2434  3889  1     6      1      1.34    1.34    2.74    4.884769
   2434  4056  1     7      1      2.13    2.13    3.53    0.523368
   2434  4133  1     8      1      3.66    3.66    5.06    0.409911
   2434  4207  1     9      1      4.48    4.48    5.88    0.547825
   2434  4273  1     10     1      5.43    5.43    6.83    0.285938
   2434  2628  1     11     1      5.89    5.89    7.29    0.241333
   2434  2719  1     12     1      4.76    4.76    6.16    0.366358
   2434  2824  1     13     1      3.81    3.81    5.21    0.644145
   2434  2972  1     14     1      3.10    3.10    4.50    0.431009
; TnT288-TnI 131, 145, 151, 160, 167, 5, 17, 27, 40
   2557  3889
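
(How these pieces fit together, as a sketch: define = -DDISRES in pr.mdp only
has an effect if the topology guards the restraints with a matching #ifdef,
and the last column, fac, scales the distance-restraint force constant
disre_fc, an .mdp option left at its default here, for each pair. A common
layout, with the file name assumed rather than taken from this thread, is

#ifdef DISRES
#include "disres.itp"
#endif

placed inside the [ moleculetype ] that contains atoms ai and aj.)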

Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-24 Thread Justin A. Lemkul




Upgrade to the latest version (4.0.5), since there have been numerous 
improvements to domain decomposition throughout the development of version 4.0.


-Justin



Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-24 Thread jayant james
I just replaced the old gmx 4.0 version with the 4.0.5 version and still get
the same problem:

NOTE: atoms involved in distance restraints should be within the longest
cut-off distance, if this is not the case mdrun generates a fatal error, in
that case use particle decomposition (mdrun option -pd)


WARNING: Can not write distance restraint data to energy file with domain
decomposition



Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-24 Thread Justin A. Lemkul





Well, does it work with -pd?  It looks like your distance restraints are
indeed quite long, so this looks like your only option.


-Justin




Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-24 Thread jayant james
Yes my distance restraints are long because I am using FRET distances as
distance restraints while performing MD simulations. Upon usage of this
command

mpirun -np 4  mdrun_mpi  -s pr -e pr -g md -o traj.trr -c pr.gro  -pd


I get the following error!!   I did try giving yes after -pd but even then
the same error message is repeated.
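
(If I recall the 4.0.x option handling correctly, -pd is a plain on/off flag,
toggled as -pd or -nopd, so nothing needs to follow it:

mpirun -np 4 mdrun_mpi -s pr -e pr -g md -o traj.trr -c pr.gro -pd

Since the backtrace below dies inside MPI_Bcast, it may also be worth trying
the same command on a single rank, mpirun -np 1, to separate the OpenMPI
problem from the particle-decomposition code path.)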

Back Off! I just backed up md.log to ./#md.log.9#
Reading file pr.tpr, VERSION 4.0 (single precision)
NNODES=4, MYRANK=1, HOSTNAME=localhost.localdomain
NODEID=1 argc=13
NODEID=3 argc=13
[localhost:17514] *** Process received signal ***
[localhost:17514] Signal: Segmentation fault (11)
[localhost:17514] Signal code: Address not mapped (1)
[localhost:17514] Failing at address: 0x134
[localhost:17514] [ 0] /lib64/libpthread.so.0 [0x38dec0f0f0]
[localhost:17514] [ 1] /lib64/libc.so.6(memcpy+0x15b) [0x38de08432b]
[localhost:17514] [ 2]
/usr/lib64/openmpi/1.2.4-gcc/libmpi.so.0(ompi_convertor_pack+0x152)
[0x3886e45392]
[localhost:17514] [ 3]
/usr/lib64/openmpi/1.2.4-gcc/openmpi/mca_btl_sm.so(mca_btl_sm_prepare_src+0x13d)
[0x7f39dd118a4d]
[localhost:17514] [ 4]
/usr/lib64/openmpi/1.2.4-gcc/openmpi/mca_pml_ob1.so(mca_pml_ob1_send_request_start_rndv+0x140)
[0x7f39dd735230]
[localhost:17514] [ 5]
/usr/lib64/openmpi/1.2.4-gcc/openmpi/mca_pml_ob1.so(mca_pml_ob1_send+0x748)
[0x7f39dd72e508]
[localhost:17514] [ 6]
/usr/lib64/openmpi/1.2.4-gcc/openmpi/mca_coll_tuned.so(ompi_coll_tuned_bcast_intra_split_bintree+0x91c)
[0x7f39dc6f735c]
[localhost:17514] [ 7]
/usr/lib64/openmpi/1.2.4-gcc/libmpi.so.0(MPI_Bcast+0x15c) [0x3886e4c40c]
[localhost:17514] [ 8] mdrun_mpi(bcast_state+0x26c) [0x56d59c]
[localhost:17514] [ 9] mdrun_mpi(mdrunner+0x1067) [0x42b807]
[localhost:17514] [10] mdrun_mpi(main+0x3b4) [0x431c34]
[localhost:17514] [11] /lib64/libc.so.6(__libc_start_main+0xe6)
[0x38de01e576]
[localhost:17514] [12] mdrun_mpi [0x413339]
[localhost:17514] *** End of error message ***
mpirun noticed that job rank 0 with PID 17514 on node localhost.localdomain
exited on signal 11 (Segmentation fault).
3 additional processes aborted (not shown)



Re: [gmx-users] handling particle decomposition with distance restraints

2009-06-24 Thread Justin A. Lemkul







Anything else printed to the screen or log file?  LINCS warnings or anything 
else?  Did energy minimization complete successfully?


-Justin

