Re: [gmx-users] running g_tune_pme on stampede

2014-12-06 Thread Mark Abraham
On Sat, Dec 6, 2014 at 12:16 AM, Kevin Chen fch6...@gmail.com wrote:

 Hi,

 Has anybody tried g_tune_pme on stampede before? It appears Stampede only
 supports ibrun, but not mpirun -np-style launching. So I assume one could
 launch g_tune_pme with MPI using a command like this (without the -np option):

 ibrun g_tune_pme -s cutoff.tpr -launch


You should be trying to run mdrun from g_tune_pme in parallel, not trying
to run g_tune_pme itself in parallel. Make sure you've read g_tune_pme -h to
find out which environment variables and command-line options you should be setting.

Unfortunately, it failed. Any suggestion is welcome!


More information than "it failed" is needed to get a useful suggestion.

Mark


 Thanks in advance

 Kevin Chen







Re: [gmx-users] multinode issue

2014-12-06 Thread Éric Germaneau

Dear Mark, Dear Szilárd,

Thank you for your help.
I did try different I_MPI... options without success.
Something I can't figure out is that I can run jobs with 2 or more OpenMP threads
per MPI process, but not with just one.

It crashes with one OpenMP thread per MPI process, even if I disable I_MPI_PIN.

  Éric.


On 12/06/2014 02:54 AM, Szilárd Páll wrote:

On a second thought (and a quick googling), it _seems_ that this is an
issue caused by the following:
- the OpenMP runtime gets initialized outside mdrun and its threads
(or just the master thread) get their affinity set;
- mdrun then executes the sanity check, at which point
omp_get_num_procs() reports 1 CPU, most probably because the master
thread is bound to a single core.

This alone should not be a big deal as long as the affinity settings
get correctly overridden in mdrun. However, this can have the ugly
side-effect that, if mdrun's affinity setting gets disabled (mdrun
backs off if it detects externally set affinities, or if not all
cores/hardware threads are used), all compute threads will inherit the
previously set affinity and multiple threads will run on the same
core.

Note that this warning should typically not cause a crash, but it is
telling you that something is not quite right, so it may be best to
start with eliminating this warning (hints: I_MPI_PIN for Intel MPI,
-cc for Cray's aprun, --cpu-bind for slurm).
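
A sketch of how those controls are typically applied, reusing the mpirun command quoted further down this thread ($EXE/$INPUT are the submitter's own variables; exact option spellings vary with the MPI stack and launcher version):

  # Intel MPI: keep pinning on, one domain per rank sized by OMP_NUM_THREADS
  export I_MPI_PIN=1
  export I_MPI_PIN_DOMAIN=omp
  mpirun -np 32 -machinefile nodelist $EXE -v -deffnm $INPUT

  # Cray aprun: bind with -cc (or -cc none to leave pinning to mdrun)
  aprun -n 32 -d $OMP_NUM_THREADS -cc cpu $EXE -v -deffnm $INPUT

  # Slurm: bind with --cpu-bind (spelled --cpu_bind in older releases)
  srun -n 32 --cpu-bind=cores $EXE -v -deffnm $INPUT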

Cheers,
--
Szilárd


On Fri, Dec 5, 2014 at 7:35 PM, Szilárd Páll pall.szil...@gmail.com wrote:

I don't think this is a sysconf issue. As you seem to have 16-core (hw
thread?) nodes, it looks like sysconf returned the correct value
(16), but the OpenMP runtime actually returned 1. This typically means
that the OpenMP runtime was initialized outside mdrun and for some
reason (which I'm not sure about) it returns 1.

My guess is that your job scheduler is multi-threading aware and by
default assumes 1 core/hardware thread per rank so you may want to set
some rank depth/width option.
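
Under LSF, which this thread uses, that usually means stating explicitly how many ranks should land on each node; the directives and counts below are only illustrative:

  #BSUB -n 32                     # total MPI ranks
  #BSUB -R "span[ptile=16]"       # 16 ranks per node -> 2 nodes, one rank per core
  # For a hybrid run, start fewer ranks per node and set the OpenMP width, e.g.:
  # export OMP_NUM_THREADS=2
  mpirun -np 32 -machinefile nodelist $EXE -v -deffnm $INPUT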

--
Szilárd


On Fri, Dec 5, 2014 at 1:37 PM, Éric Germaneau german...@sjtu.edu.cn wrote:

Thank you Mark,

Yes this was the end of the log.
I tried another input and got the same issue:

Number of CPUs detected (16) does not match the number reported by
OpenMP (1).
Consider setting the launch configuration manually!
Reading file yukuntest-70K.tpr, VERSION 4.6.3 (single precision)
[16:node328] unexpected disconnect completion event from [0:node299]
Assertion failed in file ../../dapl_conn_rc.c at line 1179: 0
internal ABORT - process 16

Actually, I'm running some tests for our users; I'll talk with the admin
about how to return information
to the standard sysconf() routine in the usual way.
Thank you,

Éric.


On 12/05/2014 07:38 PM, Mark Abraham wrote:

On Fri, Dec 5, 2014 at 9:15 AM, Éric Germaneau german...@sjtu.edu.cn
wrote:


Dear all,

I use Intel MPI (impi), and when I submit a job (via LSF) to more than one node I get
the following message:

 Number of CPUs detected (16) does not match the number reported by
 OpenMP (1).


That suggests this machine has not been set up to return information to the
standard sysconf() routine in the usual way. What kind of machine is this?

 Consider setting the launch configuration manually!

 Reading file test184000atoms_verlet.tpr, VERSION 4.6.2 (single
 precision)


I hope that's just a 4.6.2-era .tpr, but nobody should be using 4.6.2
mdrun
because there was a bug in only that version affecting precisely these
kinds of issues...

 [16:node319] unexpected disconnect completion event from [11:node328]

 Assertion failed in file ../../dapl_conn_rc.c at line 1179: 0
 internal ABORT - process 16

I submit doing

 mpirun -np 32 -machinefile nodelist $EXE -v -deffnm $INPUT

The machinefile looks like this

 node328:16
 node319:16

I'm running the release 4.6.7.
I do not set anything about OpenMP for this job; I'd like to have 32 MPI
processes.

Using one node it works fine.
Any hints here?


Everything seems fine. What was the end of the .log file? Can you run
another MPI test program the same way?

Mark



   Éric.

--
Éric Germaneau (???), Specialist
Center for High Performance Computing
Shanghai Jiao Tong University
Room 205 Network Center, 800 Dongchuan Road, Shanghai 200240 China
M:german...@sjtu.edu.cn P:+86-136-4161-6480 W:http://hpc.sjtu.edu.cn

--
Éric Germaneau (???), Specialist
Center for High Performance Computing
Shanghai Jiao Tong University
Room 205 Network Center, 800 Dongchuan Road, Shanghai 200240 China
Email:german...@sjtu.edu.cn 

[gmx-users] Regarding Gromacs 5.0 parallel installation

2014-12-06 Thread Bikash Ranjan Sahoo
Dear All,
 I am facing a problem running mdrun with the -nt flag. On my
cluster I have installed GROMACS 4.5.5 and 5.0. For checking, I built 5.0
with -DGMX_DOUBLE=ON. I can run mdrun with -nt 30 in GROMACS 4.5.5,
letting mdrun use 30 CPUs, but the same command does not work in GROMACS
5.0: mdrun_d -s em.tpr -nt 30 shows an error. After careful
inspection, I found that GROMACS 5.0 is unable to access the threads
by default. I tried many different flags in each trial
installation (e.g. -DGMX_THREAD_MPI=ON, -DGMX_SHARED_THREAD=ON, or
-DGMX_FLOAT=ON -DGMX_SSE=ON), but the error says "Non-default thread
affinity set". Even during the cmake run, I got a few warnings.

The command

*cmake .. -DGMX_SHARED_THREAD=ON -DBUILD_SHARED_LIBS=ON
-DGMX_PREFER_STATIC_LIBS=ON -DGMX_DOUBLE=ON
-DCMAKE_INSTALL_PREFIX=/user1/tanpaku/bussei/bics@1986/Bikash/cmake/gro/gro5.0
-DGMX_BUILD_OWN_FFTW=ON*


CMake Warning:
  Manually-specified variables were not used by the project:
GMX_SHARED_THREAD


This means the thread sharing is not successful. How can I modify the cmake
command? On my cluster there are 288 CPUs (SGI Altix UV 100; Intel
Xeon X7542 CPUs). On the same cluster GROMACS 4.5.5 works fine, but GROMACS
5.0 does not run mdrun. Other commands like pdb2gmx, grompp,
editconf, solvate, and genion run well, but mdrun does not, as
it is unable to share the threads/nodes.

Can somebody suggest how to set the environment as far as the last line of the
error message is concerned (highlighted in red)?


GROMACS:  gmx mdrun, VERSION 5.0.2
Executable:   /user1/Bikash/gro5.0/bin/gmx
Library dir:  /user1Bikash/gro5.0/share/gromacs/top
Command line:
  mdrun_d -v -s em.tpr -nt 30


Back Off! I just backed up md.log to ./#md.log.2#

Number of hardware threads detected (288) does not match the number
reported by OpenMP (276).
Consider setting the launch configuration manually!
Reading file em.tpr, VERSION 5.0.2 (single precision)
The number of OpenMP threads was set by environment variable
OMP_NUM_THREADS to 6

Non-default thread affinity set, disabling internal thread affinity
Using 5 MPI threads
Segmentation fault


[gmx-users] A small Questions of umbrella samping

2014-12-06 Thread vg

Dear Justin

Thank you for your reply.

I have thought about the possibility you mentioned before, but some papers
confuse me.

The papers are: dx.doi.org/10.1021/ja303286e (J. Am. Chem. Soc. 2012, 134,
10959-10965) and PLOS Computational Biology (www.ploscompbiol.org), 3 January
2014, Volume 10, Issue 1, e1003417.

They use CGMD simulations to calculate the PMF of two big proteins in the
membrane, and the PMF they get also has a steep increase
at small values along the reaction coordinate. So, why?

I put these figures at:
https://t.williamgates.net/thumb-4EA4_5482C129.jpg
https://t.williamgates.net/thumb-FA03_5482C129.jpg
https://t.williamgates.net/thumb-E3CB_5482C129.jpg
https://t.williamgates.net/thumb-F117_5482C4F9.jpg

Cao




Dear GROMACS users,

I read the umbrella sampling tutorial and have a small question.

I did an umbrella sampling simulation as the tutorial shows, and the PMF image
looks like the tutorial result.
But I read other papers, such as the paper explaining g_wham (JCTC 2010, 6,
3713-3720); the PMF image there moves up immediately in Figure 12.

How could I get a PMF image like that? Do I need to take more windows at the
high-energy-barrier position, or just use some options with g_wham, or
modify the map file?

I look forward to your reply, and thank you very much.

P.S. I put the two figures in the attachment; please check them.

Yours sincerely, Cao



China, Tianjin, Nanking University
School of Physics
Ph.D.

Sent from my iPad


Re: [gmx-users] running g_tune_pme on stampede

2014-12-06 Thread Carsten Kutzner

On 06 Dec 2014, at 00:16, Kevin Chen fch6...@gmail.com wrote:

 Hi,
 
 Has anybody tried g_tune_pme on stampede before? It appears Stampede only
 supports ibrun, but not mpirun -np-style launching. So I assume one could
 launch g_tune_pme with MPI using a command like this (without the -np option):

 ibrun g_tune_pme -s cutoff.tpr -launch
Try 

export MPIRUN=ibrun
export MDRUN=$( which mdrun)
g_tune_pme -s …

Carsten
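
For a complete submission on a Slurm/ibrun system like Stampede, a job script along these lines should be close; the module name, queue, core counts, and the mdrun binary name are assumptions to adapt to the actual installation:

  #!/bin/bash
  #SBATCH -J tune_pme
  #SBATCH -N 2                    # nodes
  #SBATCH -n 32                   # total MPI tasks
  #SBATCH -p normal               # queue (assumed)
  #SBATCH -t 01:00:00

  module load gromacs             # assumed module name

  export MPIRUN=ibrun                  # g_tune_pme launches mdrun through $MPIRUN
  export MDRUN=$(which mdrun_mpi)      # assumed name of the MPI-enabled mdrun

  # g_tune_pme itself runs serially; it starts the parallel mdrun test runs.
  # If ibrun rejects the "-np 32" that g_tune_pme appends, check whether your
  # version supports -npstring none.
  g_tune_pme -np 32 -s cutoff.tpr -launch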

 
 Unfortunately, it failed. Any suggestion is welcome!
 
 Thanks in advance
 
 Kevin Chen
 
 
 
 
 
 

[gmx-users] Make DNA stay in the center of the box during Simulation

2014-12-06 Thread Hovakim Grabski
Dear GROMACS users,
I'm trying to run a simulation of a DNA (12 base pairs)
and 6 molecules of methylene blue. Is there any effective way to make the DNA
stay in the center of the box during the MD simulation?
Thanks in advance,
Hovakim


Re: [gmx-users] A small Questions of umbrella samping

2014-12-06 Thread Justin Lemkul



On 12/6/14 3:59 AM, vg wrote:


Dear Justin

Thank you for your reply.

I have thought about the possibility you mentioned before, but some papers
confuse me.

The papers are: dx.doi.org/10.1021/ja303286e (J. Am. Chem. Soc. 2012, 134,
10959-10965) and PLOS Computational Biology (www.ploscompbiol.org), 3 January
2014, Volume 10, Issue 1, e1003417.

They use CGMD simulations to calculate the PMF of two big proteins in the
membrane, and the PMF they get also has a steep increase
at small values along the reaction coordinate. So, why?

I put these figures at:
https://t.williamgates.net/thumb-4EA4_5482C129.jpg
https://t.williamgates.net/thumb-FA03_5482C129.jpg
https://t.williamgates.net/thumb-E3CB_5482C129.jpg
https://t.williamgates.net/thumb-F117_5482C4F9.jpg



Most of those images are too small to have their axes legible, but you need to 
realize that different systems have different geometries and the reaction 
coordinates appear (maybe? again, illegible in a couple cases) to be over 
different length scales.  You're comparing apples and oranges; trying to make 
one outcome look like another is unproductive.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] multinode issue

2014-12-06 Thread Mark Abraham
On Sat, Dec 6, 2014 at 9:29 AM, Éric Germaneau german...@sjtu.edu.cn
wrote:

 Dear Mark, Dear Szilárd,

 Thank you for your help.
 I did try different I_MPI... options without success.
 Something I can't figure out is that I can run jobs with 2 or more OpenMP threads
 per MPI process, but not with just one.
 It crashes with one OpenMP thread per MPI process, even if I disable
 I_MPI_PIN.


OK, well that points to something being configured incorrectly in IMPI,
rather than any of the other theories. Try OpenMPI ;-)

Mark



   Éric.




Re: [gmx-users] multinode issue

2014-12-06 Thread Éric Germaneau

Thanks, Mark, for having tried to help.

On 12/06/2014 10:08 PM, Mark Abraham wrote:

On Sat, Dec 6, 2014 at 9:29 AM, Éric Germaneau german...@sjtu.edu.cn
wrote:


Dear Mark, Dear Szilárd,

Thank you for your help.
I did try different I_MPI... options without success.
Something I can't figure out is that I can run jobs with 2 or more OpenMP threads
per MPI process, but not with just one.
It crashes with one OpenMP thread per MPI process, even if I disable
I_MPI_PIN.


OK, well that points to something being configured incorrectly in IMPI,
rather than any of the other theories. Try OpenMPI ;-)

Mark



   Éric.




[gmx-users] CNT simulation

2014-12-06 Thread Sergio Manzetti




Dear all, given the tedious process of building topologies of short finite
nanotube systems for GROMACS 4.6.5 with the GAFF force field, it would be nice
to know if someone could do the task for a moderate fee.

all the best

sergio

  


Re: [gmx-users] Regarding Gromacs 5.0 parallel installation

2014-12-06 Thread Mark Abraham
Hi,

On Sat, Dec 6, 2014 at 9:47 AM, Bikash Ranjan Sahoo 
bikash.bioinformat...@gmail.com wrote:

 Dear All,
  I am facing a problem running mdrun with the -nt flag. On my
 cluster I have installed GROMACS 4.5.5 and 5.0. For checking, I built 5.0
 with -DGMX_DOUBLE=ON. I can run mdrun with -nt 30 in GROMACS 4.5.5,
 letting mdrun use 30 CPUs, but the same command does not work in GROMACS
 5.0: mdrun_d -s em.tpr -nt 30 shows an error. After careful
 inspection, I found that GROMACS 5.0 is unable to access the threads
 by default. I tried many different flags in each trial
 installation (e.g. -DGMX_THREAD_MPI=ON, -DGMX_SHARED_THREAD=ON, or
 -DGMX_FLOAT=ON -DGMX_SSE=ON), but the error says "Non-default thread
 affinity set". Even during the cmake run, I got a few warnings.

 The command

 *cmake .. -DGMX_SHARED_THREAD=ON -DBUILD_SHARED_LIBS=ON
 -DGMX_PREFER_STATIC_LIBS=ON -DGMX_DOUBLE=ON
 -DCMAKE_INSTALL_PREFIX=/user1/tanpaku/bussei/bics@1986
 /Bikash/cmake/gro/gro5.0
 -DGMX_BUILD_OWN_FFTW=ON*


 CMake Warning:
   Manually-specified variables were not used by the project:
 GMX_SHARED_THREAD


The fact that
http://www.gromacs.org/Documentation/Installation_Instructions#compiling-with-parallelization-options
doesn't mention GMX_SHARED_THREAD is a fine clue that it is not a thing ;-)
You will get thread-MPI and OpenMP working by default if you are using a
recent compiler on a properly configured machine. Fortunately, that's what's
happening anyway if you use the above CMake command.


 This means the thread sharing is not successful. How can I modify the cmake
 command? On my cluster there are 288 CPUs (SGI Altix UV 100; Intel
 Xeon X7542 CPUs). On the same cluster GROMACS 4.5.5 works fine, but GROMACS
 5.0 does not run mdrun. Other commands like pdb2gmx, grompp,
 editconf, solvate, and genion run well, but mdrun does not, as
 it is unable to share the threads/nodes.

 Can somebody suggest how to set the environment as far as the last line of the
 error message is concerned (highlighted in red)?


 GROMACS:  gmx mdrun, VERSION 5.0.2
 Executable:   /user1/Bikash/gro5.0/bin/gmx
 Library dir:  /user1Bikash/gro5.0/share/gromacs/top
 Command line:
   mdrun_d -v -s em.tpr -nt 30


 Back Off! I just backed up md.log to ./#md.log.2#

 Number of hardware threads detected (288) does not match the number
 reported by OpenMP (276).
 Consider setting the launch configuration manually!
 Reading file em.tpr, VERSION 5.0.2 (single precision)
 The number of OpenMP threads was set by environment variable
 OMP_NUM_THREADS to 6

 Non-default thread affinity set, disabling internal thread affinity
 Using 5 MPI threads
 Segmentation fault


Something about your cluster environment is totally crazy if gcc 4.3 is
installed and one process thinks it can see all 288 hardware threads. The
thread-MPI build of GROMACS will work only on a single shared-memory node.
From Googling, I'd guess the way your Altix UV 100 is set up tries to
pretend 24 6-core nodes are a single shared-memory node, but this is being
double-crossed if some other part of your environment is setting
OMP_NUM_THREADS to 6 (which is probably the number of real cores on a
single actual node) and the OpenMP runtime is reacting to that and
subtracting off 6 cores times 2 hardware hyperthreads from 288 to get 276.
So, you should find out what is managing OMP_NUM_THREADS and do that
better. You can kind-of mimic the 4.5.5 behaviour explicitly with mdrun -nt
30 -ntomp 1, but probably that will not help. The thing causing the
segfault is probably the "let's pretend to be a shared-memory node" setup
not actually being implemented the way more recent Gromacs expects a real
shared-memory node to work.

In your case, I would read the documentation for your machine carefully,
and then either
1) turn off the "pretend to be a big shared-memory node" mode, configure
Gromacs with cmake -DGMX_MPI=on to use real MPI, and run mpirun -np 30
mdrun_mpi, or
2) configure Gromacs with cmake -DGMX_OPENMP=off, which will lead to mdrun
-nt 30 working more-or-less the way Gromacs 4.5 did, but might still
segfault depending on what was actually causing it, or
3) turn off the clever mode and do 2).

Mark
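
For concreteness, the two build-and-run paths in options 1) and 2) would look roughly like the following; install prefixes, the -j value, and binary suffixes are assumptions that depend on how the build is configured:

  # Option 1: real-MPI build, one rank per core across real nodes
  cmake .. -DGMX_MPI=on -DGMX_DOUBLE=ON -DGMX_BUILD_OWN_FFTW=ON \
        -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-5.0-mpi
  make -j 8 && make install
  mpirun -np 30 mdrun_mpi -v -s em.tpr    # binary suffixes (_d, _mpi) depend on the build

  # Option 2: thread-MPI only, OpenMP disabled, closest to the 4.5.5 behaviour
  cmake .. -DGMX_OPENMP=off -DGMX_DOUBLE=ON -DGMX_BUILD_OWN_FFTW=ON \
        -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-5.0-tmpi
  make -j 8 && make install
  mdrun_d -v -s em.tpr -nt 30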



Re: [gmx-users] Re: Make DNA stay in the center of the box during Simulation

2014-12-06 Thread Justin Lemkul



On 12/6/14 10:03 AM, Hovakim Grabski wrote:

But if I use gmx trjconv -s md_1_1.tpr -f md_1_1.xtc -o md_1_1CentNew_noPBC.xtc
-pbc mol -ur compact -center and select DNA as the center group, the DNA molecule
jumps back and forth. Is there any way I can solve it?
I use GROMACS 5.0.1.



http://www.gromacs.org/Documentation/Terminology/Periodic_Boundary_Conditions#Suggested_trjconv_workflow

In most cases, you need several rounds of trjconv, in the proper order, to get 
your system in a proper state for visualization.  With a molecule like dsDNA, it
can be "centered" even when both strands are at opposite ends of the box, because
the geometric center of the strands coincides with the geometric center of the 
box.  So extra steps need to be taken, sometimes with custom index groups.
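
For illustration, one such multi-step sequence might look like this (the intermediate file names and the exact order are only an example; the linked page explains how to choose the steps):

  # 1) make broken molecules whole
  gmx trjconv -s md_1_1.tpr -f md_1_1.xtc -o whole.xtc -pbc whole
  # 2) remove jumps across the periodic boundaries
  gmx trjconv -s md_1_1.tpr -f whole.xtc -o nojump.xtc -pbc nojump
  # 3) center on the DNA (or a custom index group) and wrap compactly
  gmx trjconv -s md_1_1.tpr -f nojump.xtc -o centered.xtc -pbc mol -ur compact -center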


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Precision of xtc-precision

2014-12-06 Thread Johnny Lu
Hi.

What is the exact precision of xtc-precision = 1000?

Does that mean the positions are accurate to 0.001 nm, or to 0.1%?

I searched the GROMACS 4.6.7 manual and gromacs.org for xtc-precision, but
didn't find an answer.

Thank you again.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.