[gmx-users] new analysis in gromacs

2014-08-24 Thread Atila Petrosian
Dear gromacs users

I want to use a new analysis tool (g_vesicle_density) with GROMACS; it is
available at this address: http://md.chem.rug.nl/~mara/softa/.

There are 2 related files (g_vesicle_density.c and gmx_vesicle_density.c).

I did the following steps to use this analysis for my system:

1) I copied the files (g_vesicle_density.c and gmx_vesicle_density.c) to
the 'src/tools' directory of GROMACS.

2) I edited 'Makefile.am' to add the tool to the list of programs to
compile.
---
New Makefile.am is as follows:

## Process this file with automake to produce Makefile.in
# Note: Makefile is automatically generated from Makefile.in by the
configure
# script, and Makefile.in is generated from Makefile.am by automake.

AM_CPPFLAGS = -I$(top_srcdir)/include -DGMXLIBDIR=\"$(datadir)/top\"

lib_LTLIBRARIES = libgmxana@LIBSUFFIX@.la

libgmxana@LIBSUFFIX@_la_LIBADD =
libgmxana@LIBSUFFIX@_la_DEPENDENCIES   =
libgmxana@LIBSUFFIX@_la_LDFLAGS= -version-info @SHARED_VERSION_INFO@


libgmxana@LIBSUFFIX@_la_SOURCES = \
        autocorr.c      expfit.c        polynomials.c   levenmar.c      \
        anadih.c        pp2shift.c      pp2shift.h      dlist.c         \
        eigio.c         lsq.c           cmat.c          cmat.h          \
        eigensolver.c   eigensolver.h   nsc.c           nsc.h           \
        hxprops.c       hxprops.h       fitahx.c        fitahx.h        \
        gmx_analyze.c   gmx_anaeig.c    gmx_angle.c     gmx_bond.c      \
        gmx_bundle.c    gmx_chi.c       gmx_cluster.c   gmx_confrms.c   \
        gmx_covar.c     gmx_current.c   \
        gmx_density.c   gmx_densmap.c   gmx_dih.c       \
        gmx_dielectric.c gmx_kinetics.c gmx_spatial.c   \
        gmx_dipoles.c   gmx_disre.c     gmx_dist.c      gmx_dyndom.c    \
        gmx_enemat.c    gmx_energy.c    gmx_lie.c       gmx_filter.c    \
        gmx_gyrate.c    gmx_h2order.c   gmx_hbond.c     gmx_helix.c     \
        gmx_mindist.c   gmx_msd.c       gmx_morph.c     gmx_nmeig.c     \
        gmx_nmens.c     gmx_order.c     gmx_principal.c \
        gmx_polystat.c  gmx_potential.c gmx_rama.c      \
        gmx_rdf.c       gmx_rms.c       gmx_rmsdist.c   gmx_rmsf.c      \
        gmx_rotacf.c    gmx_saltbr.c    gmx_sas.c       gmx_sdf.c       \
        gmx_sgangle.c   gmx_sorient.c   gmx_spol.c      gmx_tcaf.c      \
        gmx_traj.c      gmx_velacc.c    gmx_helixorient.c \
        gmx_clustsize.c gmx_mdmat.c     gmx_wham.c      eigio.h         \
        correl.c        correl.h        gmx_sham.c      gmx_nmtraj.c    \
        gmx_trjconv.c   gmx_trjcat.c    gmx_trjorder.c  gmx_xpm2ps.c    \
        gmx_editconf.c  gmx_genbox.c    gmx_genion.c    gmx_genconf.c   \
        gmx_genpr.c     gmx_eneconv.c   gmx_vanhove.c   gmx_wheel.c     \
        gmx_vesicle_density.c addconf.c addconf.h       \
        calcpot.c       calcpot.h       edittop.c

bin_PROGRAMS = \
        do_dssp         editconf        eneconv         \
        genbox          genconf         genrestr        g_nmtraj        \
        make_ndx        mk_angndx       trjcat          trjconv         \
        trjorder        wheel           xpm2ps          genion          \
        anadock         make_edi        \
        g_analyze       g_anaeig        g_angle         g_bond          \
        g_bundle        g_chi           g_cluster       g_confrms       \
        g_covar         g_current       \
        g_density       g_densmap       g_dih           \
        g_dielectric    g_helixorient   g_principal     \
        g_dipoles       g_disre         g_dist          g_dyndom        \
        g_enemat        g_energy        g_lie           g_filter        \
        g_gyrate        g_h2order       g_hbond         g_helix         \
        g_mindist       g_msd           g_morph         g_nmeig         \
        g_nmens         g_order         \
        g_polystat      g_potential     g_rama          \
        g_rdf           g_rms           g_rmsdist       g_rmsf          \
        g_rotacf        g_saltbr        g_sas           g_sgangle       \
        g_sham          g_sorient       g_spol          \
        g_sdf           g_spatial       \
        g_tcaf          g_traj          g_vanhove       g_velacc        \
        g_clustsize     g_mdmat         g_wham          g_kinetics      \
        g_vesicle_density \
        sigeps


LDADD = $(lib_LTLIBRARIES) ../mdlib/libmd@LIBSUFFIX@.la \
../gmxlib/libgmx@LIBSUFFIX@.la


# link the mpi library to non-mpi names if the latter are not present
install-exec-hook:
libname="libgmxana@LIBSUFFIX@"; \
nompi="`echo $$libname | sed -e 's,_mpi,,'`"; \
libdir="$(libdir)"; \
if echo $$libname | grep mpi >/dev/null ; then \
  (cd $$libdir && test -e $$libname.a -a ! -e $$nompi.a && $(LN_S)
$$libname.a $$nompi.a ; exit 0); \
  (cd $$libdir && test -e $$libname.so -a ! -e $$nompi.so && $(LN_S)
$$libname.so $$nompi.so ; exit 0); \
fi;

CLEANFILES   = *.la *~ \\\#*

---

Then what should I do?

Please guide me about that.

Thank you in advance.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/

[gmx-users] mdrun error

2014-08-24 Thread Lovika Moudgil
Hi everyone ,

Need some help. I have got an error in my mdrun. Up to grompp everything
was fine, but when I give the command for mdrun, it stops with this ...
Steepest Descents:
   Tolerance (Fmax)   =  1.0e+03
   Number of steps=1
Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf, atom= 19
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 1000 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
You might need to increase your constraint accuracy, or turn
off constraints altogether (set constraints = none in mdp file)

writing lowest energy coordinates.


Steepest Descents converged to machine precision in 15 steps,
but did not reach the requested Fmax < 1000.
Potential Energy  =  2.0853354e+18
Maximum force =inf on atom 19
Norm of force =  1.7429674e+18

Constraints are already none... Please help me, what should I do to solve
this?
Thanks and Regards
Lovika
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] search in gromacs mailing list by subject

2014-08-24 Thread shahab shariati
Hi

Previously, one could search the gromacs mailing list by subject. But now
this is no longer possible. Why?

Please guide me.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] [gmx-developers] About dynamics loading balance

2014-08-24 Thread Szilárd Páll
On Thu, Aug 21, 2014 at 8:25 PM, Yunlong Liu  wrote:
> Hi Roland,
>
> I just compiled the latest gromacs-5.0 version released on Jun 29th. I will
> recompile it as you suggested by using those Flags. It seems like the high
> loading imbalance doesn't affect the performance as well, which is weird.

How did you draw that conclusion? Please show us log files of the
respective runs; that will help to assess what is going on.

--
Szilárd

> Thank you.
> Yunlong
>
> On 8/21/14, 2:13 PM, Roland Schulz wrote:
>>
>> Hi,
>>
>>
>>
>> On Thu, Aug 21, 2014 at 1:56 PM, Yunlong Liu > > wrote:
>>
>> Hi Roland,
>>
>> The problem I am posting is not caused by trivial errors (like not
>> enough memory); I think it is a real bug inside the
>> gromacs-GPU support code.
>>
>> It is unlikely a trivial error because otherwise someone else would have
>> noticed. You could try the release-5-0 branch from git, but I'm not aware of
>> any bugfixes related to memory allocation.
>> The memory allocation which causes the error isn't the problem. The
>> printed size is reasonable. You could recompile with PRINT_ALLOC_KB (add
>> -DPRINT_ALLOC_KB to CMAKE_C_FLAGS) and rerun the simulation. It might tell
>> you where the usual large memory allocation happens.
>>
>> PS: Please don't reply to an individual Gromacs developer. Keep all
>> conversation on the gmx-users list.
>>
>> Roland
>>
>> That is the reason why I post this problem to the developer
>> mailing-list.
>>
>> My system contains ~240,000 atoms. It is a rather big protein. The
>> memory information of the node is :
>>
>> top - 12:46:59 up 15 days, 22:18, 1 user,  load average: 1.13,
>> 6.27, 11.28
>> Tasks: 510 total,   2 running, 508 sleeping,   0 stopped,   0 zombie
>> Cpu(s):  6.3%us,  0.0%sy,  0.0%ni, 93.7%id,  0.0%wa, 0.0%hi,
>> 0.0%si,  0.0%st
>> Mem:  32815324k total,  4983916k used, 27831408k free, 7984k
>> buffers
>> Swap:  4194296k total,0k used,  4194296k free,   700588k
>> cached
>>
>> I am running the simulation on 2 nodes, 4 MPI ranks and each rank
>> with 8 OPENMP-threads. I list the information of their CPU and GPU
>> here:
>>
>> c442-702.stampede(1)$ nvidia-smi
>> Thu Aug 21 12:46:17 2014
>> +--+
>> | NVIDIA-SMI 331.67 Driver Version: 331.67 |
>>
>> |---+--+--+
>> | GPU  NamePersistence-M| Bus-IdDisp.A | Volatile
>> Uncorr. ECC |
>> | Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util
>> Compute M. |
>>
>> |===+==+==|
>> |   0  Tesla K20m  Off  | :03:00.0 Off
>> |0 |
>> | N/A   22CP046W / 225W |172MiB /  4799MiB | 0%
>> Default |
>>
>> +---+--+--+
>>
>>
>> +-+
>> | Compute processes: GPU Memory |
>> |  GPU   PID  Process name
>> Usage  |
>>
>> |=|
>> |0113588 /work/03002/yliu120/gromacs-5/bin/mdrun_mpi 77MiB |
>> |0113589 /work/03002/yliu120/gromacs-5/bin/mdrun_mpi 77MiB |
>>
>> +-+
>>
>> c442-702.stampede(4)$ lscpu
>> Architecture:  x86_64
>> CPU op-mode(s):32-bit, 64-bit
>> Byte Order:Little Endian
>> CPU(s):16
>> On-line CPU(s) list:   0-15
>> Thread(s) per core:1
>> Core(s) per socket:8
>> Socket(s): 2
>> NUMA node(s):  2
>> Vendor ID: GenuineIntel
>> CPU family:6
>> Model: 45
>> Stepping:  7
>> CPU MHz:   2701.000
>> BogoMIPS:  5399.22
>> Virtualization:VT-x
>> L1d cache: 32K
>> L1i cache: 32K
>> L2 cache:  256K
>> L3 cache:  20480K
>> NUMA node0 CPU(s): 0-7
>> NUMA node1 CPU(s): 8-15
>>
>> I hope this information will help. Thank you.
>>
>> Yunlong
>>
>>
>>
>>
>>
>>
>> On 8/21/14, 1:38 PM, Roland Schulz wrote:
>>>
>>> Hi,
>>>
>>> please don't use gmx-developers for user questions. Feel free to
>>> use it if you want to fix the problem, and have questions about
>>> implementation details.
>>>
>>> Please provide more details: How large is your system? How much
>>> memory does a node have? On how many nodes do you try to run? How
>>> many mpi-ranks do you have per node?
>>>
>>> Roland
>>>
>>> On Thu, Aug 21, 2014 at 12:21 PM, Yunlong Liu >> 

Re: [gmx-users] build membrane system from Charmm-Gui

2014-08-24 Thread Xiang Ning
Dear Justin,





Thanks for your help! It did work when I used grompp to generate the tpr file. 
However, after obtaining the em.tpr file, I wanted to energy-minimize the 
membrane+water+ion system, so I typed "mdrun -v -deffnm em", and the resulting 
em.gro file looks really weird when viewed in VMD. In the original file the water 
is above the lipid membrane, but in em.gro the water is now in the middle of the 
two membrane leaflets, and the system has separated into 4 parts! I would like to 
know what is wrong. Is it because of the mdp file? Thanks very much for your 
assistance!

Best,
Ning

my mdp file:
; minim.mdp - used as input into grompp to generate em.tpr
; Parameters describing what to do, when to stop and what to save
integrator  = steep ; Algorithm (steep = steepest descent 
minimization)
emtol   = 1000.0; Stop minimization when the maximum force < 
1000.0 kJ/mol/nm
emstep  = 0.01  ; Energy step size
nsteps      = 5 ; Maximum number of (minimization) steps to perform
; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist = 1 ; Frequency to update the neighbor list and 
long range forces
ns_type = grid  ; Method to determine neighbor list (simple, 
grid)
rlist   = 1.2   ; Cut-off for making neighbor list (short range 
forces)
coulombtype = PME   ; Treatment of long range electrostatic 
interactions
rcoulomb= 1.2   ; Short-range electrostatic cut-off
rvdw= 1.2   ; Short-range Van der Waals cut-off
pbc = xyz   ; Periodic Boundary Conditions



On Friday, August 22, 2014 4:48 PM, Justin Lemkul  wrote:
 





On 8/22/14, 4:36 PM, Xiang Ning wrote:
> Hi All,
>
> I am using a POPC/POPG mixed membrane built by CHARMM-GUI. After removing ions 
> and water from the CHARMM-GUI pdb and saving just the membrane pdb, and using 
> this pdb with pdb2gmx to get the top file (I used charmm36), I would like to 
> know how to add water and ions back to the system. I read that the previous 
> solution was to modify [molecules] in the top file (adding the ion and water 
> information manually); is there any detailed explanation of how to do that?
>

For the coordinates, just paste the water and ion coordinates back into the 
membrane-only file.  For the topology, indeed all you need to do is modify 
[molecules] in the .top to reflect however many waters and ions there are in 
the reconstructed system.
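
A hedged illustration of what such a [ molecules ] section could look like after
reconstruction (molecule names and counts are placeholders and must match both
your topology and the order of molecules in the coordinate file):

[ molecules ]
; Compound        #mols
POPC                100
POPG                 30
SOL                5000
NA                   30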

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul


==
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] new analysis in gromacs

2014-08-24 Thread Justin Lemkul



On 8/24/14, 5:10 AM, Atila Petrosian wrote:

Dear gromacs users

I want to use a new code (g_vesicle_density) in gromacs being in this
address: http://md.chem.rug.nl/~mara/softa/.

There are 2 related files (g_vesicle_density.c and  gmx_vesicle_density.c).

I did following steps to use this analysis for my system:

1) I copied the files (g_vesicle_density.c and gmx_vesicle_density.c) to
the 'src/tools' directory of GROMACS.

2) I edited 'Makefile.am' to add the tool to the list of programs to
compile,
---
New Makefile.am is as follows:

## Process this file with automake to produce Makefile.in
# Note: Makefile is automatically generated from Makefile.in by the
configure
# script, and Makefile.in is generated from Makefile.am by automake.

AM_CPPFLAGS = -I$(top_srcdir)/include -DGMXLIBDIR=\"$(datadir)/top\"

lib_LTLIBRARIES = libgmxana@LIBSUFFIX@.la

libgmxana@LIBSUFFIX@_la_LIBADD =
libgmxana@LIBSUFFIX@_la_DEPENDENCIES   =
libgmxana@LIBSUFFIX@_la_LDFLAGS= -version-info @SHARED_VERSION_INFO@


libgmxana@LIBSUFFIX@_la_SOURCES = \
         autocorr.c      expfit.c        polynomials.c   levenmar.c      \
         anadih.c        pp2shift.c      pp2shift.h      dlist.c         \
         eigio.c         lsq.c           cmat.c          cmat.h          \
         eigensolver.c   eigensolver.h   nsc.c           nsc.h           \
         hxprops.c       hxprops.h       fitahx.c        fitahx.h        \
         gmx_analyze.c   gmx_anaeig.c    gmx_angle.c     gmx_bond.c      \
         gmx_bundle.c    gmx_chi.c       gmx_cluster.c   gmx_confrms.c   \
         gmx_covar.c     gmx_current.c   \
         gmx_density.c   gmx_densmap.c   gmx_dih.c       \
         gmx_dielectric.c gmx_kinetics.c gmx_spatial.c   \
         gmx_dipoles.c   gmx_disre.c     gmx_dist.c      gmx_dyndom.c    \
         gmx_enemat.c    gmx_energy.c    gmx_lie.c       gmx_filter.c    \
         gmx_gyrate.c    gmx_h2order.c   gmx_hbond.c     gmx_helix.c     \
         gmx_mindist.c   gmx_msd.c       gmx_morph.c     gmx_nmeig.c     \
         gmx_nmens.c     gmx_order.c     gmx_principal.c \
         gmx_polystat.c  gmx_potential.c gmx_rama.c      \
         gmx_rdf.c       gmx_rms.c       gmx_rmsdist.c   gmx_rmsf.c      \
         gmx_rotacf.c    gmx_saltbr.c    gmx_sas.c       gmx_sdf.c       \
         gmx_sgangle.c   gmx_sorient.c   gmx_spol.c      gmx_tcaf.c      \
         gmx_traj.c      gmx_velacc.c    gmx_helixorient.c \
         gmx_clustsize.c gmx_mdmat.c     gmx_wham.c      eigio.h         \
         correl.c        correl.h        gmx_sham.c      gmx_nmtraj.c    \
         gmx_trjconv.c   gmx_trjcat.c    gmx_trjorder.c  gmx_xpm2ps.c    \
         gmx_editconf.c  gmx_genbox.c    gmx_genion.c    gmx_genconf.c   \
         gmx_genpr.c     gmx_eneconv.c   gmx_vanhove.c   gmx_wheel.c     \
         gmx_vesicle_density.c addconf.c addconf.h       \
         calcpot.c       calcpot.h       edittop.c

bin_PROGRAMS = \
         do_dssp         editconf        eneconv         \
         genbox          genconf         genrestr        g_nmtraj        \
         make_ndx        mk_angndx       trjcat          trjconv         \
         trjorder        wheel           xpm2ps          genion          \
         anadock         make_edi        \
         g_analyze       g_anaeig        g_angle         g_bond          \
         g_bundle        g_chi           g_cluster       g_confrms       \
         g_covar         g_current       \
         g_density       g_densmap       g_dih           \
         g_dielectric    g_helixorient   g_principal     \
         g_dipoles       g_disre         g_dist          g_dyndom        \
         g_enemat        g_energy        g_lie           g_filter        \
         g_gyrate        g_h2order       g_hbond         g_helix         \
         g_mindist       g_msd           g_morph         g_nmeig         \
         g_nmens         g_order         \
         g_polystat      g_potential     g_rama          \
         g_rdf           g_rms           g_rmsdist       g_rmsf          \
         g_rotacf        g_saltbr        g_sas           g_sgangle       \
         g_sham          g_sorient       g_spol          \
         g_sdf           g_spatial       \
         g_tcaf          g_traj          g_vanhove       g_velacc        \
         g_clustsize     g_mdmat         g_wham          g_kinetics      \
         g_vesicle_density \
         sigeps


LDADD = $(lib_LTLIBRARIES) ../mdlib/libmd@LIBSUFFIX@.la \
 ../gmxlib/libgmx@LIBSUFFIX@.la


# link the mpi library to non-mpi names if the latter are not present
install-exec-hook:
 libname="libgmxana@LIBSUFFIX@"; \
 nompi="`echo $$libname | sed -e 's,_mpi,,'`"; \
 libdir="$(libdir)"; \
 if echo $$libname | grep mpi >/dev/null ; then \
   (cd $$libdir && test -e $$libname.a -a ! -e $$nompi.a && $(LN_S)
$$libname.a $$nompi.a ; exit 0); \
   (cd $$libdir && test -e $$libname.so -a ! -e $$nompi.so && $(LN_S)
$$libname.so $$nompi.so ; exit 0); \
 fi;

CLEANFILES   = *.la *~ \\\#*

---

Then what should I do?



Compile and install like you would any normal Gromacs installation.
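
For the autotools build used by that GROMACS series, one possible sequence is
the sketch below (the source path comes from a later message in this thread;
whether the explicit autoreconf step is needed depends on how the tree was set up):

cd /home/atila/gromacs-4.0.4
autoreconf -i        # regenerate configure/Makefile.in after editing Makefile.am
./configure
make
make install         # g_vesicle_density is built and installed with the other tools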

Re: [gmx-users] mdrun error

2014-08-24 Thread Justin Lemkul



On 8/24/14, 5:28 AM, Lovika Moudgil wrote:

Hi everyone ,

Need some help .I have got an error in my mdrun . Upto grompp every thing
was fine but when I give command for mdrun ,It stops with this ...
Steepest Descents:
Tolerance (Fmax)   =  1.0e+03
Number of steps=1
Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf, atom= 19
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 1000 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
You might need to increase your constraint accuracy, or turn
off constraints altogether (set constraints = none in mdp file)

writing lowest energy coordinates.


Steepest Descents converged to machine precision in 15 steps,
but did not reach the requested Fmax < 1000.
Potential Energy  =  2.0853354e+18
Maximum force =inf on atom 19
Norm of force =  1.7429674e+18

constraints are all ready none ...Please help me what should I do to solve
this .


You have an infinite force, so that means severe atomic clashes or a totally 
distorted geometry.  mdrun tells you atom 19 is in the neighborhood of the 
problem, so visualize and see.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] search in gromacs mailing list by subject

2014-08-24 Thread Justin Lemkul



On 8/24/14, 8:14 AM, shahab shariati wrote:

Hi

Before, one could search in gromacs mailing list by subject. But, now,
there is not this possibility. Why?



Because Google does the job without the overhead on the Gromacs site:

http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List

"Browse previous messages through the gmx-users Archives page or add 
site:https://mailman-1.sys.kth.se to your google queries."
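
For example, a query along these lines (search terms are illustrative):

site:https://mailman-1.sys.kth.se gmx-users vesicle density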


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] build membrane system from Charmm-Gui

2014-08-24 Thread Justin Lemkul



On 8/24/14, 4:36 PM, Xiang Ning wrote:

Dear Justin,





Thanks for your help! It did work when I use grompp to generate tpr file.
However, after obtaining  em.tpr file, I want energy minimization of the
membrane+water+ion system, so I typed "mdrun -v -deffnm em", and the outcome
em.gro file looks really weird by using VMD to view. The original file is
water above the lipid membrane, but now by viewing em.gro, the water is now
in the middle of two membrane leaflet, and they aparted to 4 parts! I would
like to know what is wrong? Is it because the mdp file? Thanks very much for
your assistance!



This is just a difference in convention.  CHARMM builds unit cells centered at 
the coordinate origin.  Gromacs centers at (x/2, y/2, z/2), so the molecules get 
re-wrapped due to PBC.  Re-centering the system within the box with editconf 
prior to anything else will fix the issue.
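
A minimal re-centering step might look like the following (file names are
placeholders; in GROMACS 5.0 the tool is invoked as gmx editconf):

editconf -f system_from_charmm_gui.gro -o centered.gro -c

Here -c centers the system in the existing box; the re-centered file is then
passed to grompp/mdrun as before.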


FYI We are working on a new version of CHARMM-GUI that will interface seamlessly 
with Gromacs.  It will be available soon.


-Justin


Best, Ning

my mdp file:
; minim.mdp - used as input into grompp to generate em.tpr
; Parameters describing what to do, when to stop and what to save
integrator  = steep ; Algorithm (steep = steepest descent minimization)
emtol       = 1000.0    ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep      = 0.01  ; Energy step size
nsteps      = 5 ; Maximum number of (minimization) steps to perform
; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist     = 1 ; Frequency to update the neighbor list and long range forces
ns_type     = grid  ; Method to determine neighbor list (simple, grid)
rlist       = 1.2   ; Cut-off for making neighbor list (short range forces)
coulombtype = PME   ; Treatment of long range electrostatic interactions
rcoulomb    = 1.2   ; Short-range electrostatic cut-off
rvdw        = 1.2   ; Short-range Van der Waals cut-off
pbc         = xyz   ; Periodic Boundary Conditions



On Friday, August 22, 2014 4:48 PM, Justin Lemkul  wrote:






On 8/22/14, 4:36 PM, Xiang Ning wrote:

Hi All,

I am using POPC/POPG mixed membrane built by Charmm-gui. After remove ions
and water from CHARMMGUI pdb and save just the membrane pdb, and use this
pdb with pdb2gmx to get top file (I used charmm36), then I would like to
know, how to add water and ions back to the system? I read the previous
solution was to modify [molecules] in top file (add ions and waters
information manually), are there any detailed explanation of how to do
that?



For the coordinates, just paste the water and ion coordinates back into the
membrane-only file.  For the topology, indeed all you need to do is modify
[molecules] in the .top to reflect however many waters and ions there are in
the reconstructed system.

-Justin



--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] DSSP

2014-08-24 Thread Nikolaos Michelarakis
Hello,

I am trying to do a secondary structure analysis to get the percentages of
each secondary structure in my protein. I know I can use do_dssp, but
unfortunately DSSP is not installed on the cluster that I have been using
and I do not have the access to install it. Are there any other ways to do it?
Or would anyone be able to run it for me? I need it for 3 structures, for 3
different force fields, 9 structures overall.

Thanks a lot,

Nicholas
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] mdrun error

2014-08-24 Thread Kester Wong
Dear Lovika,

Have you looked into atom 19 specifically? Perhaps changing the coordinate of
atom 19 manually and letting it do another minimisation run would solve the issue?

Regards,
Kester

- Original Message -
From : Lovika Moudgil 
To : 
Date : Aug 24, 2014 (Sun) 18:28:06
Subject : [gmx-users] mdrun error

Hi everyone,

Need some help .I have got an error in my mdrun . Upto grompp every thing
was fine but when I give command for mdrun ,It stops with this ...
Steepest Descents:
   Tolerance (Fmax)   =  1.0e+03
   Number of steps=1
Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf, atom= 19
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 1000 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
You might need to increase your constraint accuracy, or turn
off constraints altogether (set constraints = none in mdp file)

writing lowest energy coordinates.


Steepest Descents converged to machine precision in 15 steps,
but did not reach the requested Fmax < 1000.
Potential Energy  =  2.0853354e+18
Maximum force =inf on atom 19
Norm of force =  1.7429674e+18

constraints are all ready none ...Please help me what should I do to solve
this .
Thanks and Regards
Lovika
-- 
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] DSSP

2014-08-24 Thread bipin singh
You may use the precompiled dssp executables.




Thanks and Regards,
Bipin Singh



On Mon, Aug 25, 2014 at 5:31 AM, Nikolaos Michelarakis 
wrote:

> Hello,
>
> I am trying to do a secondary structure analysis to get the percentages of
> each secondary structure in my protein. I know i can use do_dssp but
> unfortunately DSSP is not installed on the cluster that i have been using
> and i do not have the acess to install it. Any other ways to do it? or
> would anyone be able to run it for me? I need it for 3 structures, for 3
> different forcefields, 9 structures overall.
>
> Thanks a lot,
>
> Nicholas
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] DSSP

2014-08-24 Thread Nikolaos Michelarakis
Hi,

Thank you for your answer, could you please give me some more info on where
to find them and how to use them for the whole trajectory?

Cheers,

Nicholas
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


gromacs.org_gmx-users@maillist.sys.kth.se

2014-08-24 Thread Theodore Si

Hi,

https://onedrive.live.com/redir?resid=990FCE59E48164A4!2572&authkey=!AP82sTNxS6MHgUk&ithint=file%2clog
https://onedrive.live.com/redir?resid=990FCE59E48164A4!2482&authkey=!APLkizOBzXtPHxs&ithint=file%2clog

These are 2 log files. The first one uses 64 CPU cores (64 / 16 = 4 nodes) and 
4 nodes * 2 = 8 GPUs; the second uses 512 CPU cores and no GPUs.
When we look at the 64-core log file, we find that in the "R E A L   C Y C L E   
A N D   T I M E   A C C O U N T I N G" table the total wall time is the sum of 
every line, that is 37.730 = 2.201 + 0.082 + ... + 1.150. So we think that while 
the CPUs are doing PME, the GPUs are doing nothing. That's why we say they are 
working sequentially.
As for the 512-core log file, the total wall time is approximately the sum of 
"PME mesh" and "PME wait for PP". We think this is because the PME-dedicated 
nodes finished early and the total wall time is the time spent on the PP nodes, 
so the time spent on PME is hidden within it.



On 8/23/2014 9:30 PM, Mark Abraham wrote:

On Sat, Aug 23, 2014 at 1:47 PM, Theodore Si  wrote:


Hi,

When we used 2 GPU nodes (each has 2 cpus and 2 gpus) to do a mdrun(with
no PME-dedicated node), we noticed that when CPU are doing PME, GPU are
idle,


That could happen if the GPU completes its work too fast, in which case the
end of the log file will probably scream about imbalance.

that is they are doing their work sequentially.


Highly unlikely, not least because the code is written to overlap the
short-range work on the GPU with everything else on the CPU. What's your
evidence for *sequential* rather than *imbalanced*?



Is it supposed to be so?


No, but without seeing your .log files, mdrun command lines and knowing
about your hardware, there's nothing we can say.



Is it the same reason as GPUs on PME-dedicated nodes won't be used during
a run like you said before?


Why would you suppose that? I said GPUs do work from the PP ranks on their
node. That's true here.

So if we want to exploit our hardware, we should map PP-PME ranks manually,
right? Say, use one node as a PME-dedicated node and leave the GPUs on that
node idle, and use two nodes to do the other stuff. What do you think about
this arrangement?


Probably a terrible idea. You should identify the cause of the imbalance,
and fix that.

Mark



Theo


On 8/22/2014 7:20 PM, Mark Abraham wrote:


Hi,

Because no work will be sent to them. The GPU implementation can
accelerate
domains from PP ranks on their node, but with an MPMD setup that uses
dedicated PME nodes, there will be no PP ranks on nodes that have been set
up with only PME ranks. The two offload models (PP work -> GPU; PME work
->
CPU subset) do not work well together, as I said.

One can devise various schemes in 4.6/5.0 that could use those GPUs, but
they either require
* each node does both PME and PP work (thus limiting scaling because of
the
all-to-all for PME, and perhaps making poor use of locality on
multi-socket
nodes), or
* that all nodes have PP ranks, but only some have PME ranks, and the
nodes
map their GPUs to PP ranks in a way that is different depending on whether
PME ranks are present (which could work well, but relies on the DD
load-balancer recognizing and taking advantage of the faster progress of
the PP ranks that have better GPU support, and requires that you get very
dirty hands laying out PP and PME ranks onto hardware that will later
match
the requirements of the DD load balancer, and probably that you balance
PP-PME load manually)

I do not recommend the last approach, because of its complexity.

Clearly there are design decisions to improve. Work is underway.

Cheers,

Mark


On Fri, Aug 22, 2014 at 10:11 AM, Theodore Si  wrote:

  Hi Mark,

Could you tell me why that when we are GPU-CPU nodes as PME-dedicated
nodes, the GPU on such nodes will be idle?


Theo

On 8/11/2014 9:36 PM, Mark Abraham wrote:

  Hi,

What Carsten said, if running on nodes that have GPUs.

If running on a mixed setup (some nodes with GPU, some not), then
arranging
your MPI environment to place PME ranks on CPU-only nodes is probably
worthwhile. For example, all your PP ranks first, mapped to GPU nodes,
then
all your PME ranks, mapped to CPU-only nodes, and then use mdrun
-ddorder
pp_pme.
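
A hedged sketch of that layout on the command line (rank counts, host file, and
binary name are placeholders):

# 12 MPI ranks; with -ddorder pp_pme the first 8 are PP ranks (placed on the
# GPU nodes via the host file) and the last 4 are PME ranks (CPU-only node)
mpirun -np 12 -hostfile hosts.txt mdrun_mpi -ntomp 8 -npme 4 -ddorder pp_pme -s topol.tpr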

Mark


On Mon, Aug 11, 2014 at 2:45 AM, Theodore Si  wrote:

   Hi Mark,


This is information of our cluster, could you give us some advice as
regards to our cluster so that we can make GMX run faster on our
system?

Each CPU node has 2 CPUs and each GPU node has 2 CPUs and 2 Nvidia K20M


Device Name | Device Type            | Specifications                                                 | Number
CPU Node    | Intel H2216JFFKR Nodes | CPU: 2×Intel Xeon E5-2670 (8 Cores, 2.6GHz, 20MB Cache, 8.0GT) | 332
            |                        | Mem: 64GB (8×8GB) ECC Registered DDR3 1600MHz Samsung Memory   |
Fat Node    | Intel H2216WPFKR Nodes | CPU: 2×Intel Xeon E5-2670 (8 Cores, 2.6GHz, 20MB Cache, 8.0GT) | 20
            |                        | Mem: 256G (16×16G) ECC Registered DDR3 1600MHz Samsung Memory  |
GPU Node    | Intel R2208GZ4GC       | CPU: 2×Intel Xeon E5-2670(

[gmx-users] PMF calculation with three reaction coordinates

2014-08-24 Thread liaoxyi
Dear gromacs users,
  I am running umbrella pulling to calculate the PMF of a soft protein adsorbed 
on a surface. Since the protein is soft (easy to deform), I defined three pull 
groups on the COMs (centers of mass) of three domains of the protein. The pullings 
are all along the z axis but start from different initial positions. So the run 
outputs three Position variables in pull-x.xvg and three Force variables in 
pull-f.xvg, as follows:
@xaxis  label "Time (ps)"
@yaxis  label "Position (nm)"
@ s0 legend "1 dZ"
@ s1 legend "2 dZ"
@ s2 legend "3 dZ"
0.0000    3.75699    3.73915    4.94193
0.5000    3.7319     3.70928    4.94322
1.0000    3.7236     3.70745    4.91479

  My problem is how to calculate the PMF with three reaction coordinates. By 
default, all pull groups found in all pullx/pullf files are used in WHAM. But, 
in this way, it disturbs the PMF curve along the z axis. In GMX 5.0, the g_wham 
-is groupsel.dat option can select which pull groups are used (see the sketch 
after the PS below). My confusions are:
1. Is it right to choose only one or two pull groups for the PMF when there are 
actually three pullings conducted?
   Obviously, the PMF curve differs a lot with one, two, or three pull groups.
2. Can the final PMF value for protein desorption be the sum of the PMFs of the 
individual pull groups?
   That is, calculate the PMF with only one pull group at a time with g_wham -is, 
then add the three PMF values together.
Which way is the right one to calculate the PMF of protein desorption?
Is there a better solution for this?
I need your advice badly.
Thank you very much!


 Mabel

PS: the pulling settings in md.mdp
; COM PULLING 
pull = umbrella
pull_geometry= distance
pull_dim = N N Y 
pull_start   = yes
pull-print-reference = no
pull_nstxout = 250
pull_nstfout = 250
pull_ngroups = 3; three pull groups, not including the absolute reference group
pull-ncoords = 3; three pull coordinates
; Group name, weight (default all 1), vector, init, rate (nm/ps), kJ/(mol*nm^2)
pull-group1-name = Backbone_&_r_1-17 ;  resid 1-17
pull-coord1-groups   = 0  1
pull-coord1-origin   = 3.3 3.2 1.0   ; half the box a, b
pull-coord1-vec  = 0.0 0.0 0.0
pull-coord1-init = 0  
pull-coord1-rate = 0.0  
pull-coord1-k= 4184  

pull-group2-name = Backbone_&_r_18-39
pull-coord2-groups   = 0  2
pull-coord2-origin   = 3.3 3.2 1.0
pull-coord2-vec  = 0.0 0.0 0.0
pull-coord2-init = 0  
pull-coord2-rate = 0.0  
pull-coord2-k= 4184  

pull-group3-name = Backbone_&_r_40-51
pull-coord3-groups   = 0  3
pull-coord3-origin   = 3.3 3.2 1.0
pull-coord3-vec  = 0.0 0.0 0.0
pull-coord3-init = 0  
pull-coord3-rate = 0.0  
pull-coord3-k= 4184  
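
For the -is option mentioned above, my understanding (an assumption; please check
g_wham -h for your version) is that groupsel.dat contains one line per pullx/pullf
(i.e. per tpr) file, with a 0/1 flag for each pull group on that line. A sketch that
selects only the first pull group in every window (repeat the line once per window),
followed by a g_wham call with placeholder file names:

1 0 0
1 0 0
1 0 0

g_wham -it tpr-files.dat -if pullf-files.dat -is groupsel.dat -o profile.xvg -hist histo.xvg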

 


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] DSSP

2014-08-24 Thread bipin singh
You may download the relevant executables from the following links:

ftp://ftp.cmbi.ru.nl/pub/software/dssp/dssp-2.0.4-linux-amd64
ftp://ftp.cmbi.ru.nl/pub/software/dssp/dssp-2.0.4-linux-i386
ftp://ftp.cmbi.ru.nl/pub/software/dssp/dssp-2.0.4-win32.exe

Before using the do_dssp module of Gromacs, export the path to the downloaded
dssp executable via the DSSP environment variable:


export DSSP=/path/to/dssp/executable


then run do_dssp as usual.
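
For a whole trajectory, a minimal invocation might then look like this (file names
are placeholders; remember to make the downloaded binary executable first):

chmod +x ~/bin/dssp-2.0.4-linux-amd64
export DSSP=~/bin/dssp-2.0.4-linux-amd64
do_dssp -f traj.xtc -s topol.tpr -o ss.xpm -sc scount.xvg

The scount.xvg output gives the number of residues in each secondary-structure
class per frame, from which percentages can be computed.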





Thanks and Regards,
Bipin Singh




On Mon, Aug 25, 2014 at 7:38 AM, Nikolaos Michelarakis 
wrote:

> Hi,
>
> Thank you for your answer, could you please give me some more info on where
> to find them and how to use them for the whole trajectory?
>
> Cheers,
>
> Nicholas
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] [gmx-developers] About dynamics loading balance

2014-08-24 Thread Yunlong Liu
Hi Szilard,

I would like to send you the log file, and I really need your help. Please trust 
me that I have tested this many times: when I turn on dlb, the GPU nodes report a 
"cannot allocate memory" error and shut all MPI processes down. I have to tolerate 
the large load imbalance (50%) to run my simulations. I wish I could figure out 
some way that lets my simulation run on GPUs with better performance.

Where can I post the log file? If I paste it here, it will be really long.

Yunlong


> On Aug 24, 2014, at 2:20 PM, "Szilárd Páll"  wrote:
> 
>> On Thu, Aug 21, 2014 at 8:25 PM, Yunlong Liu  wrote:
>> Hi Roland,
>> 
>> I just compiled the latest gromacs-5.0 version released on Jun 29th. I will
>> recompile it as you suggested by using those Flags. It seems like the high
>> loading imbalance doesn't affect the performance as well, which is weird.
> 
> How did you draw that conclusion? Please show us log files of the
> respective runs, that will help to assess what is gong on.
> 
> --
> Szilárd
> 
>> Thank you.
>> Yunlong
>> 
>>> On 8/21/14, 2:13 PM, Roland Schulz wrote:
>>> 
>>> Hi,
>>> 
>>> 
>>> 
>>> On Thu, Aug 21, 2014 at 1:56 PM, Yunlong Liu >> > wrote:
>>> 
>>>Hi Roland,
>>> 
>>>The problem I am posting is caused by trivial errors (like not
>>>enough memory) and I think it should be a real bug inside the
>>>gromacs-GPU support code.
>>> 
>>> It is unlikely a trivial error because otherwise someone else would have
>>> noticed. You could try the release-5-0 branch from git, but I'm not aware of
>>> any bugfixes related to memory allocation.
>>> The memory allocation which causes the error isn't the problem. The
>>> printed size is reasonable. You could recompile with PRINT_ALLOC_KB (add
>>> -DPRINT_ALLOC_KB to CMAKE_C_FLAGS) and rerun the simulation. It might tell
>>> you where the usual large memory allocation happens.
>>> 
>>> PS: Please don't reply to an individual Gromacs developer. Keep all
>>> conversation on the gmx-users list.
>>> 
>>> Roland
>>> 
>>>That is the reason why I post this problem to the developer
>>>mailing-list.
>>> 
>>>My system contains ~240,000 atoms. It is a rather big protein. The
>>>memory information of the node is :
>>> 
>>>top - 12:46:59 up 15 days, 22:18, 1 user,  load average: 1.13,
>>>6.27, 11.28
>>>Tasks: 510 total,   2 running, 508 sleeping,   0 stopped,   0 zombie
>>>Cpu(s):  6.3%us,  0.0%sy,  0.0%ni, 93.7%id,  0.0%wa, 0.0%hi,
>>> 0.0%si,  0.0%st
>>>Mem:  32815324k total,  4983916k used, 27831408k free, 7984k
>>>buffers
>>>Swap:  4194296k total,0k used,  4194296k free,   700588k
>>>cached
>>> 
>>>I am running the simulation on 2 nodes, 4 MPI ranks and each rank
>>>with 8 OPENMP-threads. I list the information of their CPU and GPU
>>>here:
>>> 
>>>c442-702.stampede(1)$ nvidia-smi
>>>Thu Aug 21 12:46:17 2014
>>>+--+
>>>| NVIDIA-SMI 331.67 Driver Version: 331.67 |
>>> 
>>> |---+--+--+
>>>| GPU  NamePersistence-M| Bus-IdDisp.A | Volatile
>>>Uncorr. ECC |
>>>| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util
>>> Compute M. |
>>> 
>>> |===+==+==|
>>>|   0  Tesla K20m  Off  | :03:00.0 Off
>>>|0 |
>>>| N/A   22CP046W / 225W |172MiB /  4799MiB | 0%
>>> Default |
>>> 
>>> +---+--+--+
>>> 
>>> 
>>> +-+
>>>| Compute processes: GPU Memory |
>>>|  GPU   PID  Process name
>>> Usage  |
>>> 
>>> |=|
>>>|0113588 /work/03002/yliu120/gromacs-5/bin/mdrun_mpi 77MiB |
>>>|0113589 /work/03002/yliu120/gromacs-5/bin/mdrun_mpi 77MiB |
>>> 
>>> +-+
>>> 
>>>c442-702.stampede(4)$ lscpu
>>>Architecture:  x86_64
>>>CPU op-mode(s):32-bit, 64-bit
>>>Byte Order:Little Endian
>>>CPU(s):16
>>>On-line CPU(s) list:   0-15
>>>Thread(s) per core:1
>>>Core(s) per socket:8
>>>Socket(s): 2
>>>NUMA node(s):  2
>>>Vendor ID: GenuineIntel
>>>CPU family:6
>>>Model: 45
>>>Stepping:  7
>>>CPU MHz:   2701.000
>>>BogoMIPS:  5399.22
>>>Virtualization:VT-x
>>>L1d cache: 32K
>>>L1i cache: 32K
>>>L2 cache:  256K
>>>L3 cache:  20480K
>>>NUMA node0 CPU(s): 0-7
>>>NUMA n

Re: [gmx-users] mdrun error

2014-08-24 Thread Lovika Moudgil
Thanks for the reply, Justin and Kester... Yes, my geometry is getting distorted,
and I tried to change the coordinates of atom 19 (i.e. the hydrogen), but the
error is the same.


Regards
Lovika


On Mon, Aug 25, 2014 at 7:09 AM, Kester Wong  wrote:

> Dear Lovika,
>
>
> Have you looked into atom 19 specifically? Perhaps, changing the
> coordinate of atom 19  manually, and let it do another minimisation run
> would solve the issue?
>
>
>
> Regards,
>
> Kester
>
>
> - Original Message -
>
> *From* : Lovika Moudgil 
> *To* : 
> *Date* : Aug 24, 2014 (Sun) 18:28:06
> *Subject* : [gmx-users] mdrun error
>
> Hi everyone ,
>
> Need some help .I have got an error in my mdrun . Upto grompp every thing
> was fine but when I give command for mdrun ,It stops with this ...
> Steepest Descents:
>Tolerance (Fmax)   =  1.0e+03
>Number of steps=1
> Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf, atom= 19
> Energy minimization has stopped, but the forces have not converged to the
> requested precision Fmax < 1000 (which may not be possible for your system).
> It stopped because the algorithm tried to make a new step whose size was too
> small, or there was no change in the energy since last step. Either way, we
> regard the minimization as converged to within the available machine
> precision, given your starting configuration and EM parameters.
>
> Double precision normally gives you higher accuracy, but this is often not
> needed for preparing to run molecular dynamics.
> You might need to increase your constraint accuracy, or turn
> off constraints altogether (set constraints = none in mdp file)
>
> writing lowest energy coordinates.
>
>
> Steepest Descents converged to machine precision in 15 steps,
> but did not reach the requested Fmax < 1000.
> Potential Energy  =  2.0853354e+18
> Maximum force =inf on atom 19
> Norm of force =  1.7429674e+18
>
> constraints are all ready none ...Please help me what should I do to solve
> this .
> Thanks and Regards
> Lovika
> --
> Gromacs Users mailing list
>
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
> * For (un)subscribe requests 
> visithttps://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or 
> send a mail to gmx-users-requ...@gromacs.org.
>
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] new analysis in gromacs

2014-08-24 Thread Atila Petrosian
Dear Justin

Thanks for your answer.

Unfortunately, I am not proficient with Linux. This project is important
for me.

Please guide me on how to do this properly.

How do I compile these C files?
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] mdrun error

2014-08-24 Thread Mark Abraham
So, that means either your starting configuration or your model physics
doesn't make sense. Which is it?

Mark


On Mon, Aug 25, 2014 at 6:52 AM, Lovika Moudgil 
wrote:

> Thanks for reply Justin and Kester...Ya my geometry is getting distorted
> and I tried to change the coordinates of atom 19 i.e Hydrogen ...but the
> error is same
>
>
> Regards
> Lovika
>
>
> On Mon, Aug 25, 2014 at 7:09 AM, Kester Wong  wrote:
>
> > Dear Lovika,
> >
> >
> > Have you looked into atom 19 specifically? Perhaps, changing the
> > coordinate of atom 19  manually, and let it do another minimisation run
> > would solve the issue?
> >
> >
> >
> > Regards,
> >
> > Kester
> >
> >
> > - Original Message -
> >
> > *From* : Lovika Moudgil 
> > *To* : 
> > *Date* : Aug 24, 2014 (Sun) 18:28:06
> > *Subject* : [gmx-users] mdrun error
> >
> > Hi everyone ,
> >
> > Need some help .I have got an error in my mdrun . Upto grompp every thing
> > was fine but when I give command for mdrun ,It stops with this ...
> > Steepest Descents:
> >Tolerance (Fmax)   =  1.0e+03
> >Number of steps=1
> > Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf,
> atom= 19
> > Energy minimization has stopped, but the forces have not converged to the
> > requested precision Fmax < 1000 (which may not be possible for your
> system).
> > It stopped because the algorithm tried to make a new step whose size was
> too
> > small, or there was no change in the energy since last step. Either way,
> we
> > regard the minimization as converged to within the available machine
> > precision, given your starting configuration and EM parameters.
> >
> > Double precision normally gives you higher accuracy, but this is often
> not
> > needed for preparing to run molecular dynamics.
> > You might need to increase your constraint accuracy, or turn
> > off constraints altogether (set constraints = none in mdp file)
> >
> > writing lowest energy coordinates.
> >
> >
> > Steepest Descents converged to machine precision in 15 steps,
> > but did not reach the requested Fmax < 1000.
> > Potential Energy  =  2.0853354e+18
> > Maximum force =inf on atom 19
> > Norm of force =  1.7429674e+18
> >
> > constraints are all ready none ...Please help me what should I do to
> solve
> > this .
> > Thanks and Regards
> > Lovika
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> >
> > * For (un)subscribe requests visithttps://
> maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail
> to gmx-users-requ...@gromacs.org.
> >
> >
> >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] [gmx-developers] About dynamics loading balance

2014-08-24 Thread Mark Abraham
Please upload them to a file-sharing service on the web (there are lots
that are free-to-use), and paste the link here.

Mark


On Mon, Aug 25, 2014 at 6:07 AM, Yunlong Liu  wrote:

> Hi Szilard,
>
> I would like to send you the log file and i really need your help. Please
> trust me that i have tested many times when i turned on the dlb, the gpu
> nodes reported cannot allocate memory error and shut all MPI processes
> down. I have to tolerate the large loading imbalance (50%) to run my
> simulations. I wish i can figure out some way that makes my simulation run
> on GPU and have better performance.
>
> Where can i post the log file? If i paste it here, it will be really long.
>
> Yunlong
>
>
> > On Aug 24, 2014, at 2:20 PM, "Szilárd Páll" 
> wrote:
> >
> >> On Thu, Aug 21, 2014 at 8:25 PM, Yunlong Liu  wrote:
> >> Hi Roland,
> >>
> >> I just compiled the latest gromacs-5.0 version released on Jun 29th. I
> will
> >> recompile it as you suggested by using those Flags. It seems like the
> high
> >> loading imbalance doesn't affect the performance as well, which is
> weird.
> >
> > How did you draw that conclusion? Please show us log files of the
> > respective runs, that will help to assess what is gong on.
> >
> > --
> > Szilárd
> >
> >> Thank you.
> >> Yunlong
> >>
> >>> On 8/21/14, 2:13 PM, Roland Schulz wrote:
> >>>
> >>> Hi,
> >>>
> >>>
> >>>
> >>> On Thu, Aug 21, 2014 at 1:56 PM, Yunlong Liu  >>> > wrote:
> >>>
> >>>Hi Roland,
> >>>
> >>>The problem I am posting is caused by trivial errors (like not
> >>>enough memory) and I think it should be a real bug inside the
> >>>gromacs-GPU support code.
> >>>
> >>> It is unlikely a trivial error because otherwise someone else would
> have
> >>> noticed. You could try the release-5-0 branch from git, but I'm not
> aware of
> >>> any bugfixes related to memory allocation.
> >>> The memory allocation which causes the error isn't the problem. The
> >>> printed size is reasonable. You could recompile with PRINT_ALLOC_KB
> (add
> >>> -DPRINT_ALLOC_KB to CMAKE_C_FLAGS) and rerun the simulation. It might
> tell
> >>> you where the usual large memory allocation happens.
> >>>
> >>> PS: Please don't reply to an individual Gromacs developer. Keep all
> >>> conversation on the gmx-users list.
> >>>
> >>> Roland
> >>>
> >>>That is the reason why I post this problem to the developer
> >>>mailing-list.
> >>>
> >>>My system contains ~240,000 atoms. It is a rather big protein. The
> >>>memory information of the node is :
> >>>
> >>>top - 12:46:59 up 15 days, 22:18, 1 user,  load average: 1.13,
> >>>6.27, 11.28
> >>>Tasks: 510 total,   2 running, 508 sleeping,   0 stopped,   0 zombie
> >>>Cpu(s):  6.3%us,  0.0%sy,  0.0%ni, 93.7%id,  0.0%wa, 0.0%hi,
> >>> 0.0%si,  0.0%st
> >>>Mem:  32815324k total,  4983916k used, 27831408k free, 7984k
> >>>buffers
> >>>Swap:  4194296k total,0k used,  4194296k free,   700588k
> >>>cached
> >>>
> >>>I am running the simulation on 2 nodes, 4 MPI ranks and each rank
> >>>with 8 OPENMP-threads. I list the information of their CPU and GPU
> >>>here:
> >>>
> >>>c442-702.stampede(1)$ nvidia-smi
> >>>Thu Aug 21 12:46:17 2014
> >>>+--+
> >>>| NVIDIA-SMI 331.67 Driver Version: 331.67 |
> >>>
> >>>
> |---+--+--+
> >>>| GPU  NamePersistence-M| Bus-IdDisp.A | Volatile
> >>>Uncorr. ECC |
> >>>| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util
> >>> Compute M. |
> >>>
> >>>
> |===+==+==|
> >>>|   0  Tesla K20m  Off  | :03:00.0 Off
> >>>|0 |
> >>>| N/A   22CP046W / 225W |172MiB /  4799MiB | 0%
> >>> Default |
> >>>
> >>>
> +---+--+--+
> >>>
> >>>
> >>>
> +-+
> >>>| Compute processes: GPU Memory |
> >>>|  GPU   PID  Process name
> >>> Usage  |
> >>>
> >>>
> |=|
> >>>|0113588 /work/03002/yliu120/gromacs-5/bin/mdrun_mpi 77MiB |
> >>>|0113589 /work/03002/yliu120/gromacs-5/bin/mdrun_mpi 77MiB |
> >>>
> >>>
> +-+
> >>>
> >>>c442-702.stampede(4)$ lscpu
> >>>Architecture:  x86_64
> >>>CPU op-mode(s):32-bit, 64-bit
> >>>Byte Order:Little Endian
> >>>CPU(s):16
> >>>On-line CPU(s) list:   0-15
> >>>Thread(s) per core:1
> >>>Core(s) per socket:8
> >>>Socket(s): 2
> >>>NUMA node(s):  2
> >>>Vendor ID:  

[gmx-users] new analysis in gromacs

2014-08-24 Thread Atila Petrosian
Dear Justin

In /home/atila/gromacs-4.0.4/src/tools, when I use gcc g_vesicle_density.c,

I get:

g_vesicle_density.c:26:21: error: gmx_ana.h: No such file or directory
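
That error means gcc cannot find the gmx_ana.h header on its include path: a
standalone gcc call needs the GROMACS include directory passed with -I and the
GROMACS libraries linked with -L/-l, although building through the modified
src/tools/Makefile.am as discussed earlier in this thread is usually the easier
route. A rough sketch only, with all paths and library names being assumptions
about your installation:

gcc g_vesicle_density.c gmx_vesicle_density.c \
    -I/home/atila/gromacs-4.0.4/include \
    -L/usr/local/gromacs/lib -lgmxana -lmd -lgmx -lm \
    -o g_vesicle_density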
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Missing Residues of PDB file

2014-08-24 Thread neha bharti
Hello All

I am performing a molecular dynamics simulation of a protein-ligand complex
using the charmm36 force field in a POPC lipid bilayer.

I downloaded the protein-ligand complex pdb file. And, as mentioned in
Justin A. Lemkul's protein-ligand complex tutorial, I have separated the ligand
and the protein from the pdb file.

My protein contains some missing residues, so I have homology-modeled the
protein taking the same pdb file as target and template.

At the "Building the complex" step I am using the homology-modelled
protein.

Can I add the ligand to that file as shown in the tutorial:

  163ASN  C 1691   0.621  -0.740  -0.126
  163ASN O1 1692   0.624  -0.616  -0.140
  163ASN O2 1693   0.683  -0.703  -0.011
1JZ4  C4   1   2.429  -2.412  -0.007
1JZ4  C14  2   2.392  -2.470  -0.139
1JZ4  C13  3   2.246  -2.441  -0.181
1JZ4  C12  4   2.229  -2.519  -0.308
1JZ4  C11  5   2.169  -2.646  -0.295

because due to homology modeling it is possible that the coordinates
of the protein have changed?

Or should I use the maxwarn option to skip the error message about missing
residues in the protein pdb file, with no need for homology modelling?

Please Help
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] mdrun error

2014-08-24 Thread Kester Wong
If the H-atom is a constituent of a molecule (e.g. H2O), then you could also try
moving the molecule's coordinates and see how it goes. I had a similar issue, but
moving the molecule by an angstrom worked in my case. Good luck!

Regards,
Kester

- Original Message -
From : Lovika Moudgil 
To : 
Date : Aug 25, 2014 (Mon) 13:52:49
Subject : Re: [gmx-users] mdrun error

Thanks for reply Justin and Kester...Ya my geometry is getting distorted
and I tried to change the coordinates of atom 19 i.e Hydrogen ...but the
error is same


Regards
Lovika


On Mon, Aug 25, 2014 at 7:09 AM, Kester Wong  wrote:

> Dear Lovika,
>
>
> Have you looked into atom 19 specifically? Perhaps, changing the
> coordinate of atom 19  manually, and let it do another minimisation run
> would solve the issue?
>
>
>
> Regards,
>
> Kester
>
>
> - Original Message -
>
> *From* : Lovika Moudgil 
> *To* : 
> *Date* : 24 August 2014 (Sun) 18:28:06
> *Subject* : [gmx-users] mdrun error
>
> Hi everyone ,
>
> Need some help .I have got an error in my mdrun . Upto grompp every thing
> was fine but when I give command for mdrun ,It stops with this ...
> Steepest Descents:
>Tolerance (Fmax)   =  1.0e+03
>Number of steps=1
> Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf, atom= 19
> Energy minimization has stopped, but the forces have not converged to the
> requested precision Fmax < 1000 (which may not be possible for your system).
> It stopped because the algorithm tried to make a new step whose size was too
> small, or there was no change in the energy since last step. Either way, we
> regard the minimization as converged to within the available machine
> precision, given your starting configuration and EM parameters.
>
> Double precision normally gives you higher accuracy, but this is often not
> needed for preparing to run molecular dynamics.
> You might need to increase your constraint accuracy, or turn
> off constraints altogether (set constraints = none in mdp file)
>
> writing lowest energy coordinates.
>
>
> Steepest Descents converged to machine precision in 15 steps,
> but did not reach the requested Fmax < 1000.
> Potential Energy  =  2.0853354e+18
> Maximum force =inf on atom 19
> Norm of force =  1.7429674e+18
>
> constraints are all ready none ...Please help me what should I do to solve
> this .
> Thanks and Regards
> Lovika
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
>
> * For (un)subscribe requests visithttps://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-requ...@gromacs.org.
>
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
>


Re: [gmx-users] Missing Residues of PDB file

2014-08-24 Thread RINU KHATTRI
Use the -missing flag at the end of your pdb2gmx command.
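
A minimal sketch of that (file names are illustrative; pdb2gmx will prompt for the force field, and -missing only tells it to continue when atoms are missing, so use it with care):

pdb2gmx -f protein.pdb -o protein.gro -p topol.top -water tip3p -missing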

On Mon, Aug 25, 2014 at 10:54 AM, neha bharti  wrote:
> Hello All
>
> I am performing Molecular dynamics simulation of protein-ligand complex
> using charmm36 force field in popc lipid.
>
> I downloaded the protein ligand complex pdb file. And as mentioned in
> Justin A. Lemkul protein-ligand complex tutorial I have seperate ligand and
> protein from pdb file.
>
> My protein contain some missing residues so I have homology modeled the
> protein taking the same pdb file as target and templet.
>
> At the time of Building the complex step I am using the homology modelled
> protein.
>
> Can I add the ligand in that file as shown in tutorial:
>
>   163ASN  C 1691   0.621  -0.740  -0.126
>   163ASN O1 1692   0.624  -0.616  -0.140
>   163ASN O2 1693   0.683  -0.703  -0.011
> 1JZ4  C4   1   2.429  -2.412  -0.007
> 1JZ4  C14  2   2.392  -2.470  -0.139
> 1JZ4  C13  3   2.246  -2.441  -0.181
> 1JZ4  C12  4   2.229  -2.519  -0.308
> 1JZ4  C11  5   2.169  -2.646  -0.295
>
> because due to homology modeling it might be possible that the coordinates
> of the protein will get change??
>
> Or I should use maxwarn option to avoid the error message of missing
> residues of protein pdb file and no need of homology modelling??
>
> Please Help
> --
> Gromacs Users mailing list
>
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.


gromacs.org_gmx-users@maillist.sys.kth.se

2014-08-24 Thread Mark Abraham
On Mon, Aug 25, 2014 at 5:01 AM, Theodore Si  wrote:

> Hi,
>
> https://onedrive.live.com/redir?resid=990FCE59E48164A4!
> 2572&authkey=!AP82sTNxS6MHgUk&ithint=file%2clog
> https://onedrive.live.com/redir?resid=990FCE59E48164A4!
> 2482&authkey=!APLkizOBzXtPHxs&ithint=file%2clog
>
> These are 2 log files. The first one  is using 64 cpu cores(64 / 16 = 4
> nodes) and 4nodes*2 = 8 GPUs, and the second is using 512 cpu cores, no GPU.
> When we look at the 64 cores log file, we find that in the  R E A L   C Y
> C L E   A N D   T I M E   A C C O U N T I N G table, the total wall time is
> the sum of every line, that is 37.730=2.201+0.082+...+1.150. So we think
> that when the CPUs is doing PME, GPUs are doing nothing. That's why we say
> they are working sequentially.
>

Please note that "sequential" means "one phase after another." Your log
files don't show the timing breakdown for the GPUs, which is distinct from
showing that the GPUs ran and then the CPUs ran (which I don't think the
code even permits!). References to "CUDA 8x8 kernels" do show the GPU was
active. There was an issue with mdrun not always being able to gather and
publish the GPU timing results; I don't recall the conditions (Szilard
might remember), but it might be fixed in a later release. In any case, you
should probably be doing performance optimization on a GROMACS version that
isn't a year old.

I gather that you didn't actually observe the GPUs idle - e.g. with a
performance monitoring tool? Otherwise, and in the absence of a description
of your simulation system, I'd say that log file looks somewhere between
normal and optimal. For the record, for better performance, you should
probably be following the advice of the install guide and not compiling
FFTW with AVX support, and using one of the five gcc minor versions
released since 4.4 ;-)
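
Two quick sketches of those points (versions, paths, and flags are illustrative): checking whether the GPUs really sit idle during a run, and building FFTW the way the install guide suggests, i.e. without AVX:

# watch GPU utilization once per second while mdrun is running
nvidia-smi -l 1

# single-precision FFTW with SSE2 kernels and no --enable-avx
cd fftw-3.3.4
./configure --prefix=$HOME/fftw --enable-single --enable-sse2 --enable-shared
make -j 8 && make install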

> As for the 512 cores log file, the total wall time is approximately the sum
> of PME mesh and PME wait for PP. We think this is because PME-dedicated
> nodes finished early, and the total wall time is the time spent on PP
> nodes, therefore time spent on PME is covered.


Yes, using an offload model makes it awkward to report CPU timings, because
there are two kinds of CPU ranks. The total of the "Wall t" column adds up
to twice the total time taken (which is noted explicitly in more recent
mdrun versions). By design, the PME ranks do finish early, as you know from
Figure 3.16 of the manual. As you can see in the table, the PP ranks spend
26% of their time waiting for the results from the PME ranks, and this is
the origin of the note (above the table) that you might want to balance
things better.
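
For completeness, the usual knob for that balance is the number of separate PME ranks; a hedged sketch, with rank counts that are purely illustrative and not a recommendation:

mpirun -np 64 mdrun_mpi -s md.tpr -npme 16
# g_tune_pme can also search for a good PP:PME rank split automatically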

Mark

On 8/23/2014 9:30 PM, Mark Abraham wrote:
>
>> On Sat, Aug 23, 2014 at 1:47 PM, Theodore Si  wrote:
>>
>>  Hi,
>>>
>>> When we used 2 GPU nodes (each has 2 cpus and 2 gpus) to do a mdrun(with
>>> no PME-dedicated node), we noticed that when CPU are doing PME, GPU are
>>> idle,
>>>
>>
>> That could happen if the GPU completes its work too fast, in which case
>> the
>> end of the log file will probably scream about imbalance.
>>
>> that is they are doing their work sequentially.
>>
>>
>> Highly unlikely, not least because the code is written to overlap the
>> short-range work on the GPU with everything else on the CPU. What's your
>> evidence for *sequential* rather than *imbalanced*?
>>
>>
>>  Is it supposed to be so?
>>>
>>
>> No, but without seeing your .log files, mdrun command lines and knowing
>> about your hardware, there's nothing we can say.
>>
>>
>>  Is it the same reason as GPUs on PME-dedicated nodes won't be used during
>>> a run like you said before?
>>>
>>
>> Why would you suppose that? I said GPUs do work from the PP ranks on their
>> node. That's true here.
>>
>> So if we want to exploit our hardware, we should map PP-PME ranks
>> manually,
>>
>>> right? Say, use one node as PME-dedicated node and leave the GPUs on that
>>> node idle, and use two nodes to do the other stuff. How do you think
>>> about
>>> this arrangement?
>>>
>>>  Probably a terrible idea. You should identify the cause of the
>> imbalance,
>> and fix that.
>>
>> Mark
>>
>>
>>  Theo
>>>
>>>
>>> On 8/22/2014 7:20 PM, Mark Abraham wrote:
>>>
>>>  Hi,

 Because no work will be sent to them. The GPU implementation can
 accelerate
 domains from PP ranks on their node, but with an MPMD setup that uses
 dedicated PME nodes, there will be no PP ranks on nodes that have been
 set
 up with only PME ranks. The two offload models (PP work -> GPU; PME work
 ->
 CPU subset) do not work well together, as I said.

 One can devise various schemes in 4.6/5.0 that could use those GPUs, but
 they either require
 * each node does both PME and PP work (thus limiting scaling because of
 the
 all-to-all for PME, and perhaps making poor use of locality on
 multi-socket
 nodes), or
 * that all nodes have PP ranks, but only some have PME ranks, and the
 nodes
 map their 

Re: [gmx-users] new analysis in gromacs

2014-08-24 Thread Mark Abraham
Hi,

How about using the Gromacs/tools package provided at that site? It seems
to bundle all the tools, and presumably its build system works (and if it
doesn't, please enquire first of the authors of modified versions, since
they're the ones able to fix any problems).

Mark


On Mon, Aug 25, 2014 at 7:05 AM, Atila Petrosian 
wrote:

> Dear Justin
>
> In /home/atila/gromacs-4.0.4/src/tools, when I use gcc g_vesicle_density.c
>
> I encountered with
>
> g_vesicle_density.c:26:21: error: gmx_ana.h: No such file or directory
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] DSSP

2014-08-24 Thread Mark Abraham
You could ask for it to be installed?
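
It can also usually be installed in a user directory with no special permissions; a minimal sketch, assuming a dssp binary dropped into ~/bin (paths and file names are illustrative):

export DSSP=$HOME/bin/dssp     # do_dssp locates dssp via the DSSP environment variable
do_dssp -f traj.xtc -s topol.tpr -o ss.xpm -sc scount.xvg
# scount.xvg then holds, per frame, the residue counts for each secondary-structure type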

Mark


On Mon, Aug 25, 2014 at 2:01 AM, Nikolaos Michelarakis 
wrote:

> Hello,
>
> I am trying to do a secondary structure analysis to get the percentages of
> each secondary structure in my protein. I know I can use do_dssp, but
> unfortunately DSSP is not installed on the cluster that I have been using
> and I do not have access to install it. Any other ways to do it? Or
> would anyone be able to run it for me? I need it for 3 structures, for 3
> different force fields, 9 structures overall.
>
> Thanks a lot,
>
> Nicholas
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] (no subject)

2014-08-24 Thread Balasubramanian Suriyanarayanan
thank you very much sir.

regards
Suriyanarayanan


On Sat, Aug 23, 2014 at 12:06 AM, Tsjerk Wassenaar 
wrote:

> Hi Suriyanarayanan,
>
> You can check out the WeNMR Gromacs portal.
>
> Hope it helps,
>
> Tsjerk
> On Aug 22, 2014 10:29 AM, "Balasubramanian Suriyanarayanan" <
> bsns...@gmail.com> wrote:
>
> > Dear Users,
> >
> >  generally for running a 10 ns simulation is there any online server
> > facility available. This will be helpful for people who do not have a
> > "continuous access" to the server.
> >
> > regards
> > Suriyanarayanan
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] mdrun error

2014-08-24 Thread Lovika Moudgil
I am trying to set up a system of GNPs with an amino acid and a stabilizer,
i.e. sodium citrate, and this hydrogen belongs to the sodium citrate. Moving
the molecule is not working for me!

Regards
Lovika


On Mon, Aug 25, 2014 at 10:52 AM, Kester Wong  wrote:

> If the H-atom is constituent of a molecule (e.g. H2O), then you could also
> try moving the molecule coordinates and see how it goes.
>
> I had a similar issue, but moving the molecule by an angstrom worked in my
> case. Good luck!
>
>
>
> Regards,
>
> Kester
>
>
>
> - Original Message -
>
> *From* : Lovika Moudgil 
> *To* : 
> *Date* : 25 August 2014 (Mon) 13:52:49
> *Subject* : Re: [gmx-users] mdrun error
>
> Thanks for reply Justin and Kester...Ya my geometry is getting distorted
> and I tried to change the coordinates of atom 19 i.e Hydrogen ...but the
> error is same
>
>
> Regards
> Lovika
>
>
> On Mon, Aug 25, 2014 at 7:09 AM, Kester Wong  wrote:
>
> > Dear Lovika,
> >
> >
> > Have you looked into atom 19 specifically? Perhaps, changing the
> > coordinate of atom 19  manually, and let it do another minimisation run
> > would solve the issue?
> >
> >
> >
> > Regards,
> >
> > Kester
> >
> >
> > - Original Message -
> >
> > *From* : Lovika Moudgil
> > *To* :
> > *Date* : 24 August 2014 (Sun) 18:28:06
> > *Subject* : [gmx-users] mdrun error
> >
> > Hi everyone ,
> >
> > Need some help .I have got an error in my mdrun . Upto grompp every thing
> > was fine but when I give command for mdrun ,It stops with this ...
> > Steepest Descents:
> >Tolerance (Fmax)   =  1.0e+03
> >Number of steps=1
> > Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf, atom= 19
> > Energy minimization has stopped, but the forces have not converged to the
> > requested precision Fmax < 1000 (which may not be possible for your system).
> > It stopped because the algorithm tried to make a new step whose size was too
> > small, or there was no change in the energy since last step. Either way, we
> > regard the minimization as converged to within the available machine
> > precision, given your starting configuration and EM parameters.
> >
> > Double precision normally gives you higher accuracy, but this is often not
> > needed for preparing to run molecular dynamics.
> > You might need to increase your constraint accuracy, or turn
> > off constraints altogether (set constraints = none in mdp file)
> >
> > writing lowest energy coordinates.
> >
> >
> > Steepest Descents converged to machine precision in 15 steps,
> > but did not reach the requested Fmax < 1000.
> > Potential Energy  =  2.0853354e+18
> > Maximum force =inf on atom 19
> > Norm of force =  1.7429674e+18
> >
> > constraints are all ready none ...Please help me what should I do to solve
> > this .
> > Thanks and Regards
> > Lovika
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at 
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> >
> > * For (un)subscribe requests 
> > visithttps://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or 
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> >
> >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests 
> visithttps://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or 
> send a mail to gmx-users-requ...@gromacs.org.
>
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
>


Re: [gmx-users] mdrun error

2014-08-24 Thread Mark Abraham
Hi,

Use a visualization program and see what you learn...
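
A minimal sketch of that, assuming the minimizer wrote its lowest-energy coordinates to the default confout.gro (file names are illustrative):

vmd confout.gro     # any viewer that reads .gro/.pdb will do
# locate atom serial 19 and look for overlapping or badly placed atoms nearby;
# an infinite force almost always means two atoms sit (nearly) on top of each other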

Mark


On Mon, Aug 25, 2014 at 8:26 AM, Lovika Moudgil 
wrote:

> I am  trying to have a system of  GNPs with aminoacid and stabilizer i.e
> Sodium Citrate ...and this hydrogen is of Sodium Citrate .  moving
> molecule is not working for me!!!
>
> Regards
> Lovika
>
>
> On Mon, Aug 25, 2014 at 10:52 AM, Kester Wong 
> wrote:
>
> > If the H-atom is constituent of a molecule (e.g. H2O), then you could
> also
> > try moving the molecule coordinates and see how it goes.
> >
> > I had a similar issue, but moving the molecule by an angstrom worked in
> my
> > case. Good luck!
> >
> >
> >
> > Regards,
> >
> > Kester
> >
> >
> >
> > - Original Message -
> >
> > *From* : Lovika Moudgil 
> > *To* : 
> > *Date* : 25 August 2014 (Mon) 13:52:49
> > *Subject* : Re: [gmx-users] mdrun error
> >
> > Thanks for reply Justin and Kester...Ya my geometry is getting distorted
> > and I tried to change the coordinates of atom 19 i.e Hydrogen ...but the
> > error is same
> >
> >
> > Regards
> > Lovika
> >
> >
> > On Mon, Aug 25, 2014 at 7:09 AM, Kester Wong  wrote:
> >
> > > Dear Lovika,
> > >
> > >
> > > Have you looked into atom 19 specifically? Perhaps, changing the
> > > coordinate of atom 19  manually, and let it do another minimisation run
> > > would solve the issue?
> > >
> > >
> > >
> > > Regards,
> > >
> > > Kester
> > >
> > >
> > > - Original Message -
> > >
> > > *From* : Lovika Moudgil
> > > *To* :
> > > *Date* : 24 August 2014 (Sun) 18:28:06
> > > *Subject* : [gmx-users] mdrun error
> > >
> > > Hi everyone ,
> > >
> > > Need some help .I have got an error in my mdrun . Upto grompp every
> thing
> > > was fine but when I give command for mdrun ,It stops with this ...
> > > Steepest Descents:
> > >Tolerance (Fmax)   =  1.0e+03
> > >Number of steps=1
> > > Step=   14, Dmax= 1.2e-06 nm, Epot=  2.08534e+18 Fmax= inf,
> atom= 19
> > > Energy minimization has stopped, but the forces have not converged to
> the
> > > requested precision Fmax < 1000 (which may not be possible for your
> system).
> > > It stopped because the algorithm tried to make a new step whose size
> was too
> > > small, or there was no change in the energy since last step. Either
> way, we
> > > regard the minimization as converged to within the available machine
> > > precision, given your starting configuration and EM parameters.
> > >
> > > Double precision normally gives you higher accuracy, but this is often
> not
> > > needed for preparing to run molecular dynamics.
> > > You might need to increase your constraint accuracy, or turn
> > > off constraints altogether (set constraints = none in mdp file)
> > >
> > > writing lowest energy coordinates.
> > >
> > >
> > > Steepest Descents converged to machine precision in 15 steps,
> > > but did not reach the requested Fmax < 1000.
> > > Potential Energy  =  2.0853354e+18
> > > Maximum force =inf on atom 19
> > > Norm of force =  1.7429674e+18
> > >
> > > constraints are all ready none ...Please help me what should I do to
> solve
> > > this .
> > > Thanks and Regards
> > > Lovika
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > >
> > > * For (un)subscribe requests visithttps://
> maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail
> to gmx-users-requ...@gromacs.org.
> > >
> > >
> > >
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visithttps://
> maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail
> to gmx-users-requ...@gromacs.org.
> >
> >
> >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can'