[gmx-users] does gromacs-4.6 support intel core Quad Q9550 cpu?

2014-09-25 Thread Vedat Durmaz

hi guys,

sorry for disturbing!

the pc of a student here has ubuntu 14.04 installed on it along with the
gromacs version 4.6.5 debian/ubuntu binaries from the ubuntu repositories.

when we start mdrun, we get a german error message saying:

"ungültiger maschinenbefehl" (i.e., "illegal instruction").
when searching the internet i got the feeling that this has something to
do with the cpu.

cat /proc/cpuinfo yields (among others):

...
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni
dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave
lahf_lm dtherm tpr_shadow vnmi flexpriority
...

can anybody confirm that this cpu with the given properties/flags is not
supported by the gromacs 4.6 debian binaries? if so, would compiling it from
the sources remedy the problem? if so, which flag must be set for
building/compiling gromacs accordingly?

thanks a lot & take care

vedat

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] does gromacs-4.6 support intel core Quad Q9550 cpu?

2014-09-25 Thread Vedat Durmaz

i guess that's it, mark. thanks. following mirco's hint and checking the
log file indeed reveals that "rdtscp" was used upon compiling the binaries.

so i will compile it myself on the affected machine. you say i don't
need to set any special cmake variable? shouldn't i at least set the
"GMX_USE_RDTSCP" variable that the website you've linked below mentions? and
if so, how would i do that? like this?:

cmake ... -DGMX_USE_RDTSCP=OFF

and what about using SSE4.1? you've written: "Yes, and use the SSE4.1
speed advantages ..." do i need to set some proper cmake variable in
order to use this feature?
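for reference, a configure line combining both settings could look like this (a sketch following the GROMACS 4.6 install guide; the install prefix is a placeholder, and as Mark points out elsewhere in the thread, neither flag is needed when building on the target machine, since cmake detects both):

```
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6.5 \
         -DGMX_CPU_ACCELERATION=SSE4.1 \
         -DGMX_USE_RDTSCP=OFF
```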

vedat



Am 25.09.2014 um 15:28 schrieb Mark Abraham:
> On Thu, Sep 25, 2014 at 12:16 PM, Vedat Durmaz  wrote:
>
>> hi guys,
>>
>> sorry for disturbing!
>>
>> the pc of a student here has ubuntu 14.04 installed on it along with the
>> gromacs version 4.6.5 debian/ubuntu binaries from the ubuntu repositories.
>>
>> when we start mdrun, we get a german error message saying:
>>
>> "ungültiger maschinenbefehl" (i.e., "illegal instruction").
>> when searching the internet i got the feeling that this has something to
>> do with the cpu.
>>
>> cat /proc/cpuinfo yields (among others):
>>
>> ...
>> flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
>> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
>> nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni
>> dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave
>> lahf_lm dtherm tpr_shadow vnmi flexpriority
>> ...
>>
>> can anybody confirm that this cpu with the given properties/flag is not
>> supported by gromacs 4.6 debian binaries?
>
> That cpu does not appear to support an instruction (rdtscp) that is used by
> GROMACS in doing timing if that instruction was supported on the build
> machine (mentioned at
> http://www.gromacs.org/Documentation/Installation_Instructions_4.6#4.3.1._Portability_aspects
> ).
>
>
>> if so, would compiling it from
>> the sources remedy the problem?
>
> Yes, and use the SSE4.1 speed advantages that the cpu does have, unlike the
> least-common-denominator build that is targeted by the
> ubuntu-gromacs-package maintainer. If you want mdrun to run optimally,
> compile it on the machine you want to run it on.
>
>
>> if so, which flag must be set for
>> building/compiling gromacs accordingly?
>>
> Nothing is required if building on the target machine - GROMACS cmake
> configuration does all the required detection, but it can't detect the
> capabilities of a machine it hasn't yet seen...
>
> Mark
>
> thanks a lot & take care
>> vedat
>>



Re: [gmx-users] does gromacs-4.6 support intel core Quad Q9550 cpu?

2014-09-25 Thread Vedat Durmaz


thanks mirco,

there is indeed a log file (below). the only occurrence of "avx" is
related to the FFTW library. is that what you were talking about? and
does my cpu support avx?

however, the log file (below) also mentions the cpu feature "sse4.2"
that is obviously not supported by the intel core Quad Q9550 cpu.


>>> architecture the gromacs binary has been compiled on:
Gromacs version:VERSION 4.6.5
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled
GPU support:disabled
invsqrt routine:gmx_software_invsqrt(x)
CPU acceleration:   SSE4.1
FFT library:fftw-3.3.3-sse2-avx
Large file support: enabled
RDTSCP usage:   enabled
Built on:   Sun Dec 15 04:01:11 UTC 2013
Built by:   buildd@panlong [CMAKE]
Build OS/arch:  Linux 3.2.0-37-generic x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz
Build CPU family:   6   Model: 44   Stepping: 2
Build CPU features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3
sse4.1 sse4.2 ssse3
C compiler: /usr/bin/x86_64-linux-gnu-gcc GNU gcc-4.8.real
(Ubuntu/Linaro 4.8.2-10ubuntu1) 4.8.2
C compiler flags:   -msse4.1 -Wextra -Wno-missing-field-initializers
-Wno-sign-compare -Wall -Wno-unused -Wunused-value -Wno-unused-parameter
-Wno-array-bounds -Wno-maybe-uninitialized -Wno-strict-overflow  
-fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3
-DNDEBUG
...
...
>>> Present hardware specification:
Vendor: GenuineIntel
Brand:  Intel(R) Core(TM)2 Quad CPUQ9550  @ 2.83GHz
Family:  6  Model: 23  Stepping: 10
Features: apic clfsh cmov cx8 cx16 lahf_lm mmx msr pdcm pse sse2 sse3
sse4.1 ssse3
Acceleration most likely to fit this hardware: SSE4.1
Acceleration selected at GROMACS compile time: SSE4.1



Am 25.09.2014 um 13:36 schrieb Mirco Wahab:
> On 25.09.2014 12:16, Vedat Durmaz wrote:
>> gromacs version 4.6.5 debian/ubuntu binaries from the ubuntu
>> repositories.
>> when we start mdrun, we get a german error message saying:
>> "ungültiger maschinenbefehl" (i.e., "illegal instruction").
>> when searching the internet i got the feeling that this has something to
>> do with the cpu.
>>
>> cat /proc/cpuinfo yields (among others):
>
> Does it crash immediately or does mdrun start to write a md.log
> file before it stops?
>
> You then need the output from md.log where it says for which CPU-
> architecture the gromacs binary has been compiled (optimized)
> by the ubuntu maintainer.
>
> If the maintainer's machine supported AVX and he didn't
> explicitly select a "lower" instruction set, the binary
> will crash on any machine not supporting AVX.
>
> my € 0.02
>
> M.
>
>



Re: [gmx-users] does gromacs-4.6 support intel core Quad Q9550 cpu?

2014-09-25 Thread Vedat Durmaz
:) well, you finally convinced me. after having screwed up all my
courage, i just gave it a try:


cmake .. -DCMAKE_INSTALL_PREFIX=$path

and everything worked surprisingly fine with make. i guess i was still
traumatized by the attempt to install gromacs on some cray xc30 a couple
of months ago!
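spelled out, the out-of-source build he describes would be something like this (a sketch; the version matches the thread, `$path` is the install prefix he used, and the `-j` value is arbitrary):

```
tar xzf gromacs-4.6.5.tar.gz
cd gromacs-4.6.5 && mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$path   # cmake auto-detects SSE4.1 / rdtscp support
make -j 4 && make install
```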


vedat



Am 25.09.2014 um 18:09 schrieb Mark Abraham:

On Thu, Sep 25, 2014 at 4:19 PM, Vedat Durmaz  wrote:


i guess that's it, mark. thanks. following mirco's hint and checking the
log file indeed reveals that "rdtscp" was used upon compiling the binaries.

so i will compile it myself on the affected machine. you say i don't
need to set any special cmake variable?


"Nothing is required if building on the target machine - GROMACS cmake
configuration does all the required detection"



shouldn't i at least set the
"GMX_USE_RDTSCP" variable that the website you've linked below mentions?


Not if GROMACS cmake is doing the required detection - which it does. (We
did improve the implementation of the detection after 4.6.5, but that is an
issue only if you tried to do a source-based install of 4.6.5 or earlier
with a build host that has rdtscp and a target machine that does not - so
do the build on the target machine and stop worrying about it ;-) ).

and

if so, how would i do that? like this?:

cmake ... -DGMX_USE_RDTSCP=OFF


Yes, but you don't need it if you are building on the target machine,
because GROMACS cmake configuration does all the required detection, and
this gets turned off automatically if appropriate.



and what about using SSE4.1? you've written: "Yes, and use the SSE4.1
speed advantages ..." do i need to set some proper cmake variable in
order to use this feature?


No, if you are building on the target machine, because GROMACS cmake
configuration does all the required detection, and turns on SSE4.1 support
if that would suit the build host.

:-)

Mark



vedat



Am 25.09.2014 um 15:28 schrieb Mark Abraham:

On Thu, Sep 25, 2014 at 12:16 PM, Vedat Durmaz  wrote:


hi guys,

sorry for disturbing!

the pc of a student here has ubuntu 14.04 installed on it along with the
gromacs version 4.6.5 debian/ubuntu binaries from the ubuntu

repositories.

when we start mdrun, we get a german error message saying:

"ungültiger maschinenbefehl" (i.e., "illegal instruction").
when searching the internet i got the feeling that this has something to
do with the cpu.

cat /proc/cpuinfo yields (among others):

...
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni
dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave
lahf_lm dtherm tpr_shadow vnmi flexpriority
...

can anybody confirm that this cpu with the given properties/flag is not
supported by gromacs 4.6 debian binaries?

That cpu does not appear to support an instruction (rdtscp) that is used
by GROMACS in doing timing if that instruction was supported on the build
machine (mentioned at
http://www.gromacs.org/Documentation/Installation_Instructions_4.6#4.3.1._Portability_aspects
).



if so, would compiling it from
the sources remedy the problem?

Yes, and use the SSE4.1 speed advantages that the cpu does have, unlike
the least-common-denominator build that is targeted by the
ubuntu-gromacs-package maintainer. If you want mdrun to run optimally,
compile it on the machine you want to run it on.



if so, which flag must be set for
builing/compiling gromacs accordingly?


Nothing is required if building on the target machine - GROMACS cmake
configuration does all the required detection, but it can't detect the
capabilities of a machine it hasn't yet seen...

Mark

thanks a lot & take care

vedat






[gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-27 Thread Vedat Durmaz

hi guys,

I'm struggling with the use of diverse gromacs commands on a Cray XC30
system. actually, it's about the external tool g_mmpbsa which requires
user action during runtime. i get similar errors with other Gromacs
tools, e.g., make_ndx, though i know that it doesn't make sense to use
more than one core for make_ndx. however, g_mmpbsa (or rather the apbs
used by g_mmpbsa) is supposed to be capable of using multiple cores via
openmp. so, as long as i assign all of the 24 cores of a computing node
to one process through


aprun -n 1 ../run_mmpbsa.sh

everything works fine. user input is accepted either interactively, by
using the echo command, or through a here-document (... << EOF ...
EOF). however, as soon as I try to split the 24 cores of a node among
multiple processes using for instance


aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh

(and OMP_NUM_THREADS=8), there is neither an opportunity to supply user
input interactively, nor is input via echo or a here-document in the
script recognized. instead, i get the error


>> Source code file: .../gromacs-4.6.7/src/gmxlib/index.c, line: 1192
>> Fatal error:
>> Cannot read from input

where, according to the source code, "scanf" fails. when, for comparison
purposes, i use make_ndx and try to feed it with "q", i observe a
similar error:


>>Source code file: .../gromacs-4.6.7/src/tools/gmx_make_ndx.c, line: 1219
>>Fatal error:
>>Error reading user input

here, it's "fgets" which is malfunctioning.

does anyone have an idea what this could be caused by? what do i need to
consider/change in order to be able to start more than one process on
one computing node?


thanks in advance

vedat




Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-27 Thread Vedat Durmaz

hi mark,

many thanks. but can you be a little more precise? the author's only 
hint regarding mpi is on this site 
"http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and related to 
APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.


the error i'm observing occurs well before apbs is started.
to be honest, i can't see any link to my initial question ...






Am 27.10.2015 um 22:43 schrieb Mark Abraham:

Hi,

I think if you check out how the g_mmpbsa author intends you to use MPI
with the tool, your problem goes away.
http://rashmikumari.github.io/g_mmpbsa/Usage.html

Mark

On Tue, Oct 27, 2015 at 10:10 PM Vedat Durmaz  wrote:


hi guys,

I'm struggling with the use of diverse gromacs commands on a Cray XC30
system. actually, it's about the external tool g_mmpbsa which requires
user action during runtime. i get similar errors with other Gromacs
tools, e.g., make_ndx, though, i know that it doesn't make sense to use
more than one core for make_ndx. however, g_mmpbsa (or rather apbs used
by g_mmpbsa) is supposed to be capable of multiple cores using openmp.
so, as long as i assign all of the 24 cores of a computing node to one
process through

aprun -n 1 ../run_mmpbsa.sh

everything works fine. user input is accepted either interactively, by
using the echo command, or through a here construction (""... << EOF ...
EOF). however, as soon as I try to split the 24 cores of a node to
multiple processes (more than one) using for instance

aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh

(and OMP_NUM_THREADS=8), there is neither an occasion to feed with user
input in the interactive mode nor it is recognized through echo/here in
the script. instead, i get the error

  >> Source code file: .../gromacs-4.6.7/src/gmxlib/index.c, line: 1192
  >> Fatal error:
  >> Cannot read from input

where, according to the source code, "scanf" malfunctions. when i use,
for comparison purposes, make_ndx that i would like to feed with "q" i
observe a similar error:

  >>Source code file: .../gromacs-4.6.7/src/tools/gmx_make_ndx.c, line: 1219
  >>Fatal error:
  >>Error reading user input

here, it's "fgets" which is malfunctioning.

does anyone have an idea what this could be caused by? what do i need to
consider/change in order to be able to start more than one process on one
computing node?

thanks in advance

vedat







Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-28 Thread Vedat Durmaz



Am 27.10.2015 um 23:57 schrieb Mark Abraham:

Hi,


On Tue, Oct 27, 2015 at 11:39 PM Vedat Durmaz  wrote:


hi mark,

many thanks. but can you be a little more precise? the author's only
hint regarding mpi is on this site
"http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and related to
APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.

the error i'm observing is occurring pretty much before apbs is started.
to be honest, i can't see any link to my initial question ...


It has the sentence "Although g_mmpbsa does not support mpirun...". aprun is
a form of mpirun, so I assumed you knew that what you were trying was
actually something that could work, which would therefore have to be with
the APBS back end. The point of what it says there is that you don't run
g_mmpbsa with aprun, you tell it how to run APBS with aprun. This just
avoids the problem entirely because your redirected/interactive input goes
to a single g_mmpbsa as normal, which then launches APBS with MPI support.

Tool authors need to actively write code to be useful with MPI, so unless
you know what you are doing is supposed to work with MPI because they say
it works, don't try.

Mark


you are right. it's apbs which ought to run in parallel mode. of course,
i can set the variable 'export APBS="mpirun -np 8 apbs"' [or set 'export
OMP_NUM_THREADS=8'] if i want to split a 24-core node into, let's say, 3
independent g_mmpbsa processes. the problem is that i must start
g_mmpbsa itself with aprun (in the script run_mmpbsa.sh). i absolutely
cannot see any other way of running apbs when calling it from g_mmpbsa.
hence, i need to run


aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh

and of course i'm aware of having given 8 cores to g_mmpbsa, hoping
that it is able to read my input and to run apbs, which hopefully uses
all of the 8 cores. the user input (choosing protein, then ligand),
however, "Cannot [be] read". this issue occurs quite early in the
g_mmpbsa process and therefore has nothing to do with the apbs
functionality (whether openmp or mpi), which is launched later.


if i simulate the whole story (spreading 24 cores of a node over 3 
processes) using a bash script (instead of g_mmpbsa) which just expects 
(and prints) the two inputs during runtime and which i start three times 
on one node, everything works fine. i'm just asking myself whether 
someone knows why gromacs fails under the same conditions and whether it 
is possible to remedy that problem.
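the bash-script simulation described above can be sketched as follows (a stand-in for g_mmpbsa, not the real tool; the two selection numbers are arbitrary placeholders):

```shell
# stand-in for an interactive tool such as g_mmpbsa or make_ndx:
# read two selections (e.g. protein, then ligand) from stdin and echo them
fake_tool() {
  read first || return 1
  read second || return 1
  echo "selected: $first $second"
}

# feed the input through a here-document, as in the thread
fake_tool << EOF
1
13
EOF
# prints: selected: 1 13

# equivalently, via a pipe
printf '1\n13\n' | fake_tool
```

starting three copies of such a script on one node works outside aprun, since each copy reads its own stdin as expected; the thread's question is why the same setup fails under `aprun -n 3`.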


thanks again,

vedat







Am 27.10.2015 um 22:43 schrieb Mark Abraham:

Hi,

I think if you check out how the g_mmpbsa author intends you to use MPI
with the tool, your problem goes away.
http://rashmikumari.github.io/g_mmpbsa/Usage.html

Mark

On Tue, Oct 27, 2015 at 10:10 PM Vedat Durmaz  wrote:


hi guys,

I'm struggling with the use of diverse gromacs commands on a Cray XC30
system. actually, it's about the external tool g_mmpbsa which requires
user action during runtime. i get similar errors with other Gromacs
tools, e.g., make_ndx, though, i know that it doesn't make sense to use
more than one core for make_ndx. however, g_mmpbsa (or rather apbs used
by g_mmpbsa) is supposed to be capable of multiple cores using openmp.
so, as long as i assign all of the 24 cores of a computing node to one
process through

aprun -n 1 ../run_mmpbsa.sh

everything works fine. user input is accepted either interactively, by
using the echo command, or through a here construction (""... << EOF ...
EOF). however, as soon as I try to split the 24 cores of a node to
multiple processes (more than one) using for instance

aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh

(and OMP_NUM_THREADS=8), there is neither an occasion to feed with user
input in the interactive mode nor it is recognized through echo/here in
the script. instead, i get the error

   >> Source code file: .../gromacs-4.6.7/src/gmxlib/index.c, line: 1192
   >> Fatal error:
   >> Cannot read from input

where, according to the source code, "scanf" malfunctions. when i use,
for comparison purposes, make_ndx that i would like to feed with "q" i
observe a similar error:

   >>Source code file: .../gromacs-4.6.7/src/tools/gmx_make_ndx.c, line:

1219

   >>Fatal error:
   >>Error reading user input

here, it's "fgets" which is malfunctioning.

does anyone have an idea what this could be caused by? what do i need to
consider/change in order to be able to start more than one process on one
computing node?

thanks in advance

vedat



Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-29 Thread Vedat Durmaz


hi again,

3 answers are hidden somewhere below ..

Am 28.10.2015 um 15:45 schrieb Mark Abraham:

Hi,

On Wed, Oct 28, 2015 at 3:19 PM Vedat Durmaz  wrote:



Am 27.10.2015 um 23:57 schrieb Mark Abraham:

Hi,


On Tue, Oct 27, 2015 at 11:39 PM Vedat Durmaz  wrote:


hi mark,

many thanks. but can you be a little more precise? the author's only
hint regarding mpi is on this site
"http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and related to
APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.

the error i'm observing is occurring pretty much before apbs is started.
to be honest, i can't see any link to my initial question ...


It has the sentence "Although g_mmpbsa does not support mpirun...". aprun
is a form of mpirun, so I assumed you knew that what you were trying was
actually something that could work, which would therefore have to be with
the APBS back end. The point of what it says there is that you don't run
g_mmpbsa with aprun, you tell it how to run APBS with aprun. This just
avoids the problem entirely because your redirected/interactive input
goes to a single g_mmpbsa as normal, which then launches APBS with MPI
support.

Tool authors need to actively write code to be useful with MPI, so unless
you know what you are doing is supposed to work with MPI because they say
it works, don't try.

Mark

you are right. it's apbs which ought to run in parallel mode. of course,
i can set the variable 'export APBS="mpirun -np 8 apbs"' [or set 'export
OMP_NUM_THREADS=8'] if i want to split a 24 cores-node to let's say 3
independent g_mmpbsa processes. the problem is that i must start
g_mmpbsa itself with aprun (in the script run_mmpbsa.sh).


No. Your job runs a shell script on your compute node. It can do anything
it likes, but it would make sense to run something in parallel at some
point. You need to build a g_mmpbsa that you can just run in a shell script
that echoes in the input (try that on its own first). Then you use the
above approach so that the single process that is g_mmpbsa does the call to
aprun (which is the cray mpirun) to run APBS in MPI mode.

It is likely that even if you run g_mmpbsa with aprun and solve the input
issue somehow, the MPI runtime will refuse to start the child APBS with
aprun, because nesting is typically unsupported (and your current command
lines haven't given it enough information to do a good job even if it is
supported).


yes, i've encountered issues with nested aprun calls. so this will 
hardly work i guess.





i absolutely
cannot see any other way of running apbs when using it out of g_mmpbs.
hence, i need to run

aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh


This likely starts three copies of g_mmpbsa each of which expect terminal
input, which maybe you can teach aprun to manage, but then each g_mmpbsa
will then do its own APBS and this is completely not what you want.


hmm, to be honest, i would say this is exactly what i'm trying to
achieve, isn't it? i want 3 independent g_mmpbsa runs, each executed in
a different directory with its own APBS. by the way, altogether i have
1800 such directories, each containing a different trajectory.


if someone is ever (within the next 20 hours!) able to figure out a 
solution for this purpose, i would be absolutely pleased.




and of course i'm aware about having given 8 cores to g_mmpbsa, hoping
that it is able to read my input and to run apbs which hopefully uses
all of the 8 cores. the user input (choosing protein, then ligand),
however, "Cannot [be] read". this issue occurs quite early during the
g_mmpbsa process and therefore has nothing to do with the apbs (either
with openmp or mpi) functionality which is launched later.

if i simulate the whole story (spreading 24 cores of a node over 3
processes) using a bash script (instead of g_mmpbsa) which just expects
(and prints) the two inputs during runtime and which i start three times
on one node, everything works fine. i'm just asking myself whether
someone knows why gromacs fails under the same conditions and whether it
is possible to remedy that problem.


By the way, GROMACS isn't failing. You're using a separately provided
program, so you should really be talking to its authors for help. ;-)

mpirun -np 3 gmx_mpi make_ndx

would work fine (though not usefully), if you use the mechanisms provided
by mpirun to control how the redirection to the stdin of the child
processes should work. But handling that redirection is an issue between
you and the docs of your mpirun :-)

Mark


unfortunately, there is only very little information about stdin
redirection associated with aprun. what i've done now is modify
g_mmpbsa such that no user input is required. starting


aprun -n 3 -N 3 -cc 0-7:8-15:16-23  ../run_mmpbsa.sh

where, using the $ALPS_APP_PE variable, i successfully enter three 
direct

Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-29 Thread Vedat Durmaz


after several days of trial and error, i was told only today that our
HPC indeed has one cluster/queue (40-core SMP nodes) that does not
require the use of aprun/mprun. so, after having compiled all the tools
again on that cluster, i am finally able to execute many processes per node.


(however, we were not able to remedy the other issue regarding "aprun"
in the meantime. nevertheless, i'm fine now.)


thanks for your help guys and good evening

vedat



Am 29.10.2015 um 12:53 schrieb Rashmi:

Hi,

As written on the website, g_mmpbsa does not directly support MPI; it
does not include any code concerning OpenMP and MPI. However, we have
tried to interface with the MPI and OpenMP functionality of APBS.

One may use g_mmpbsa with MPI as follows: (1) allocate the number of
processors through the queue management system, (2) define the APBS
environment variable (export APBS="mpirun -np 8 apbs") including all
required flags, then start g_mmpbsa directly without using mpirun (or any
similar program). If the queue management system specifically requires
aprun/mpirun for the execution of a program, g_mmpbsa might not work.

One may use g_mmpbsa with OpenMP as follows: (1) allocate the number of
threads through the queue management system, (2) set the OMP_NUM_THREADS
variable to the allocated number of threads and (3) execute g_mmpbsa.

We have not tested simultaneous use of both MPI and OpenMP, so we do not
know whether it will work.
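as a job-script sketch, the MPI recipe above might look like the following (illustrative only: the input files, selection numbers and core count are placeholders, and whether mpirun may be called this way inside a job depends on the queue system, as noted):

```
# (2) tell g_mmpbsa how to launch apbs in parallel, via the APBS variable
export APBS="mpirun -np 8 apbs"
# OpenMP variant instead: export OMP_NUM_THREADS=8
# (3) start g_mmpbsa directly, not under aprun/mpirun
printf '1\n13\n' | g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx
```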

Concerning standard input for g_mmpbsa, if echo or < wrote:


hi again,

3 answers are hidden somewhere below ..


Am 28.10.2015 um 15:45 schrieb Mark Abraham:


Hi,

On Wed, Oct 28, 2015 at 3:19 PM Vedat Durmaz  wrote:



Am 27.10.2015 um 23:57 schrieb Mark Abraham:


Hi,


On Tue, Oct 27, 2015 at 11:39 PM Vedat Durmaz  wrote:

hi mark,

many thanks. but can you be a little more precise? the author's only
hint regarding mpi is on this site
"http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and related
to
APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.

the error i'm observing is occurring pretty much before apbs is
started.
to be honest, i can't see any link to my initial question ...

It has the sentence "Although g_mmpbsa does not support mpirun...". aprun
is a form of mpirun, so I assumed you knew that what you were trying was
actually something that could work, which would therefore have to be with
the APBS back end. The point of what it says there is that you don't run
g_mmpbsa with aprun, you tell it how to run APBS with aprun. This just
avoids the problem entirely because your redirected/interactive input
goes to a single g_mmpbsa as normal, which then launches APBS with MPI
support.

Tool authors need to actively write code to be useful with MPI, so unless
you know what you are doing is supposed to work with MPI because they say
it works, don't try.

Mark


you are right. it's apbs which ought to run in parallel mode. of course,
i can set the variable 'export APBS="mpirun -np 8 apbs"' [or set 'export
OMP_NUM_THREADS=8'] if i want to split a 24 cores-node to let's say 3
independent g_mmpbsa processes. the problem is that i must start
g_mmpbsa itself with aprun (in the script run_mmpbsa.sh).


No. Your job runs a shell script on your compute node. It can do anything
it likes, but it would make sense to run something in parallel at some
point. You need to build a g_mmpbsa that you can just run in a shell script
that echoes in the input (try that on its own first). Then you use the
above approach so that the single process that is g_mmpbsa does the call to
aprun (which is the cray mpirun) to run APBS in MPI mode.
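Mark's scheme can be sketched as a per-directory run script. This is only a sketch: the g_mmpbsa options and the index-group numbers are placeholders, and the APBS environment variable is the same override mechanism used in the 'export APBS=...' line mentioned earlier in this thread:

```shell
#!/bin/bash
# run_mmpbsa.sh -- ONE serial g_mmpbsa process per directory; only the
# APBS back end is launched in parallel, via aprun (the Cray MPI launcher)
export APBS="aprun -n 8 apbs"

# feed the interactive group selections on stdin instead of a terminal;
# the group numbers (1 = Protein, 13 = Ligand) are placeholders -- check
# your own index file
printf '1\n13\n' | g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx \
                            -pbsa -decomp
```

The key design point is that g_mmpbsa itself stays serial, so its stdin redirection works as usual; only the APBS child process it spawns uses MPI.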

It is likely that even if you run g_mmpbsa with aprun and solve the input
issue somehow, the MPI runtime will refuse to start the child APBS with
aprun, because nesting is typically unsupported (and your current command
lines haven't given it enough information to do a good job even if it is
supported).


yes, i've encountered issues with nested aprun calls. so this will hardly
work i guess.



i absolutely cannot see any other way of running apbs when using it out of
g_mmpbsa. hence, i need to run

aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh

This likely starts three copies of g_mmpbsa, each of which expects terminal
input, which maybe you can teach aprun to manage, but then each g_mmpbsa
will do its own APBS, and this is completely not what you want.


hmm, to be honest, i would say this is exactly what i'm trying to achieve.
isn't it? i want 3 independent g_mmpbsa runs each of which executed in
another directory with its own APBS. by the way, all together i have 1800
such directories each containing another trajectory.

if someone is ever (within the next 20 hours!) able to figure out a
solution for this purpose, i would be absolutely pleased.


and of course i'm aware about hav

[gmx-users] purpose of step pdb files during MD

2017-09-06 Thread Vedat Durmaz

hi guys,

from time to time i'm faced with GMX output files during MD called, e.g. in the 
current case:

step8164372b_n254.pdb
step8164372b_n2.pdb
step8164372c_n254.pdb
step8164372c_n2.pdb

what i know is that they are related to kind of exploding systems. however, i'm 
not really able to interpret their content. if i visualize them in VMD, i see a 
subset of my system surrounded by explicit water molecules where the two *n254* 
files contain a larger part of my fibrils (polypeptides) than the *n2* files 
which only show few atoms of one particular amino acid. but if i pick certain 
atoms of the amino acids, they are often not correctly assigned to residue 
names and the atom index shown in VMD is different from the index listed in the 
underlying gro file.

where can i find detailed information about how to interpret the names and 
contents of these files? why are exactly these atoms written to the pdb files 
and what does the file name tell me?

any hint is appreciated.

many thanks,

vedat durmaz


-- 
Vedat Durmaz
Computational Molecular Design
Zuse Institute Berlin (ZIB)
Takustrasse 7
14195 Berlin, Germany
T: +49-30-84185-139
F: +49-30-84185-107
http://www.zib.de/durmaz


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] purpose of step pdb files during MD

2017-09-07 Thread Vedat Durmaz

i really appreciate this pretty informative answer. and do you also know, what 
the infix "n254" or "n2" stands for?

what i found strange is that the error occurs after nearly 10M MD steps. in 
other copies of the same system, the error doesn't occur at all even after the 
entire predefined time span (40 ns).

it's a long fibril chain (>200 repeating units) of a 7mer polypeptide. the 
walls of the triclinic simulation box of size 10x8x70 nm have an initial 
distance of 1.5 nm to the fibril and i have chosen "comm-mode  = linear" in 
order to keep the system centered. i'm actually wondering, whether that might 
have caused the error or whether the system will for sure crash if the long 
chain changes its shape to some spherical one that doesn't fit the slim box 
anymore.

do you have any experience with that?

vedat



On 07.09.2017 at 14:26, Justin Lemkul wrote:
>
> On 9/6/17 5:55 AM, Vedat Durmaz wrote:
>> hi guys,
>>
>> from time to time i'm faced with GMX output files during MD called, e.g. in 
>> the current case:
>>
>> step8164372b_n254.pdb
>> step8164372b_n2.pdb
>> step8164372c_n254.pdb
>> step8164372c_n2.pdb
>>
>> what i know is that they are related to kind of exploding systems. however, 
>> i'm not really able to interpret their content. if i visualize them in VMD, 
>> i see a subset of my system surrounded by explicit water molecules where the 
>> two *n254* files contain a larger part of my fibrils (polypeptides) than the 
>> *n2* files which only show few atoms of one particular amino acid. but if i 
>> pick certain atoms of the amino acids, they are often not correctly assigned 
>> to residue names and the atom index shown in VMD is different from the index 
>> listed in the underlying gro file.
>>
>> where can i find detailed information about how to interpret the names and 
>> contents of these files? why are exactly these atoms written to the pdb 
>> files and what does the file name tell me?
> These files contain the atoms in a given domain (e.g. on a certain CPU 
> core) and are before (b) and after (c) applying constraints of a given 
> step.  I personally have never found them to be useful in determining 
> anything, but they are an indicator of physical instability.
>
> -Justin
>
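Justin's description of the naming scheme can be turned into a small parser. This is a sketch: the pattern is inferred from the example file names above, and `parse_step_pdb` is a hypothetical helper, not part of GROMACS:

```shell
# split a crash-dump file name into step number, stage (b = before,
# c = after applying constraints) and node/domain id
parse_step_pdb () {
    local name=$1
    if [[ $name =~ ^step([0-9]+)([bc])_n([0-9]+)\.pdb$ ]]; then
        echo "step=${BASH_REMATCH[1]} stage=${BASH_REMATCH[2]} node=${BASH_REMATCH[3]}"
    else
        echo "not a step pdb file: $name" >&2
        return 1
    fi
}

parse_step_pdb step8164372b_n254.pdb   # prints: step=8164372 stage=b node=254
```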



Re: [gmx-users] purpose of step pdb files during MD

2017-09-08 Thread Vedat Durmaz


On 07.09.2017 at 21:10, Justin Lemkul wrote:
>
> On 9/7/17 10:29 AM, Vedat Durmaz wrote:
>> i really appreciate this pretty informative answer. and do you also know, 
>> what the infix "n254" or "n2" stands for?
> Node ID.
>
>> what i found strange is that the error occurs after nearly 10M MD steps. in 
>> other copies of the same system, the error doesn't occur at all even after 
>> the entire predefined time span (40 ns).
>>
>> it's a long fibril chain (>200 repeating units) of a 7mer polypeptide. the 
>> walls of the triclinic simulation box of size 10x8x70 nm have an initial 
>> distance of 1.5 nm to the fibril and i have chosen "comm-mode  = linear" in 
>> order to keep the system centered. i'm actually wondering, whether that 
>> might have caused the error or whether the system will for sure crash if the 
>> long chain changes its shape to some spherical one that doesn't fit the slim 
>> box anymore.
>>
>> do you have any experience with that?
> Nope, sorry.  A crash after a long simulation time is very unusual and 
> hard to diagnose.  Normally things fail rather quickly.  Does your 
> GROMACS installation pass all regression tests?
>
> -Justin
>

ok. since i haven't compiled this gromacs installation i haven't seen any 
results of its test runs. anyway, i carried out the simulations again and this 
time no error occurred.

thanks again for your help!

vedat



[gmx-users] openmpi execution using sbatch & mpirun every command executed several times

2017-12-12 Thread Vedat Durmaz

hi everybody,

i'm working on an ubuntu 16.04 system with gromacs-openmpi (5.1.2) installed 
from the ubuntu repos. everything works fine when i submit job.slurm using 
sbatch where job.slurm roughly looks like this

-
#!/bin/bash -l

#SBATCH -N 1
#SBATCH -n 24
#SBATCH -t 02:00:00

cd $4

cp file1 file2
grompp ...
mpirun -np 24 mdrun_mpi ...   ### WORKS FINE
-

and the mdrun_mpi output contains the following statements:

-
Using 24 MPI processes
Using 1 OpenMP thread per MPI process

  mdrun_mpi -v -deffnm em

Number of logical cores detected (24) does not match the number reported by 
OpenMP (12).
Consider setting the launch configuration manually!

Running on 1 node with total 24 cores, 24 logical cores
-


now, i want to put the last 3 commands of job.slurm into an extra script 
(run_gmx.sh)

-
#!/bin/bash

cp file1 file2
grompp ...
mdrun_mpi ...
-

that i start with mpirun

-
#!/bin/bash -l

#SBATCH -N 1
#SBATCH -n 24
#SBATCH -t 02:00:00

cd $4
mpirun -np 24 run_gmx.sh
-

but now i get strange errors because every non-MPI program (cp, grompp) is 
executed 24 times, which creates a disaster in the file system.


if i change job.slurm to

-
#!/bin/bash -l

#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH -t 02:00:00

cd $4
mpirun run_gmx.sh
-

i get the following error in the output:

Number of logical cores detected (24) does not match the number reported by 
OpenMP (1).
Consider setting the launch configuration manually!

Running on 1 node with total 24 cores, 24 logical cores
Using 1 MPI process
Using 24 OpenMP threads

Fatal error:
Your choice of 1 MPI rank and the use of 24 total threads leads to the use of 
24 OpenMP threads, whereas we expect the optimum to be with more MPI ranks with 
1 to 6 OpenMP threads. If you want to run with this many OpenMP threads, 
specify the -ntomp option. But we suggest to increase the number of MPI ranks.


and if i use -ntomp
-
#!/bin/bash

cp file1 file2
grompp ...
mdrun_mpi -ntomp 24 ...
-

things seem to work fine but mdrun_mpi is extremely slow as if it was running 
on one core only. and that's the output:

  mdrun_mpi -ntomp 24 -v -deffnm em

Number of logical cores detected (24) does not match the number reported by 
OpenMP (1).
Consider setting the launch configuration manually!

Running on 1 node with total 24 cores, 24 logical cores

The number of OpenMP threads was set by environment variable OMP_NUM_THREADS to 
24 (and the command-line setting agreed with that)
Using 1 MPI process
Using 24 OpenMP threads

NOTE: You requested 24 OpenMP threads, whereas we expect the optimum to be with 
more MPI ranks with 1 to 6 OpenMP threads.

Non-default thread affinity set probably by the OpenMP library,
disabling internal thread affinity



what am i doing wrong? what is the proper setting for my goal? i need to use an 
extra script executed with mpirun, and i somehow need to reduce the 24 executions 
of the serial commands to just one!
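One way to keep the extra script while running the serial commands only once is to guard them by MPI rank. This is a sketch only: OMPI_COMM_WORLD_RANK assumes Open MPI (which the gromacs-openmpi package uses); other MPI runtimes export e.g. PMI_RANK instead, and the file-based barrier is deliberately crude:

```shell
#!/bin/bash
# run_gmx.sh -- started 24 times by mpirun, so only rank 0 may do the
# serial preparation steps
if [ "${OMPI_COMM_WORLD_RANK:-0}" -eq 0 ]; then
    cp file1 file2
    grompp ...          # produces topol.tpr
fi

# crude barrier: all other ranks wait until rank 0 has written the tpr
while [ ! -f topol.tpr ]; do sleep 1; done

mdrun_mpi ...
```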

any useful hint is appreciated.

take care,
vedat



[gmx-users] (cross-)compiling mdrun independently from the rest of gromacs tools

2019-03-25 Thread Vedat Durmaz

hi guys,

according to the gromacs installation instructions (i think i saw it
here:
http://manual.gromacs.org/documentation/2018/install-guide/index.html)
it could make sense to compile gromacs mdrun independently from the
other gromacs tools in order to have a cross-compiled installation
working with compiler optimization flags.

i am compiling gromacs inside a singularity container on a local machine
and running it on a cluster. it works fine, but since i want to use high
level compiler flags for simulation optimization (e.g. AVX_256) which is
not available on the login nodes where i want to (need to) run commands
such as editconf, solvate, etc., but available on the cluster nodes
where mdrun is executed, i indeed need to compile mdrun independently
from the rest. i just don't know how exactly to achieve that. currently,
i'm compiling in two stages (non-mpi and mpi version) inside the container:

GMXVSN="gromacs-2018.1" #"gromacs-5.1.2"
TOOLPATH=/opt

## first round (without MPI)
tar xzf ${GMXVSN}.tar.gz
mv ${GMXVSN} ${GMXVSN}_src
mkdir ${GMXVSN}
cd ${GMXVSN}_src
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=${TOOLPATH}/${GMXVSN}
-DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=OFF -DGMX_USE_RDTSCP=OFF
-DGMX_THREAD_MPI=OFF -DGMX_DEFAULT_SUFFIX=OFF -DGMX_SIMD=SSE2
make
make install
cd ${TOOLPATH}
rm -r ${GMXVSN}_src
mv ${GMXVSN} ${GMXVSN}_BAK    # backup install dir

## second round (with MPI)
tar xzf ${GMXVSN}.tar.gz
mv ${GMXVSN} ${GMXVSN}_src
mv ${GMXVSN}_BAK ${GMXVSN}    # restore install dir to put in mdrun_mpi
cd ${GMXVSN}_src
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=${TOOLPATH}/${GMXVSN}
-DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_USE_RDTSCP=OFF
-DGMX_THREAD_MPI=OFF -DGMX_BUILD_MDRUN_ONLY=ON -DBUILD_SHARED_LIBS=OFF
-DGMX_BINARY_SUFFIX="_mpi" -DGMX_SIMD=SSE2
make
make install
cd ${TOOLPATH}
. ${GMXVSN}/bin/GMXRC


would some expert please give me an as concrete as possible hint, where
to put which modification in order to have mdrun compiled separately
(with AVX_256) from the other tools?
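One possible modification to the script above (a sketch, untested on this setup): leave the first, tools-only stage at -DGMX_SIMD=SSE2 so that editconf, solvate etc. still run on the login nodes, and raise the SIMD level only in the second, mdrun-only stage, since -DGMX_BUILD_MDRUN_ONLY=ON already restricts that build to mdrun:

```shell
## second round (with MPI), mdrun only -- this build may use AVX_256
## because mdrun_mpi runs only on the compute nodes that support it
cmake .. -DCMAKE_INSTALL_PREFIX=${TOOLPATH}/${GMXVSN} \
    -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_USE_RDTSCP=OFF \
    -DGMX_THREAD_MPI=OFF -DGMX_BUILD_MDRUN_ONLY=ON -DBUILD_SHARED_LIBS=OFF \
    -DGMX_BINARY_SUFFIX="_mpi" -DGMX_SIMD=AVX_256
```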

many thanks in advance & best wishes

vedat



[gmx-users] Still "GB parameter(s) missing or negative for atom type 'C'" issue after 7 years

2019-08-28 Thread Vedat Durmaz

Hi everybody,


After 7 years I'm trying to do some implicit solvent simulations of
protein-ligand systems again using Gromacs/Amber and running into the
same issue as before:

GB parameter(s) missing or negative for atom type 'N3'
GB parameter(s) missing or negative for atom type 'H'
GB parameter(s) missing or negative for atom type 'CT'
GB parameter(s) missing or negative for atom type 'HP'
GB parameter(s) missing or negative for atom type 'HC'
GB parameter(s) missing or negative for atom type 'H1'
GB parameter(s) missing or negative for atom type 'S'
GB parameter(s) missing or negative for atom type 'C'
GB parameter(s) missing or negative for atom type 'O'
GB parameter(s) missing or negative for atom type 'HA'
GB parameter(s) missing or negative for atom type 'OH'
GB parameter(s) missing or negative for atom type 'HO'
GB parameter(s) missing or negative for atom type 'OS' 

Can't do GB electrostatics; the implicit_genborn_params section of the
forcefield is missing parameters for 13 atomtypes or they might be negative.

I googled the problem and found a discussion from 2012, which
surprisingly was initialized by myself running into the same issue!  xD

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2012-May/071102.html

And just as in 2012, I can get my system grompped, although these atom
types listed above are in the force field's gbsa.itp file (apart from
the ether oxygen OS, which I added manually as a copy of the carbonyl
atom type "O"). Some info:


## My top file:

#include "amber99sb-ildn.ff/forcefield.itp"
#include "ligand.itp"
#include "protein.itp"


## My atom types section at the top of the ligand.itp file:

[ atomtypes ]
;name   bond_type mass charge   ptype   sigma
epsilon   Amb
 CT   CT  0.0  0.0   A 3.39967e-01   4.57730e-01
; 1.91  0.1094
 HP   HP  0.0  0.0   A 1.95998e-01   6.56888e-02
; 1.10  0.0157
 H1   H1  0.0  0.0   A 2.47135e-01   6.56888e-02
; 1.39  0.0157
 HC   HC  0.0  0.0   A 2.64953e-01   6.56888e-02
; 1.49  0.0157
 S    S   0.0  0.0   A 3.56359e-01   1.04600e+00
; 2.00  0.2500
 C    C   0.0  0.0   A 3.39967e-01   3.59824e-01
; 1.91  0.0860
 O    O   0.0  0.0   A 2.95992e-01   8.78640e-01
; 1.66  0.2100
 N3   N3  0.0  0.0   A 3.25000e-01   7.11280e-01
; 1.82  0.1700
 H    H   0.0  0.0   A 1.06908e-01   6.56888e-02
; 0.60  0.0157
 N    N   0.0  0.0   A 3.25000e-01   7.11280e-01
; 1.82  0.1700
 O2   O2  0.0  0.0   A 2.95992e-01   8.78640e-01
; 1.66  0.2100
 CD   CD  0.0  0.0   A 3.39967e-01   3.59824e-01
; 1.91  0.0860
 OH   OH  0.0  0.0   A 3.06647e-01   8.80314e-01
; 1.72  0.2104
 HO   HO  0.0  0.0   A 0.0e+00   0.0e+00
; 0.00  0.
 CM   CM  0.0  0.0   A 3.39967e-01   3.59824e-01
; 1.91  0.0860
 HA   HA  0.0  0.0   A 2.59964e-01   6.27600e-02
; 1.46  0.0150
 OS   OS  0.0  0.0   A 3.1e-01   7.11280e-01
; 1.68  0.1700

Did ever anyone find a solution for that issue?

Many thanks!

Vedat



Re: [gmx-users] Still "GB parameter(s) missing or negative for atom type 'C'" issue after 7 years

2019-08-28 Thread Vedat Durmaz
Sorry, typo: "I can NOT get my system grompped" ...


On 28.08.19 at 12:03, Vedat Durmaz wrote:
> Hi everybody,
>
>
> After 7 years I'm trying to do some implicit solvent simulations of
> protein-ligand systems again using Gromacs/Amber and running into the
> same issue as before:
>
> GB parameter(s) missing or negative for atom type 'N3'
> GB parameter(s) missing or negative for atom type 'H'
> GB parameter(s) missing or negative for atom type 'CT'
> GB parameter(s) missing or negative for atom type 'HP'
> GB parameter(s) missing or negative for atom type 'HC'
> GB parameter(s) missing or negative for atom type 'H1'
> GB parameter(s) missing or negative for atom type 'S'
> GB parameter(s) missing or negative for atom type 'C'
> GB parameter(s) missing or negative for atom type 'O'
> GB parameter(s) missing or negative for atom type 'HA'
> GB parameter(s) missing or negative for atom type 'OH'
> GB parameter(s) missing or negative for atom type 'HO'
> GB parameter(s) missing or negative for atom type 'OS' 
>
> Can't do GB electrostatics; the implicit_genborn_params section of the
> forcefield is missing parameters for 13 atomtypes or they might be negative.
>
> I googled the problem and found a discussion from 2012, which
> surprisingly was initialized by myself running into the same issue!  xD
>
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2012-May/071102.html
>
> And just as in 2012, I can get my system grompped, although these atom
> types listed above are in the force field's gbsa.itp file (apart from
> the ether oxygen OS, which I added manually as a copy of the carbonyl
> atom type "O"). Some info:
>
>
> ## My top file:
>
> #include "amber99sb-ildn.ff/forcefield.itp"
> #include "ligand.itp"
> #include "protein.itp"
>
>
> ## My atom types section at the top of the ligand.itp file:
>
> [ atomtypes ]
> ;name   bond_type mass charge   ptype   sigma
> epsilon   Amb
>  CT   CT  0.0  0.0   A 3.39967e-01   4.57730e-01
> ; 1.91  0.1094
>  HP   HP  0.0  0.0   A 1.95998e-01   6.56888e-02
> ; 1.10  0.0157
>  H1   H1  0.0  0.0   A 2.47135e-01   6.56888e-02
> ; 1.39  0.0157
>  HC   HC  0.0  0.0   A 2.64953e-01   6.56888e-02
> ; 1.49  0.0157
>  S    S   0.0  0.0   A 3.56359e-01   1.04600e+00
> ; 2.00  0.2500
>  C    C   0.0  0.0   A 3.39967e-01   3.59824e-01
> ; 1.91  0.0860
>  O    O   0.0  0.0   A 2.95992e-01   8.78640e-01
> ; 1.66  0.2100
>  N3   N3  0.0  0.0   A 3.25000e-01   7.11280e-01
> ; 1.82  0.1700
>  H    H   0.0  0.0   A 1.06908e-01   6.56888e-02
> ; 0.60  0.0157
>  N    N   0.0  0.0   A 3.25000e-01   7.11280e-01
> ; 1.82  0.1700
>  O2   O2  0.0  0.0   A 2.95992e-01   8.78640e-01
> ; 1.66  0.2100
>  CD   CD  0.0  0.0   A 3.39967e-01   3.59824e-01
> ; 1.91  0.0860
>  OH   OH  0.0  0.0   A 3.06647e-01   8.80314e-01
> ; 1.72  0.2104
>  HO   HO  0.0  0.0   A 0.0e+00   0.0e+00
> ; 0.00  0.
>  CM   CM  0.0  0.0   A 3.39967e-01   3.59824e-01
> ; 1.91  0.0860
>  HA   HA  0.0  0.0   A 2.59964e-01   6.27600e-02
> ; 1.46  0.0150
>  OS   OS  0.0  0.0   A 3.1e-01   7.11280e-01
> ; 1.68  0.1700
>
> Did ever anyone find a solution for that issue?
>
> Many thanks!
>
> Vedat
>
>
>

Re: [gmx-users] Still "GB parameter(s) missing or negative for atom type 'C'" issue after 7 years

2019-08-28 Thread Vedat Durmaz

Hi Mark,

Thanks for your hint regarding the order of itp files. I got it running
now.

Here's my solution for all other users (as well as for myself in 7 years
again). I had an ligand itp file generated with "acpype -a amber -i
ligand.pdb" in order to have the upper case atom type notation. At the
top of that itp file, a first section [ atom types ] lists all relevant
atom types.

Some of these types are already included in the amber force field's
gbsa.itp in the corresponding amber force field directory in
gromacs/top. These atom types can be safely removed from the atom types
section of the ligand.itp file. For those atom types of the ligand that
are not listed in gbsa.itp, it was not helpful to use the corresponding
atom type lines from ligand.itp. In my case this resulted in a
segmentation fault.

My work-around was to add these few types to a local copy (important due
to reordering of itp files later!) of gbsa.itp by copying the parameters
from respective atom types already existing in gbsa.itp with most
similar properties (sp2/3, 1/2 membered ring, ring junction, etc.).
Strangely, I additionally needed to keep exactly one of these manually
added atom types in my ligand's atom types section.


Finally, I still had to reorder include lines in my top file in the
following way:

#include "amber99sb-ildn.ff/forcefield.itp"
[ atomtypes ]
;name   bond_type mass charge   ptype   sigma
epsilon   Amb
CD   CD  0.0  0.0   A 3.39967e-01   3.59824e-01
; 1.91  0.0860 

#include "gbsa-local.itp"
#include "protein.itp"
#include "ligand.itp"


Obviously, I had to move the atomtypes section from ligand.itp directly
into the top file, between the force-field include line and the
ligand.itp include, to make things work. I hope this helps someone out
there struggling with similar problems.

Vedat



On 28.08.19 at 14:30, Mark Abraham wrote:
> Hi,
>
> The answer is still the same - if gbsa.itp has the right contents, is it
> being included at the right time?
>
> Mark
>
> On Wed, 28 Aug 2019 at 12:57, Vedat Durmaz  wrote:
>
>> Sorry, typo: "I can NOT get my system grompped" ...
>>
>>
>> Am 28.08.19 um 12:03 schrieb Vedat Durmaz:
>>> Hi everybody,
>>>
>>>
>>> After 7 years I'm trying to do some implicit solvent simulations of
>>> protein-ligand systems again using Gromacs/Amber and running into the
>>> same issue as before:
>>>
>>> GB parameter(s) missing or negative for atom type 'N3'
>>> GB parameter(s) missing or negative for atom type 'H'
>>> GB parameter(s) missing or negative for atom type 'CT'
>>> GB parameter(s) missing or negative for atom type 'HP'
>>> GB parameter(s) missing or negative for atom type 'HC'
>>> GB parameter(s) missing or negative for atom type 'H1'
>>> GB parameter(s) missing or negative for atom type 'S'
>>> GB parameter(s) missing or negative for atom type 'C'
>>> GB parameter(s) missing or negative for atom type 'O'
>>> GB parameter(s) missing or negative for atom type 'HA'
>>> GB parameter(s) missing or negative for atom type 'OH'
>>> GB parameter(s) missing or negative for atom type 'HO'
>>> GB parameter(s) missing or negative for atom type 'OS'
>>>
>>> Can't do GB electrostatics; the implicit_genborn_params section of the
>>> forcefield is missing parameters for 13 atomtypes or they might be
>> negative.
>>> I googled the problem and found a discussion from 2012, which
>>> surprisingly was initialized by myself running into the same issue!  xD
>>>
>>>
>> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2012-May/071102.html
>>> And just as in 2012, I can get my system grompped, although these atom
>>> types listed above are in the force field's gbsa.itp file (apart from
>>> the ether oxygen OS, which I added manually as a copy of the carbonyl
>>> atom type "O"). Some info:
>>>
>>>
>>> ## My top file:
>>>
>>> #include "amber99sb-ildn.ff/forcefield.itp"
>>> #include "ligand.itp"
>>> #include "protein.itp"
>>>
>>>
>>> ## My atom types section at the top of the ligand.itp file:
>>>
>>> [ atomtypes ]
>>> ;name   bond_type mass charge   ptype   sigma
>>> epsilon   Amb
>>>  CT   CT  0.0  0.0   A 3.39967e-01   4.57730e-01
>>> ; 1.91  0.1094
>>>  HP   HP  0.

[gmx-users] specbond identified by pdb2gmx but not added to topology

2014-06-07 Thread Vedat Durmaz

dear gmx users/team,

i've defined some simple residues (5 monomeric units) in aminoacids.rtp, 
residues.dat and aminoacids.hdb that i want to use for the modelling of 
small polymers. the polymer may be highly bifurcated due to a branching 
T piece among the monomer units.


all possible intermolecular connections (about 20) between the units are 
defined in the specbond file according to the following pattern:



...
XYB  CG2 1XYXO10.16XYBXYX
...


there is no need for new atom types or changes in ffbonded/ffnonbonded 
since all atoms and the resulting bonds also occur in typical amino 
acids. however, when starting pdb2gmx


pdb2gmx -ff amber99sb -f input.pdb -o polymer.pdb  -p polymer.top -ignh 
-water none


with 12 residues/units (and several bifurcations) requiring 11 
additional intermolecular bonds to be read from the specbond file, i get 
a nice resulting topology containing all desired bonds except for one 
lone 0.143nm bond between atom CG2 (amber type CT) of residue 3 ("XYB") 
and atom O (amber type OS) of residue 7 ("XYX"). curiously, this missing 
bond is listed in the distance matrix output of pdb2gmx related to 
specbond:



...
29 out of 29 lines of specbond.dat converted successfully
Special Atom Distance matrix:

XYL1XYX2XYX2XYX2XYB3 XYB3XYB3
CG25  O6OG19   CG210 O11 OG114   CG215
...
XYX7 O31   0.714   0.635   0.487   0.293   0.483 0.303 
->0.143<- here it is

...
...


but when it comes to the linking stage a little further in the output, 
this bond is neglected whereas the other ten bonds are set as desired:



...
Linking XYL-1 CG2-5 and XYX-2 O-6...
Linking XYX-2 OG1-9 and XYX-4 CG2-20...
Linking XYX-2 CG2-10 and XYB-3 OG1-14...
Linking XYX-4 O-16 and XYL-5 CG2-25...
Linking XYX-4 OG1-19 and XYL-6 CG2-30...
Linking XYX-7 OG1-34 and XYX-8 CG2-40...
Linking XYX-7 CG2-35 and XYR-12 O-56...
Linking XYX-8 O-36 and XYL-9 CG2-45...
Linking XYX-8 OG1-39 and XYB-10 CG2-50...
Linking XYB-10 OG1-49 and XYL-11 CG2-55...
...


does anyone have some hint or an idea of what might be suppressing the 
linking of that bond? i've tried it with gromacs 4.5 as well as 4.6 
without any difference in the output.



thanks in advance and a good weekend,

vedat








Re: [gmx-users] specbond identified by pdb2gmx but not added to topology

2014-06-09 Thread Vedat Durmaz

hi justin & gmx-users,

meanwhile i had understood that the reference distance in the specbond file is 
not an upper boundary for possible bonds but the distance itself +/- x. i should 
have read the manual more attentively before writing to the list.

thanks again,

vedat





dear gmx users/team,


i've defined some simple residues (5 monomeric units) in aminoacids.rtp,
residues.dat and aminoacids.hdb that i want to use for the modelling of small
polymers. the polymer may be highly bifurcated due to a branching T piece among
the monomer units.

all possible intermolecular connections (about 20) between the units are defined
in the specbond file according to the following pattern:


...
XYB  CG2 1XYXO10.16XYBXYX
...


there is no need for new atom types or changes in ffbonded/ffnonbonded since all
atoms and the resulting bonds also occur in typical amino acids. however, when
starting pdb2gmx

pdb2gmx -ff amber99sb -f input.pdb -o polymer.pdb  -p polymer.top -ignh -water 
none

with 12 residues/units (and several bifurcations) requiring 11 additional
intermolecular bonds to be read from the specbond file, i get a nice resulting
topology containing all desired bonds except for one lone 0.143nm bond between
atom CG2 (amber type CT) of residue 3 ("XYB") and atom O (amber type OS) of
residue 7 ("XYX"). curiously, this missing bond is listed in the distance matrix
output of pdb2gmx related to specbond:


...
29 out of 29 lines of specbond.dat converted successfully
Special Atom Distance matrix:

 XYL1XYX2XYX2XYX2XYB3 XYB3XYB3
 CG25  O6OG19   CG210 O11 OG114   CG215
 ...
 XYX7 O31   0.714   0.635   0.487   0.293   0.483 0.303 ->0.143<- here
it is
 ...
...


but when it comes to the linking stage a little further in the output, this bond
is neglected whereas the other ten bonds are set as desired:


...
Linking XYL-1 CG2-5 and XYX-2 O-6...
Linking XYX-2 OG1-9 and XYX-4 CG2-20...
Linking XYX-2 CG2-10 and XYB-3 OG1-14...
Linking XYX-4 O-16 and XYL-5 CG2-25...
Linking XYX-4 OG1-19 and XYL-6 CG2-30...
Linking XYX-7 OG1-34 and XYX-8 CG2-40...
Linking XYX-7 CG2-35 and XYR-12 O-56...
Linking XYX-8 O-36 and XYL-9 CG2-45...
Linking XYX-8 OG1-39 and XYB-10 CG2-50...
Linking XYB-10 OG1-49 and XYL-11 CG2-55...
...


does anyone have some hint or an idea of what might be suppressing the linking
of that bond? i've tried it with gromacs 4.5 as well as 4.6 without any
difference in the output.



Bonds are added if the distance between the atoms is within ±10% of the reference
distance specified in specbond.dat.  That criterion is not met here for a 0.16-nm
reference distance, since 0.144 nm is the lower boundary and the observed
distance is 0.143 nm.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalemkul at outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul
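Justin's ±10% rule can be checked numerically with a small helper (a quick sketch; whether the boundary value itself is accepted, i.e. ≤ versus <, is an assumption):

```shell
# accept a candidate bond if |d - ref| <= 0.10 * ref  (distances in nm)
specbond_match () {
    awk -v d="$1" -v ref="$2" 'BEGIN {
        diff = d - ref; if (diff < 0) diff = -diff;
        if (diff <= 0.10 * ref) print "yes"; else print "no";
    }'
}

specbond_match 0.143 0.16   # prints: no  (0.017 nm off, window is 0.016 nm)
specbond_match 0.150 0.16   # prints: yes
```

This reproduces the case discussed above: for a 0.16-nm reference, the accepted window is [0.144, 0.176] nm, so the observed 0.143-nm bond is (just) rejected.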

 



Re: [gmx-users] (no subject)

2014-06-12 Thread Vedat Durmaz

i've received your email concerning "energy minimisation step in protein
lipid simulation" already 3 times ..


maybe you've deactivated the delivery of mails? if so, do the following:

go to https://mailman-1.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users

and log in where it says "To unsubscribe from gromacs.org_gmx-users, get
a password reminder, or change your subscription options enter your
subscription email address:"

there you can change your personal settings.

vedat



On 12.06.2014 at 11:31, Balasubramanian Suriyanarayanan wrote:
> Why is my questions not appearing in the forum.
>
> Do I not follow the rules.'
> Please clarify.
>
>
> regards
>
> Suriyanarayanan



Re: [gmx-users] (no subject)

2014-06-12 Thread Vedat Durmaz
then, i don't know what to do. maybe your email provider sorts them out
or something.

anyway, did you get the answer justin sent recently? see below ..

vedat



On 6/12/14, 4:54 AM, Balasubramanian Suriyanarayanan wrote:
> Dear friends, in protein lipid simulation, when I do the energy minimisation
> step I get an error as " Ek! No confout.gro at all!
>
> Died at inflategro.pl line 104."
>
> What is "confout.gro"? If I am right, it is the output file that we get from
> the minimisation step.
>
> I used the command " perl inflategro.pl confout.gro 0.95 POPC 0
> system_shrink1.gro 5 area_shrink1.dat".
>

You're specifying confout.gro as the source of coordinates.  Apparently
that file doesn't exist in the working directory, so InflateGRO is
giving you an error.
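A quick way to verify this (a hypothetical helper script, not part of InflateGRO or GROMACS):

```python
# Hypothetical pre-flight check: verify that the coordinate file named on
# the command line actually exists in the working directory before running
# InflateGRO, instead of letting the Perl script die at line 104.
import os
import sys

def check_coord_file(path):
    """Return True if the coordinate file exists; print a hint otherwise."""
    if os.path.isfile(path):
        return True
    sys.stderr.write("coordinate file '%s' not found in %s\n"
                     % (path, os.getcwd()))
    return False

if __name__ == "__main__":
    check_coord_file("confout.gro")
```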

-Justin



Am 12.06.2014 11:46, schrieb Balasubramanian Suriyanarayanan:
> but I get other mails.
>
> even the last one which i have sent with out title has appeared.
>
> regards
> Suriyanarayanan
>
>
> On Thu, Jun 12, 2014 at 3:14 PM, Vedat Durmaz <dur...@zib.de> wrote:
>
>
> i've received your email concerning "energy minimisation step in
> protein
> lipid simulation" already 3 times ..
>
>
> maybe you've deactivated the delivery of mails? if so, do the
> following:
>
> go to
> https://mailman-1.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
>
> and log in where it says "To unsubscribe from
> gromacs.org_gmx-users, get
> a password reminder, or change your subscription options enter your
> subscription email address:"
>
> there you can change your personal settings.
>
> vedat
>
>
>
> Am 12.06.2014 11:31, schrieb Balasubramanian Suriyanarayanan:
> > Why is my questions not appearing in the forum.
> >
> > Do I not follow the rules.'
> > Please clarify.
> >
> >
> > regards
> >
> > Suriyanarayanan
>
>


[gmx-users] Gromacs bug related to rms and atommass.dat interplay (Can not find mass in database for atom MG in residue)

2020-02-05 Thread Vedat Durmaz

Hi there,

I'm pretty sure it's not a feature but a bug that I've encountered in the
GMX versions 2018.7, 2019.5 and 2020.

When I try to calculate RMSD values for a protein system including a
catalytic magnesium ion "Mg" using the command

gmx rms -s mol.pdb -f mol.xtc -f2 mol.xtc -o mol-rmsd.xvg -debug

I get this error:


Can not find mass in database for atom MG in residue 1370 MG

---
Program: gmx rms, version 2019.5
Source file: src/gromacs/fileio/confio.cpp (line 517)

Fatal error:
Masses were requested, but for some atom(s) masses could not be found in the
database. Use a tpr file as input, if possible, or add these atoms to
the mass
database.

Let's accept for the moment that using a .tpr file with -s (rather than
a .pdb file) is not an option for me. Consequently, gmx retrieves atom
masses from atommass.dat, which actually contains the Mg ion:


; NOTE: longest names match
; General atoms
; '???' or '*' matches any residue name
???  H  1.00790 
???  He 4.00260 
???  Li 6.94100 
???  Be 9.01220 
???  B 10.81100 
???  C 12.01070 
???  N 14.00670 
???  O 15.99940 
???  F 18.99840 
???  Ne    20.17970 
???  Na    22.98970 
???  Mg    24.30500    <<< in this line
???  Al    26.98150 
???  Si    28.08550 
???  P 30.97380


Having a look at the log file of "gmx rms -debug ...", I see some
strange output. Here's a snippet:


searching residue:  ??? atom:    H
 not successful
searching residue:  ??? atom:   He
 match:  ???    H
searching residue:  ??? atom:   Li
 not successful
searching residue:  ??? atom:   Be
 not successful
searching residue:  ??? atom:    B
 not successful
searching residue:  ??? atom:    C
 not successful
searching residue:  ??? atom:    N
 not successful
searching residue:  ??? atom:    O
 not successful
searching residue:  ??? atom:    F
 not successful
searching residue:  ??? atom:   Ne
 match:  ???    N
searching residue:  ??? atom:   Na
 match:  ???    N
searching residue:  ??? atom:   Mg
 not successful
searching residue:  ??? atom:   Al
 not successful
searching residue:  ??? atom:   Si
 not successful
searching residue:  ??? atom:    P
 not successful
searching residue:  ??? atom:    S
 not successful
searching residue:  ??? atom:   Cl
 match:  ???    C
 ...
searching residue:  ??? atom:   Cu
 match:  ???    C
...


I don't know what exactly the rms command is doing behind the scenes,
and I don't want to agonize over the cpp code either. But to me it looks
as if the element assignment is based only on the FIRST LETTER of each
element name. If I use copper (Cu) instead of magnesium, the program runs
fine. Now let's compare the Cu lines with the Mg lines in the debugging
output above.

searching residue:  ??? atom:   Cu
 match:  ???    C

searching residue:  ??? atom:   Mg
 not successful

Apparently, Cu is found because the first letter of its element symbol,
C (Cu[index 0] is C), exists as an element entry, whereas there is no
element M, the first letter of Mg. Here's a little test (or rather an
improper workaround): if I add a line for the element "M" to
atommass.dat like this

???  Na    22.98970 
???  Mg    24.30500
???  M 24.30500  <<< new line
???  Al    26.98150 

then the rms command, executed on the protein with Mg, runs without any
error. But this observation also implies that the masses gmx rms
retrieves from atommass.dat are wrong, because e.g. Cl, Cr, Cu and Co
are each assigned the mass of C.
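here is a toy python sketch of the behavior i'm suspecting (my assumption only, not the actual lookup code from confio.cpp):

```python
# Toy reproduction of the suspected first-letter matching flaw described
# above -- an assumption about the lookup behavior, not actual GROMACS code.

def buggy_lookup(atom, table):
    """Suspected behavior: only the first letter of the atom name is matched."""
    return table.get(atom[0])

masses = {"H": 1.0079, "C": 12.0107, "N": 14.0067, "Na": 22.9897, "Mg": 24.305}

print(buggy_lookup("Cu", masses))  # 12.0107 -- "found", but with carbon's mass
print(buggy_lookup("Cl", masses))  # 12.0107 -- same wrong mass for chlorine
print(buggy_lookup("Mg", masses))  # None -- "not successful": no element "M"

# the workaround from above: adding an "M" entry makes Mg "succeed"
masses["M"] = 24.305
print(buggy_lookup("Mg", masses))  # 24.305
```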


I see two options for the GMX developers: either check the
rms <-> atommass.dat interplay, or disable the possibility of using PDB
files with the -s option. However, I would strongly discourage the
latter, since there are cases where you have an xtc trajectory, possibly
generated with another tool, along with a pdb file, and you don't want
to spend too much time generating a proper tpr file.

Thanks & best wishes,

Vedat

