[gmx-users] Yet another question about what force field to use

2013-08-13 Thread Pedro Lacerda
Hi all,

Many questions about how to choose a force field are addressed to this
list, sorry if this one was already answered.

Some argue that proteins are better modeled with ff99sb-ildn or charmm22*
because there is good agreement between NMR experiments and simulations when
using these force fields. But what about organic heteromolecules? One could
argue that organic molecules are similar enough to amino acid side chains
that protein force fields are adequate to model them as well, but I'm not
sure.

Which force field would best describe a small ligand bound to a protein?
What do you think?

best regards,
Pedro Lacerda
--
gmx-users mailing list gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] segmentation fault on g_protonate

2013-08-09 Thread Pedro Lacerda
Hi,

My heteromolecule structure is missing hydrogens. I wrote an aminoacids.hdb
entry that I believe is correct, but running `g_protonate -s conf.pdb -o
prot.pdb` to add the hydrogens causes a segmentation fault. The backtrace
for 4.6.4-dev-20130808-afc6131 follows. I could add the hydrogens some
other way, but g_protonate seems like the right tool for the job. Can you
help me get g_protonate working?
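
For reference, my entry follows this general pattern (the residue and atom
names below are placeholders for illustration, not my actual entry; the
addition-type numbers are the ones from the GROMACS .hdb documentation,
where type 2 means a single hydrogen as in a hydroxyl group):

```
; aminoacids.hdb entry for a hypothetical ligand LIG with one hydroxyl H
; line 1: residue name, number of addition blocks
; line 2: number of H to add, addition type, H name, control atoms
LIG	1
1	2	HO1	O1	C1	C2
```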

Program received signal SIGSEGV, Segmentation fault.
0x77b22450 in calc_all_pos (pdba=0x619d20, x=0x61c6a0,
nab=0x61c310, ab=0x61f9e0, bCheckMissing=0) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:392
392         if (ab[i][j].oname == NULL && ab[i][j].tp > 0)
(gdb) bt
#0  0x77b22450 in calc_all_pos (pdba=0x619d20, x=0x61c6a0,
nab=0x61c310, ab=0x61f9e0, bCheckMissing=0) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:392
#1  0x77b22cd7 in add_h_low (pdbaptr=0x7fffc1e8,
xptr=0x7fffced8, nah=50, ah=0x613370, nterpairs=1, ntdb=0x616440,
ctdb=0x619cc0, rN=0x619ce0, rC=0x619d00,
bCheckMissing=0, nabptr=0x7fffdf40, abptr=0x7fffdf48,
bUpdate_pdba=1, bKeep_old_pdba=1) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:540
#2  0x77b23b66 in add_h (pdbaptr=0x7fffc1e8,
xptr=0x7fffced8, nah=50, ah=0x613370, nterpairs=1, ntdb=0x616440,
ctdb=0x619cc0, rN=0x619ce0, rC=0x619d00,
bAllowMissing=1, nabptr=0x7fffdf40, abptr=0x7fffdf48,
bUpdate_pdba=1, bKeep_old_pdba=1) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:781
#3  0x77b24080 in protonate (atomsptr=0x7fffceb8,
xptr=0x7fffced8, protdata=0x7fffdf30) at
/home/peu/Downloads/gromacs/src/kernel/genhydro.c:894
#4  0x004020ff in cmain (argc=1, argv=0x7fffe0d8) at
/home/peu/Downloads/gromacs/src/kernel/g_protonate.c:195
#5  0x0040224c in main (argc=5, argv=0x7fffe0d8) at
/home/peu/Downloads/gromacs/src/kernel/main.c:29


best regards,
Pedro Lacerda


Re: [gmx-users] Intel vs gcc compilers

2013-06-25 Thread Pedro Lacerda
On Tue, Jun 25, 2013 at 8:53 AM, Mark Abraham wrote:

> You're using a real-MPI process per core, and you have six cores per
> processor. The recommended procedure is to map cores to OpenMP
> threads, and choose the number of MPI processes per processor (and
> thus the number of OpenMP threads per MPI process) to maximize
> performance. See
>
> http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Multi-level_parallelization.3a_MPI.2fthread-MPI_.2b_OpenMP


The page says:

> at the moment, the multi-level parallelization will surpass the
> (thread-)MPI-only parallelization only in case of highly parallel runs
> and/or with a slow network.


What "highly parallel runs" mean? I'm sure it works for Djurre as he has 72
nodes, but how many six-core nodes are considered highly parallel?
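
For concreteness, my understanding of the recommended mapping on one of
these two-processor, six-cores-per-processor nodes would be something like
the following (assuming an MPI-enabled mdrun build; the -deffnm input name
is a placeholder):

```
# one MPI rank per six-core processor, six OpenMP threads per rank
mpirun -np 2 mdrun_mpi -ntomp 6 -deffnm topol
```

Is that the intended setup, as opposed to -np 12 with one rank per core?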