Hi Zhuquing,
as far as I know, atom numbers in a .gro file are irrelevant - gromacs
assigns indices based on the order of records in the file. In my
configurations the numbers are often quite random, as I copy and paste
coordinates from different sources. It has never caused any problems in
gromacs.
Hi Sujith,
you can try to translate the initial configuration by, say, L/2 in all
directions using trjconv (a sketch of the command is below). If the
bubble appears again at the edge, I would suspect artifacts. If you are
using the Verlet cutoff scheme, you can also try changing it to group -
it helped in my simulation when a droplet
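For illustration, the translation might be done roughly as follows; the
file names and the 2.5 nm shift are assumptions (use half of your actual
box length in each direction):

gmx trjconv -f conf.gro -s topol.tpr -trans 2.5 2.5 2.5 -o conf_shifted.gro

and the cutoff scheme is switched in the .mdp file with

cutoff-scheme = group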
Dear community,
I simulate a droplet made of rigid SPC waters (ca 1 molecules) lying on
a solid surface placed at z=0, represented by LJ atoms with fixed
coordinates. Using cutoff-scheme = Verlet with a quite long cutoff
(3.0 nm) produces a strange spurious force, which seems to pull the
droplet
Regards, Jan
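For context, the relevant part of such an .mdp might look roughly like
the sketch below; only the Verlet scheme and the 3.0 nm cutoff come from
the message above, while the group name and the use of freeze groups for
the fixed surface atoms are assumptions:

cutoff-scheme = Verlet
rlist         = 3.0
rvdw          = 3.0
rcoulomb      = 3.0
freezegrps    = SURF    ; hypothetical name of the fixed surface atoms
freezedim     = Y Y Y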
2015-06-15 21:51 GMT+02:00 Szilárd Páll pall.szil...@gmail.com:
On Sun, Jun 14, 2015 at 6:54 PM, Jan Jirsák janjir...@gmail.com wrote:
Hi,
I did the test and found out that -nt 8 is even slower than -nt 1 !!
FYI: -nt is mostly a backward-compatibility option and for clarity
it's best
of details as normally? This is what people do in
the plane-wave codes when the so-called 'cluster representation' is
desired.
Hi,
I did the test and found out that -nt 8 is even slower than -nt 1 !!
However, I think the simulation hadn't even properly started with 8
threads and got stuck somewhere at the beginning.
Details:
I used a short run (1000 steps) for testing. mdrun -nt 1 finished
after ca 11 hours, whereas
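A test of this kind might be set up roughly as follows; the file names
are assumptions, and nsteps = 1000 would be set in test.mdp:

gmx grompp -f test.mdp -c conf.gro -p topol.top -o test.tpr
gmx mdrun -nt 1 -s test.tpr -deffnm test_nt1
gmx mdrun -nt 8 -s test.tpr -deffnm test_nt8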
Hi,
I have one more problem with running this system with thread-MPI
(tested in both 5.0.4 and 5.0.5 on two different machines). When I set
everything as you advised me, it runs; however, the top command shows
only 100% load - i.e., only a single CPU is used (and it is really very
very slow), despite
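One quick check in this situation is to compare the intended thread
layout with what mdrun itself reports at startup and in the log; a
sketch, with the log file name assumed to be the default md.log:

gmx mdrun -nt 8 -v -s topol.tpr
grep -iE "thread|rank" md.log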
Hello everyone,
what is the correct setup for simulations with no PBC and no cutoffs in
Gromacs 5.0.4?
In versions 4.5 and 4.6 I used
nstlist = 0
ns_type = simple
pbc = no
This no longer works, as I get the error:
Domain decomposition does not support simple neighbor searching, use grid
David van der Spoel spoel@... writes:
Use grid search in any case. It supports vacuum.
Thank you very much, it seems to work. I wonder, does the nstlist
variable have any relevance in this case? I mean, here all particles
interact with one another, so it should be sufficient to build the
neighbor list
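Putting the thread together, a no-PBC, no-cutoff setup for 5.0.x might
look roughly like the sketch below; it is assembled from the settings
and the advice quoted above, not taken from a verified input, so check
the values against the manual for your version:

cutoff-scheme = group   ; as implied by the 4.5/4.6 settings
nstlist       = 0       ; list built once; with no cutoff all pairs interact anyway
ns_type       = grid    ; 'simple' triggers the domain decomposition error
pbc           = no
rlist         = 0       ; 0 is treated as 'no cutoff' here
rcoulomb      = 0
rvdw          = 0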
Justin Lemkul jalemkul@... writes:
Use mdrun -nt 1
Thank you for the quick reply - however, I really need to parallelize -
a single-CPU run would take ages ;)
Regards, Jan