Hi,
Please see
http://www.gromacs.org/Documentation/How-tos/Extending_Simulations and try
things out. We can't bless everybody's scripts...
Mark
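[For reference, the pattern that page describes for going beyond the original .tpr is to extend it and continue from the checkpoint; a minimal sketch with GROMACS 5.x tool names (tpbconv on 4.x) and illustrative file names:

gmx convert-tpr -s simu-100ns.tpr -extend 50000 -o simu-150ns.tpr
mdrun_mpi -s simu-150ns.tpr -cpi simu-100ns.cpt -deffnm simu-150ns

Note -extend takes picoseconds, so 50000 ps adds 50 ns.]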
On Mon, 11 May 2015 15:30 Satyabrata Das wrote:
> Dear Dr. Mark Abraham,
> Thank you for all the suggestions.
> I have a query now about the 'mdrun' restart command line: [...]
Dear Dr. Mark Abraham,
Thank you for all the suggestions.
I have a query now about the 'mdrun' restart command line:
First run:
export OMP_NUM_THREADS=2
aprun -n 144 -N 24 -j1 mdrun_mpi -npme 48 -maxh 24 -v -deffnm simu-100ns
(simu-100ns.tpr covers the whole 100 ns)
Restarts 1, 2, 3, etc.:
export OMP_NUM_THREADS=2
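[The continuation the how-to describes is the same line with the checkpoint file added; a minimal sketch, where everything except -cpi is copied from the first run and simu-100ns.cpt is the checkpoint that -deffnm produces:

export OMP_NUM_THREADS=2
aprun -n 144 -N 24 -j1 mdrun_mpi -npme 48 -maxh 24 -v -deffnm simu-100ns -cpi simu-100ns.cpt

Because the .tpr already covers the full 100 ns, no new .tpr is needed; -cpi continues from wherever the previous segment stopped.]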
Hi,
You should check out the other recent discussion in this list on
performance variation. Getting your GROMACS nodes allocated close together
is an important part of mitigating such problems.
Rather than manually splitting your job into small pieces, you can have
mdrun do that automatically for you: run with -maxh so each segment stops
cleanly before the wallclock limit, and continue from the checkpoint with
-cpi on the next submission.
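[A hedged sketch of that pattern as a self-resubmitting batch script; the script name, scheduler call, and log check are assumptions, while -maxh and -cpi are the mdrun flags doing the work:

# job.sh: run one segment, then resubmit until the .tpr is finished
export OMP_NUM_THREADS=2
aprun -n 144 -N 24 -j1 mdrun_mpi -npme 48 -maxh 23.5 -v -deffnm simu-100ns -cpi simu-100ns.cpt
# 'Finished mdrun' is assumed to appear in the log only when the full .tpr time is done
grep -q 'Finished mdrun' simu-100ns.log || qsub job.sh

The -maxh value is a placeholder kept below the 24:00:00 wallclock so mdrun has time to write its final checkpoint.]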
Dear Dr. Chris Neale,
Thank you for this reply; I am in fact able to run the job using
multiple aprun invocations.
One thing is clear: to divide the whole run into smaller bins (in terms
of ns), one needs to invoke 'aprun' for every 'mdrun'. Correct me if
that is not true. I am trying to follow your other suggestions as well.
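[As an illustration of that point, a minimal sketch of a single batch job that invokes aprun once per mdrun segment; the flags are reused from the first-run line above and the -maxh values are placeholders:

aprun -n 144 -N 24 -j1 mdrun_mpi -npme 48 -maxh 11 -v -deffnm simu-100ns -cpi simu-100ns.cpt
aprun -n 144 -N 24 -j1 mdrun_mpi -npme 48 -maxh 11 -v -deffnm simu-100ns -cpi simu-100ns.cpt

Each aprun starts a fresh MPI application, so each mdrun segment does need its own aprun; the second call continues from the checkpoint the first one wrote.]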
# Resume from the last recorded job id, if any (assumes PBS ids like 12345.sdb)
if [ -f ./last_submitted_jid ]; then
  id=$(cat ./last_submitted_jid | awk -F '.' '{print $1}')
fi
# Main loop for job chain submission (njobs and run.sh are assumed names)
for ((j=n; j<=njobs; j++)); do
  nid=$(qsub -W depend=afterok:${id} run.sh)
  echo ${nid} > ./last_submitted_jid
  id=${nid}
done
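[For completeness, a hypothetical way to seed such a chain before the loop runs; run.sh and the dependency-free first submission are assumptions, not part of the quoted script:

qsub run.sh > ./last_submitted_jid

Each later pass then submits with -W depend=afterok on the id recorded by the previous pass, so the segments execute strictly one after another.]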
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
on behalf of Satyabrata Das
Sent: 09 May 2015 02:41
Thank you Justin, indeed there is a wallclock limit, and performance is
heterogeneous (40 ns to 120 ns per 24:00:00 job); also, to avoid very
large trr files we follow the smaller-bin approach, so one needs to
submit the same job a few times.
Regarding the heterogeneity: in order to balance the PP:PME load we set
-npme explicitly (48 in the first-run line above).
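[For concreteness, the arithmetic for the run above: -n 144 with -npme 48 gives 96 PP ranks and 48 PME ranks, a 2:1 split with one third of the ranks on PME. mdrun reports the measured load imbalance near the end of the log, so -npme can be tuned from there; e.g., with the log name following from -deffnm:

grep -i 'imbalance' simu-100ns.log]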