Mark
On Tue, Mar 12, 2013 at 10:08 AM, Chaitali Chandratre <chaitujo...@gmail.com> wrote:
Sir,
Thanks for your reply. But the same script runs on another cluster with
approximately the same configuration, and not on the cluster I am setting up.
Also, the job hangs after some 16000
effectively. Below about 1000 atoms/core you're wasting your time unless
you've balanced the load really well. There is a
simulation-system-dependent point below which fatal GROMACS errors are
assured.
Mark
On Tue, Mar 12, 2013 at 6:17 AM, Chaitali Chandratre <chaitujo...@gmail.com> wrote:
Hello Sir,
... for completion.
I am not clear whether the problem lies in my installation or elsewhere.
Thanks and Regards,
Chaitalij
On Wed, Mar 6, 2013 at 5:41 PM, Justin Lemkul jalem...@vt.edu wrote:
On 3/6/13 4:20 AM, Chaitali Chandratre wrote:
Dear Sir,
I am new to this installation and setup area. I need some information about the
-stepout option for mdrun_mpi, and also the probable causes of segmentation
faults in GROMACS 4.5.4. (My node has 64 GB of memory and runs 16 processes;
nsteps = 2000.)
Thanks in advance.
--
With Regards,
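For context on the -stepout flag asked about above: it controls how often (in
steps) mdrun writes its remaining-runtime estimate to the output. A minimal
launch sketch, assuming a typical mpirun setup; the input file name topol.tpr
and the rank count are placeholders, not taken from the thread:

```shell
# Run mdrun_mpi across 16 MPI ranks (matching the 16 processes per node
# mentioned above); -stepout 1000 prints the remaining-runtime estimate
# every 1000 steps instead of the default.
mpirun -np 16 mdrun_mpi -s topol.tpr -stepout 1000
```

If a run like this segfaults on one cluster but not another with similar
hardware, the usual suspects are mismatched MPI/compiler builds or
per-process memory limits, as the replies in this thread suggest.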