Hi all.

We are having some problems running g_tcaf here. It keeps failing with a
segmentation fault.

We first thought it was due to the single-precision calculations of the
first run, or to the exceedingly large trajectory file. Both were changed
for a second trial, which still produced the same error.
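
To find out where exactly it dies, one thing we are considering, but have
not tried yet, is running g_tcaf by hand under gdb and asking for a
backtrace. This only gives useful symbol names if the GROMACS build has
debugging symbols, and the options below are just a trimmed-down version of
the real command shown further down:

gdb --args g_tcaf -f BMIm.AlCl4.md14.trr -s BMIm.AlCl4.md14.tpr -o BMIm.AlCl4.md14.tcaf.xvg
(gdb) run      (then type 0 by hand when g_tcaf asks for the group)
(gdb) bt       (after the crash, to see which routine it dies in)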

We are also aware of a similar error reported in this thread on the
mailing list:

http://www.mail-archive.com/gmx-users@gromacs.org/msg54295.html

Unfortunately, it seems to be a different issue.

We are also writing a full trajectory (including velocities) to the .trr
file, as can be seen in the excerpt from the .mdp file below:

dt                  =  0.002
nsteps              =  100000
nstcomm             =  1
nstxout             =  0
nstvout             =  10
nstfout             =  10
nstlog              =  10
nstenergy           =  100
nstlist             =  5
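
Since a truncated or corrupted frame in the .trr could also make an
analysis tool crash, we intend to let gmxcheck run over the file first;
nothing fancy, just:

gmxcheck -f BMIm.AlCl4.md14.trr

As far as we understand it, this should report the number of frames and
the time step, and complain if a frame cannot be read.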

The precise line our cluster script is trying to execute is:

echo 0 | g_tcaf -f BMIm.AlCl4.md14.trr -s BMIm.AlCl4.md14.tpr -oa
BMIm.AlCl4.md14.all.xvg -o BMIm.AlCl4.md14.tcaf.xvg -of
BMIm.AlCl4.md14.fit.xvg -ov BMIm.AlCl4.md14.visc.xvg >& md14.tcaf.out
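
One variation we may still try, purely as a crash test, is the same command
restricted to the first few picoseconds with the standard -b/-e options;
the 20 ps window and the output name below are arbitrary:

echo 0 | g_tcaf -f BMIm.AlCl4.md14.trr -s BMIm.AlCl4.md14.tpr -b 0 -e 20 -o md14.short.tcaf.xvg

If that runs through, the problem is presumably tied to the length of the
run or to a bad frame later in the file.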

The "echo 0" assures that the needed parameter is passed to g_tcaf. By the
way, in our last test the pbs parameters for resources request were:

#PBS -l ncpus=8
#PBS -l mem=16GB

We know it is pointless to ask for extra CPUs; that request is there only
because of an issue with the machine architecture. The 16 GB of memory,
although available, is just our guess at how much the job will need. To be
certain, we also ran it outside the queue, but it still fails at the same
point.
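
Because the 16 GB is only a guess, we also plan to re-run the short test
above under GNU time to see the actual peak memory use; the exact path to
the time binary may differ on other systems:

echo 0 | /usr/bin/time -v g_tcaf -f BMIm.AlCl4.md14.trr -s BMIm.AlCl4.md14.tpr -b 0 -e 20 -o md14.short.tcaf.xvg

The "Maximum resident set size" line in the output is the number we are
after.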

Does anybody have any suggestions, please? It would be really helpful.

Thanks a lot in advance for any help,

Jones