[gmx-users] QMMM number of thread

2013-07-18 Thread SEMRAN İPEK
Dear Users; I would like to use Gromacs for QM/MM calculations. Up to now MD calculations have proceeded without failure. Could you please shed some light on these issues related to the number of threads when using the QM/MM interface with any kind of quantum chemistry software? 1-How many thread c
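On the GROMACS side the thread count is set on the mdrun command line, while the QM package's parallelism is configured through that package's own input or environment. A minimal sketch, assuming a 4.5/4.6-series mdrun (where -nt sets the number of threads to start; file names are placeholders):

    # MM side: pin mdrun to a single thread to rule out threading problems
    mdrun -nt 1 -deffnm qmmm_run

    # QM side: the number of cores used by the QM code is set in its own
    # input file or environment variables, not via GROMACS thread flags.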

Re: [gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-18 Thread Éric Germaneau
I actually submitted using two MPI processes per node, but the log files do not get updated; it's like the calculation gets stuck. Here is how I proceed: mpirun -np $NM -machinefile nodegpu mdrun_mpi -nb gpu -v -deffnm test184000atoms_verlet.tpr >& mdrun_mpi.log with the content of /nodegpu/:
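One thing worth double-checking in the command above is that -deffnm sets a default file-name prefix, so passing the full .tpr name may not point mdrun_mpi at the intended files. A more explicit sketch, with illustrative rank and thread counts (-s, -nb, -ntomp and -gpu_id are all mdrun 4.6 options):

    # pass the run input via -s; two ranks per node, 8 OpenMP threads each,
    # one GPU per rank (GPU ids 0 and 1 on every node)
    export OMP_NUM_THREADS=8
    mpirun -np $NM -machinefile nodegpu \
        mdrun_mpi -s test184000atoms_verlet.tpr -nb gpu -ntomp 8 -gpu_id 01 \
        -deffnm test184000atoms -v >& mdrun_mpi.log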

Re: [gmx-users] g_hbond for trajectory without having box information

2013-07-18 Thread bipin singh
Thanks a lot Prof. David. I will try this. On Fri, Jul 19, 2013 at 10:45 AM, David van der Spoel wrote: > On 2013-07-19 06:26, bipin singh wrote: >> Hello all, I was using g_hbond to calculate H-bonds for a trajectory made from several individual snapshots from an MD simulation, but b

Re: [gmx-users] g_hbond for trajectory without having box information

2013-07-18 Thread David van der Spoel
On 2013-07-19 06:26, bipin singh wrote: Hello all, I was using g_hbond to calculate H-bonds for a trajectory made from several individual snapshots from an MD simulation, but because this trajectory does not have the coordinates/information for the simulation box, g_hbond is giving the following error:

[gmx-users] g_hbond for trajectory without having box information

2013-07-18 Thread bipin singh
Hello all, I was using g_hbond to calculate H-bonds for a trajectory made from several individual snapshots from an MD simulation, but because this trajectory does not have the coordinates/information for the simulation box, g_hbond is giving the following error: Fatal error: Your computational box has
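g_hbond applies periodic boundary conditions when searching for donor-acceptor pairs, which is why it refuses frames without box vectors. If the snapshots genuinely carry no box, one common workaround is to write a box into the frames first, large enough that periodic images cannot interfere. A sketch with placeholder file names and an arbitrary 10 nm cubic box:

    # give every frame a large cubic box, then run g_hbond on the result
    trjconv -f snapshots.xtc -s conf.gro -box 10 10 10 -o snapshots_box.xtc
    g_hbond -f snapshots_box.xtc -s topol.tpr -num hbnum.xvg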

[gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-18 Thread Éric Germaneau
Dear all, I'm not a gromacs user; I've installed gromacs 4.6.3 on our cluster and am making some tests. Each node of our machine has 16 cores and 2 GPUs. I'm trying to figure out how to submit efficient multi-node LSF jobs using the maximum of resources. After reading the documentation
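For a 16-core, 2-GPU node with GROMACS 4.6, a common starting layout is one MPI rank per GPU and the remaining cores as OpenMP threads, i.e. 2 ranks x 8 threads per node. A sketch for two nodes (the LSF directives, file names and counts are illustrative, not the poster's setup):

    #BSUB -n 32
    #BSUB -R "span[ptile=16]"
    export OMP_NUM_THREADS=8
    # 4 ranks total = 2 per node; -gpu_id 01 maps GPU 0 and GPU 1 to the
    # two ranks on each node
    mpirun -np 4 mdrun_mpi -s topol.tpr -nb gpu -ntomp 8 -gpu_id 01 -deffnm test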

[gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Christopher Neale
Dear Michael: I have uploaded them to http://redmine.gromacs.org/issues/1306. It does not crash immediately. The crash is stochastic, giving a segfault between 200 and 5000 integration steps. That made me think it was a simple exploding-system problem, but there are other things (listed in my or

Re: [gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Michael Shirts
Chris, can you post a redmine on this so I can look at the files? Also, does it crash immediately, or after a while? On Thu, Jul 18, 2013 at 2:45 PM, Christopher Neale wrote: > Dear Users: I have a system with water and a drug (54 total atoms; 27 heavy atoms). The system is stable when I

[gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Christopher Neale
Dear Users: I have a system with water and a drug (54 total atoms; 27 heavy atoms). The system is stable when I simulate it for 1 ns. However, once I add the following options to the .mdp file, the run dies after a few ps with a segfault. free-energy = yes init-lambda = 1 couple-lambda0 = vdw-
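For reference, a generic complete-decoupling block in 4.6-style .mdp syntax (not the poster's exact settings, which are truncated above; the molecule-type name and soft-core values are illustrative) usually pairs init-lambda = 1 with soft-core parameters so that vanishing vdW sites cannot overlap:

    free-energy      = yes
    init-lambda      = 1
    couple-moltype   = DRG        ; placeholder molecule-type name
    couple-lambda0   = vdw-q      ; state A: fully interacting
    couple-lambda1   = none       ; state B: fully decoupled
    couple-intramol  = no
    sc-alpha         = 0.5        ; soft-core on, avoids r=0 singularities
    sc-power         = 1
    sc-sigma         = 0.3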

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Justin Lemkul
On 7/18/13 11:52 AM, Rasoul Nasiri wrote: The error message using 6.4: -- *** glibc detected *** g_sas_d: malloc(): memory corrupti

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
The error message using 6.4: -- *** glibc detected *** g_sas_d: malloc(): memory corruption: 0x0065c8b0 *** === Backtrace:

Re: [gmx-users] meaning of results of g_hbond -ac

2013-07-18 Thread Erik Marklund
* Time
* Ac(hbond) with correction for the fact that a finite system is being simulated.
* Ac(hbond) without correction
* Cross correlation between hbonds and contacts (see the papers by Luzar & Chandler and van der Spoel that are mentioned in the stdout from g_hbond)
* Derivative of the second column.
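Assuming the default output name hbac.xvg, the columns described above can be plotted or extracted directly:

    # plot all autocorrelation columns against time
    xmgrace -nxy hbac.xvg
    # or pull out time and the uncorrected Ac(hbond) (third column)
    grep -v '[@#]' hbac.xvg | awk '{print $1, $3}'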

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Justin Lemkul
On 7/18/13 8:04 AM, Rasoul Nasiri wrote: Justin, I just ran these calculations on "VERSION 4.6-GPU-dev-20120501-ec56c" and I will let you know about the outcomes. The outcome of 4.6.3 would be more interesting than an outdated development version. -Justin Rasoul On Thu, Jul 18, 2013 at

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
Justin, I just ran these calculations on "VERSION 4.6-GPU-dev-20120501-ec56c" and I will let you know about the outcomes. Rasoul On Thu, Jul 18, 2013 at 1:09 PM, Rasoul Nasiri wrote: > Below are the commands and error message: > 1- trjconv_d -f traj.xtc -n maxclust.ndx -o traj_out.xtc > 2-

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
Below are the commands and error message: 1- trjconv_d -f traj.xtc -n maxclust.ndx -o traj_out.xtc 2- g_sas_d -f traj_out.xtc -n maxclust.ndx -o surface.xvg -nopbc glibc detected *** g_sas_d: malloc(): memory corruption: 0x016dfcd0 Rasoul On Thu, Jul 18, 2013 at 1:01 PM, Just
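Not a fix, but the intermediate files can be sanity-checked before g_sas_d; g_sas also reads a run-input file (-s, default topol.tpr) for the atomic radii, so it is worth making sure that file and the index group match the filtered trajectory. A sketch, assuming the double-precision gmxcheck_d from the same build:

    # is the filtered trajectory readable, and does it have the expected atom count?
    gmxcheck_d -f traj_out.xtc
    # does the index file look sane?
    gmxcheck_d -n maxclust.ndx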

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Justin Lemkul
On 7/18/13 6:52 AM, Rasoul Nasiri wrote: Hello all, I'm trying to find out how the surface area of a nano-drop changes during evaporation in vacuum. When I filter the trajectory of non-evaporated molecules with trjconv and use g_sas to calculate their surface, it usually crashes (I'm us

[gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
Hello all, I'm trying to find out how the surface area of a nano-drop changes during evaporation in vacuum. When I filter the trajectory of non-evaporated molecules with trjconv and use g_sas to calculate their surface, it usually crashes (I'm using version 4.5.5). Is there still this i