Re: [gmx-users] Free-energy on GMX-2019.1 (lower performance on GPU) (Mark Abraham)

2019-03-15 Thread praveen kumar
Dear Mark

I have a system containing a pre-formed lipid bilayer (phospholipids + drug
molecules, ~91 K atoms): 120 phospholipids and 87 drug molecules in a box of
(8 X 8 X 12). I am trying to grow all 87 drug molecules (each drug consists of
122 atoms) from the decoupled state to the coupled state using a two-stage TI
approach: first decoupling the vdW interactions and then the electrostatics.
I have tested both stages; neither runs on the GPU, and the work ends up
mostly on the CPU. I have also tried -pme gpu -bonded gpu, but these do not
help move the run onto the GPU.
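
For reference, a free-energy block for this kind of vdW-first, then-electrostatics
decoupling might look like the sketch below. The lambda vectors, soft-core values
and nstdhdl are illustrative placeholders, not the poster's actual settings, and
they assume the perturbation is defined through B-states in the topology:

free_energy              = yes
init_lambda_state        = 0        ; index into the lambda vectors below
delta_lambda             = 0
calc_lambda_neighbors    = 1
; stage 1 (states 0-5): switch vdW on while charges stay off
; stage 2 (states 5-10): switch charges on with vdW fully present
vdw_lambdas              = 0.0 0.2 0.4 0.6 0.8 1.0 1.0 1.0 1.0 1.0 1.0
coul_lambdas             = 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0
sc-alpha                 = 0.5      ; soft-core needed while vdW is partially off
sc-power                 = 1
sc-sigma                 = 0.3
nstdhdl                  = 100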

Thanks
Praveen



Hi,

How large is your perturbed region and your normal region? The FEP
short-ranged kernels run on the CPU, and are not written very well for
performance. So the larger the perturbed region, the worse things get.
Because there's a lot of extra CPU work when running FEP, you may see
improvements from also adding -pme gpu -bonded gpu to your mdrun
invocation, by moving such work off the CPU.
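
For example, such a run might be launched as in the sketch below; 'topol' and
the thread counts are placeholders to adapt to your hardware, and whether each
offload is actually accepted depends on what is perturbed in the run:

gmx mdrun -deffnm topol -nb gpu -pme gpu -bonded gpu -ntmpi 1 -ntomp 10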

BTW lincs-order=12 is uselessly large, but is not the problem here.

Mark

[gmx-users] Free-energy on GMX-2019.1 (lower performance on GPU)

2019-03-14 Thread praveen kumar
Dear All

I am trying to run a free-energy simulation using the TI method in GROMACS
2019.1 on a GPU machine (containing two Nvidia GeForce 1080 Ti cards), but
unfortunately I am unable to get the free-energy run to use the GPU.

A normal MD simulation (without free energy) runs perfectly on the GPU and
gives an excellent speed-up: for example, a 100 K atom system gives ~80 ns per
day on one GPU card (at >80 % GPU usage). When I run the free-energy
simulation for the same system, the performance falls drastically to ~0.02 ns
per day (at 0 % GPU usage).

I am pasting the MDP files for the normal MD simulation and the free-energy
simulation below.
npt.mdp (MD simulation)


#
title= MD simulation
; Run parameters
integrator= md; leap-frog integrator
nsteps= 1  ; 2 * 6000   = 200 ns
dt= 0.002; 2 fs
; Output control
nstxout= 10  ; save coordinates every 10.0 ps
nstvout= 10  ; save velocities every 10.0 ps
nstfout= 10  ; save forces every 10.0 ps
nstenergy= 500; save energies every 10.0 ps
nstlog= 5000; update log file every 10.0 ps
nstxout-compressed  = 5000  ; save compressed coordinates every 10.0 ps, nstxout-compressed replaces nstxtcout
compressed-x-grps   = System; replaces xtc-grps
; Bond parameters
continuation= yes; Restarting after NVT
constraint_algorithm= lincs; holonomic constraints
constraints= h-bonds; H bonds constrained
lincs_iter= 1; accuracy of LINCS
lincs_order= 4; also related to accuracy
; Neighborsearching
cutoff-scheme   = Verlet
ns_type= grid; search neighboring grid cells
nstlist= 10; 20 fs, largely irrelevant with Verlet
rcoulomb= 1.2; short-range electrostatic cutoff (in nm)
rvdw= 1.2; short-range van der Waals cutoff (in nm)
rvdw-switch = 1.0
vdwtype = cutoff
vdw-modifier= force-switch
rlist = 1.2
; Electrostatics
coulombtype= PME; Particle Mesh Ewald for long-range electrostatics
pme_order= 4; cubic interpolation
fourierspacing= 0.16; grid spacing for FFT
; Temperature coupling is on
tcoupl= V-rescale; modified Berendsen thermostat
tc-grps= system; Water   ; two coupling groups - more accurate
tau_t= 0.1 ;0.1  ; time constant, in ps
ref_t= 360  ;340 ; reference temperature, one for each group, in K
; Pressure coupling is on
;pcoupl  =no
pcoupl= Parrinello-Rahman; Pressure coupling on in NPT
pcoupltype= isotropic; uniform scaling of box vectors
tau_p= 2.0; time constant, in ps
ref_p= 1.0   ;1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; 4.5e-5; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc= xyz; 3-D PBC
; Dispersion correction
DispCorr= no; account for cut-off vdW scheme
; Velocity generation
gen_vel= no; Velocity generation is off
##
npt.mdp (for free-energy simulation)
##

; Run control
integrator   = sd   ; Langevin dynamics
tinit= 0
dt   = 0.002
nsteps   = 5; 100 ps
nstcomm  = 100
; Output control
nstxout  = 500
nstvout  = 500
nstfout  = 0
nstlog   = 500
nstenergy= 500
nstxout-compressed   = 0
; Neighborsearching and short-range nonbonded interactions
cutoff-scheme= verlet
nstlist  = 20
ns_type  = grid
pbc  = xyz
rlist= 1.2
; Electrostatics
coulombtype  = PME
rcoulomb = 1.2
; van der Waals
vdwtype  = cutoff
vdw-modifier = potential-switch
rvdw-switch  = 1.0
rvdw = 1.2
; Apply long range dispersion corrections for Energy and Pressure
DispCorr  = EnerPres
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; EWALD/PME/PPPM parameters
pme_order= 6
ewald_rtol   = 1e-06
epsilon_surface  = 0
; Temperature coupling
; tcoupl is implicitly handled by the sd integrator

Re: [gmx-users] multiple GPU usage for simulation

2019-01-31 Thread praveen kumar
Dear Paul

Many thanks for your help. Following your suggestion, I am now able to run a
simulation on two GPUs. There were some unnecessary flags in my earlier
installation; I have now configured it like this:
cmake .. -DGMX_THREAD_MPI=ON -DGMX_GPU=ON -DGMX_X11=ON

For running the simulation I have used:
gmx mdrun -v -deffnm test -nb gpu -pme gpu -npme 1 -ntomp 5 -ntmpi 4   (to run on both GPUs)
gmx mdrun -v -deffnm test -nb gpu -pme gpu -npme 1 -ntomp 5 -ntmpi 2 -gpu_id 0 (or 1)   (to run on an individual GPU)

The simulation now runs on both GPUs. Sample performance on a small benchmark
(98 K atoms):
2 GPUs together: ~42 ns/day
1 GPU (the other idle): ~33 ns/day
Two simulations run in parallel on individual GPUs: ~22 ns/day

From "nvidia-smi" I see at most ~57 % GPU utilization on both GPUs. Is this
the maximum I can achieve with this setup?
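
If it helps when comparing, the two side-by-side runs can be kept off each
other's cores with explicit pinning. A sketch that reuses the working
single-GPU command above, assuming 20 physical cores (the run names and
pin offsets are placeholders):

gmx mdrun -deffnm run1 -nb gpu -pme gpu -npme 1 -ntmpi 2 -ntomp 5 -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
gmx mdrun -deffnm run2 -nb gpu -pme gpu -npme 1 -ntmpi 2 -ntomp 5 -gpu_id 1 -pin on -pinoffset 10 -pinstride 1 &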

Thanks
Praveen



On Wed, Jan 30, 2019 at 12:08 PM <
gromacs.org_gmx-users-requ...@maillist.sys.kth.se> wrote:

> ------
>
> Message: 1
> Date: Tue, 29 Jan 2019 22:29:10 +0530
> From: praveen kumar 
> To: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: [gmx-users] multiple GPU usage for simulation
>
> Dear gromacs users
>
> I am running molecular simulations with GROMACS 2018.4, and we have a new
> GPU machine with two GPU cards: a workstation with 20 cores (Intel i9
> processors at 3.3 GHz) and two Nvidia GeForce 1080 Ti cards.
>
> I tried running a simulation with the command: gmx mdrun -v -deffnm test
>
> and got this message:
>
> Using 1 MPI process
> Using 10 OpenMP threads
>
> 1 GPU auto-selected for this run.
> Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
>
> It seems the simulation makes use of only one GPU instead of two; I have
> checked this with nvidia-smi.
>
> A similar issue was already reported in this thread:
> https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2018-April/119915.html
>
> I have tried all the options suggested there by Mark and others, and I am
> wondering whether this issue can be solved, i.e. whether one simulation run
> can make use of both GPU cards.
>
> I would be really thankful if anyone can help me in this regard.
>
> Thanks
> Praveen
>
>
> --
> Thanks & Regards
> Dr. Praveen Kumar Sappidi,
> National Post Doctoral Fellow.
> Computational Nanoscience Laboratory,
> Chemical Engineering Department,
> IIT Kanpur, India-208016
>
>
>
>
>
> --
>
> Message: 2
> Date: Tue, 29 Jan 2019 11:50:12 -0600
> From: 
> To: 
> Subject: Re: [gmx-users] multiple GPU usage for simulation
> Message-ID: <000901d4b7fb$15edfe60$41c9fb20$@q.com>
> Content-Type: text/plain;   charset="us-ascii"
>
> I am not an expert on this subject but have recently gone through the
> exercise...
>
> Firstly, does nvidia-smi indicate both cards are active?
>
> Secondly, for the nvt or npt runs, have you tried mdrun commands similar to:
>
> mdrun -deffnm  file  -nb gpu  -gpu_id 01
> or
> mdrun -deffnm  file  -nb gpu -pme  g
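
Relating to the truncated suggestion above: for one GROMACS 2018.4 run to
occupy both cards, -gpu_id 01 needs to be combined with more than one
thread-MPI rank, as in the launch reported elsewhere in this thread as
working; a sketch, with thread counts as placeholders for a 20-core box:

gmx mdrun -v -deffnm test -nb gpu -pme gpu -npme 1 -ntmpi 4 -ntomp 5 -gpu_id 01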

Re: [gmx-users] parallelizing gromacs2018.4

2018-11-25 Thread praveen kumar
Dear all
With the suggestions given ("export OMP_NUM_THREADS=4" and "-np should now
become 5 in the mpirun command"), I am now able to run the simulations on
1 node with 20 CPUs.
However, when I use two nodes instead of one, the performance drops sharply:
1 node with 20 CPUs gives ~42 ns per day
2 nodes with 40 CPUs gives ~3 ns per day
The modified scripts are below.

Script for 2 nodes (~3 ns per day):
#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=2:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=5
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/
#source /opt/software/intel/initpaths intel64

/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 8 -machinefile
$PBS_NODEFILE /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun   -v
-s NVT1.tpr -deffnm 2

Script for one node (~40 ns per day):

#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=2:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=5
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/
#source /opt/software/intel/initpaths intel64

/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 4 -machinefile
$PBS_NODEFILE /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun   -v
-s NVT1.tpr -deffnm 2
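
For what it is worth, the rank/thread accounting implied by the two scripts
(assuming 20 physical cores per node; this is just the arithmetic, not a
measured setting):

# 2-node script: mpirun -np 8 with OMP_NUM_THREADS=5 -> 8 ranks x 5 threads = 40 cores
# 1-node script: mpirun -np 4 with OMP_NUM_THREADS=5 -> 4 ranks x 5 threads = 20 cores
# (note the second script's PBS header still requests nodes=2:ppn=20)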


Any help in this regard is much appreciated.

Thanks
Praveen



On Sat, Nov 24, 2018 at 2:12 AM <
gromacs.org_gmx-users-requ...@maillist.sys.kth.se> wrote:

>
> --
>
> Message: 1
> Date: Fri, 23 Nov 2018 23:03:12 +0530
> From: Abhishek Acharya 
> To: Discussion list for GROMACS users 
> Subject: Re: [gmx-users] parallelizing gromacs2018.4
>
> Hi.
>
> You can add the following line to the PBS script.
>
> export OMP_NUM_THREADS=4
>
> -np should now become 5 in the mpirun command.
>
> Abhishek
>
> On Fri 23 Nov, 2018, 22:20 Mark Abraham,  wrote:
>
> > Hi,
> >
> > Looks like nodes=1:ppn=20 sets the number of OpenMP threads per rank to
> > be 20 on your cluster. Check the documentation for the cluster and/or
> > talk to your admins.
> >
> > Mark
> >

[gmx-users] parallelizing gromacs2018.4

2018-11-23 Thread praveen kumar
Dear all
I have successfully installed GROMACS 2018.4 on my local PC and at an HPC
center (without GPU) using these commands:
CMAKE_PREFIX_PATH=/home/sappidi/software/fftw-3.3.8
/home/sappidi/software/cmake-3.13.0/bin/cmake ..
-DCMAKE_INCLUDE_PATH=/home/sappidi/software/fftw-3.3.8/include
-DCMAKE_LIBRARY_PATH=/home/sappidi/software/fftw-3.3.8/lib
-DGMX_GPU=OFF
-DGMX_MPI=ON
-DGMX_OPENMP=ON
-DGMX_X11=ON -DCMAKE_INSTALL_PREFIX=/home/sappidi/software/gromacs-2018.4
-DCMAKE_CXX_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicxx
-DCMAKE_C_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicc
make && make install
The sample job runs perfectly without using mpirun, but when I try to run on
multiple processors on a single node or on multiple nodes, I get the
following error message:

"Fatal error:
Your choice of number of MPI ranks and amount of resources results in using
20
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 1 and 6 threads per rank. If you want to run with this
setup,
specify the -ntomp option. But we suggest to change the number of MPI
ranks."

I have tried several ways to rectify the problem but could not succeed.
The sample job script for my HPC run is shown below.

#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=1:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
cd $PBS_O_WORKDIR
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/


/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 20 -machinefile
$PBS_NODEFILE /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi  mdrun -v
-s NVT1.tpr -deffnm test9
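
As a sketch of the kind of change the error message asks for (assuming 20
cores on the node; the 5 ranks x 4 threads split is just one reasonable
choice, not a recommendation from the thread):

export OMP_NUM_THREADS=4
/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 5 -machinefile $PBS_NODEFILE /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v -s NVT1.tpr -deffnm test9 -ntomp 4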

I am wondering what the reason could be.

Thanks in advance
Praveen


Re: [gmx-users] DPD in Gromacs

2017-10-24 Thread praveen kumar
Thanks alot Goga N and Raman Preet Singh,

I have to simulate the polymer thin film of the order of sub nanometer
length scale, though I have gone through the DPDMacs, am able to run simple
DPD calculations, but haven't found any reference article that used
DPD-MACS. In addition to that we cannot simply calculate any thermodynamic
properties like Delta_G and Delta_H, using DPDMACS.  As Gromcs Approach of
free-enrgy calculation we cannot simply implement to It. I hope this would
topic of future interest those who want to develop.
However most of the references i have found either uses LAMMPS, DL_Meso and
Mesodyn. But post procssing needs to be coded on their own,I hope we should
go ahead with well tested LAMMPS.

Best
Praveen



On Tue, Oct 24, 2017 at 7:01 PM, <
gromacs.org_gmx-users-requ...@maillist.sys.kth.se> wrote:

>
> --
>
> Message: 1
> Date: Tue, 24 Oct 2017 17:28:38 +0530
> From: sp...@iacs.res.in
> To: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: [gmx-users] Contact map
>
> Hi all
> I want to generate a contact map for all the residues of lysozyme protein.
> I am using gmx mdmat to generate this. Can I generate a text file (which
> can be opened in Grace) instead of the image files (e.g. the xpm file)?
> Please let me know the particular flag which I can use to get a *.xvg file.
>
> Thanks
> Sunipa Sarkar
>
>
> --
>
> Message: 2
> Date: Tue, 24 Oct 2017 12:27:09 +
> From: Raman Preet Singh 
> To: "gmx-us...@gromacs.org" 
> Subject: Re: [gmx-users] DPD in Gromacs
>
> I was looking for DPD in Gromacs too, but in vain. You may try DPDmacs,
> DL_POLY, or LAMMPS.
>
> --
>
> Message: 3
> Date: Tue, 24 Oct 2017 12:32:19 +
> From: Raman Preet Singh 
> To: "gmx-us...@gromacs.org" 
> Subject: Re: [gmx-users] DPD in Gromacs
>
> Forgot to mention Materials Studio. An extremely easy-to-use software,
> but it comes at a hefty license fee.
>
> On Oct 24, 2017 5:57 PM, Raman Preet Singh <ramanpreetsi...@hotmail.com> wrote:
>
> I was looking for DPD in Gromacs too, but in vain. You may try DPDmacs,
> DL_POLY, or LAMMPS.
>
> --
>
> Message: 4
> Date: Tue, 24 Oct 2017 15:30:34 +0300
> From: "Goga, N." 
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] DPD in Gromacs

[gmx-users] DPD in Gromacs

2017-10-21 Thread praveen kumar
Dear all
is there any possibility of performing dissipative particle dynamics (DPD)
in GROMACS?

Thanks in Advance
Praveen


[gmx-users] Scaling in gromacs

2017-08-14 Thread praveen kumar
Dear GMX users

In order to benchmark my simulation system, I would like to know whether the
GROMACS code scales linearly up to 10 K cores.

Thanks
Best Regards
Praveen


-- 
Thanks & Regards
Dr. Praveen Kumar Sappidi,
DST-National Postdoctoral Fellow
Computational Nanoscience Lab
Chemical Engineering Department,
IIT Kanpur


[gmx-users] Rotational correlation function for water

2016-08-09 Thread Praveen Kumar
Hi,

I am calculating the rotational correlation time for water (SPC/E model) by
using the following command-

g_rotacf_mpi -f traj.trr -s pr.tpr -n ind.ndx -P 1 -o watr_ohh.xvg -fitfn aexp


I am getting the correlation time

COR: Correlation time (plain integral from  0.000 to 2505.000 ps) = *3.64607 ps*

I have defined a group containing three atoms (OW HW1 HW2) in the index file.
If I instead take two atoms (OW and HW1, i.e. to calculate the rotational
correlation time along the O-H vector), then I get the correlation time

COR: Correlation time (plain integral from  0.000 to 2505.000 ps) = *4.80207 ps*

using the command

g_rotacf_mpi -f traj.trr -s pr.tpr -n ind.ndx -P 1 -d -o watr.xvg -fitfn aexp
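
For the -d (vector) case, the index group is read as pairs of atom numbers,
one O-H vector per pair. A sketch of such a group, assuming the usual SPC/E
atom ordering (OW, HW1, HW2 numbered consecutively; the numbers are
illustrative):

[ OH_vectors ]
1 2
4 5
7 8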

The reported values are 1.71 ps (from NMR experiments) and ~2.0 ps (from MD
simulations of the SPC/E model).


Could anybody help me understand where I am going wrong?

All these calculations are done on a 5 ns production run (NVT ensemble) at
303 K.

Thanks
PRAVEEN KUMAR
Research scholar
INDIAN INSTITUTE OF SCIENCE
EDUCATION AND RESEARCH PUNE


[gmx-users] rotational correlation function

2016-08-03 Thread Praveen Kumar
Hi,

I want to calculate the rotational correlation time for a linear and a
non-linear system. Could somebody provide some input on calculating the
correlation function for linear and non-linear molecules, with an explanation?



Thanks
Pravin


[gmx-users] Calculation of non bonded interaction

2016-07-07 Thread Praveen Kumar
Hi,

I want to calculate the non-bonded interaction between an ion pair (say "A"
and "B"). There are 125 ion pairs in total in the system. How can I calculate
the average non-bonded interaction between one cation and one anion in the
system?



Thanks,
Pravin


[gmx-users] Cluster size and aggregation number calculation

2014-08-05 Thread Praveen Kumar
Hi,

I am new to GROMACS. Can anybody tell me how to calculate the cluster size
and the aggregation number in a liquid?

Thanks
Praveen


Re: [gmx-users] Error in mdrun

2014-06-04 Thread Praveen Kumar
Harshkumar,

If you are using the NPT ensemble, the volume is not fixed. If at some point a
box length shrinks below twice the cut-off (with your 1.2 nm cut-off, that is
any box vector shorter than 2.4 nm), you will get this type of error. You can
increase the number of water molecules or decrease the cut-off.


On Wed, Jun 4, 2014 at 4:06 PM, Harshkumar Singh harshsingh2...@gmail.com
wrote:

 I have been simulating 216 SPC molecules in a cubic box of dimension 3 nm.
 I have set the cut-off at 1.2 nm, but I get this error during mdrun:

 one of the box vectors has become shorter than twice the cut-off or
 box_yy-|box_zy| or box_zz has become smaller than the cut-off

 The box vector is greater than twice the cut-off, but I don't seem to
 understand the other possible errors. How can I resolve them?
 --
 Harshkumar Singh
 2nd Year Integrated MSc Chemistry
 IIT Bombay.




-- 
PRAVEEN KUMAR
Research scholar
INDIAN INSTITUTE OF SCIENCE
EDUCATION AND RESEARCH PUNE


[gmx-users] Cavity size calculation

2014-03-18 Thread Praveen Kumar
Dear all,

  I want to calculate the cavity size inside a liquid in which CO2 is
dissolved. Can somebody help me?



Thanks.
PRAVEEN KUMAR