[gmx-users] choosing force field

2013-11-04 Thread pratibha kapoor
Dear all

I would like to carry out unfolding simulations of my dimeric protein and
would like to know which of the GROMOS96 force fields is better to work
with: 43a1 or 53a7? Also, is the GROMOS96 43a1 force field obsolete?
When I searched the previous archive, I saw that a similar question had
been raised for the GROMOS96 43a3 force field, and gathered that 53a6 and
53a7 take an entirely different approach to parameterization than 43a3,
and that 43a3 gives more stable structures.
The same holds for my simulations, but with force field 43a1 (instead of
43a3): I see an extra non-native helix in my 43a1 simulations that is not
present with the 53a7 force field. I have no experimental data or other
resources to confirm this, and no simulations of my system have been done
before.
I would like to know which of the two simulations I should consider more
reliable: 43a1 or 53a7?
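
A minimal sketch of generating the two topologies for a side-by-side
comparison. The -ff names and file names here are assumptions; running
pdb2gmx without -ff lists the exact force-field names installed with a
given GROMACS, so check there first:

pdb2gmx -f protein.pdb -ff gromos43a1 -water spc -o conf_43a1.gro -p topol_43a1.top
pdb2gmx -f protein.pdb -ff gromos53a7 -water spc -o conf_53a7.gro -p topol_53a7.top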
Thanks in advance.


[gmx-users] parallelization

2013-10-17 Thread pratibha kapoor
Dear gromacs users

I would like to run my simulations on all nodes (8) with full utilisation
of all cores (2 each). I have compiled GROMACS version 4.6.3 with both
thread-MPI and OpenMPI. I am using the following command:
mpirun -np 8 mdrun_mpi -v -nt 2 -s *.tpr -c *.gro
But I am getting the following error:
Setting the total number of threads is only supported with thread-MPI and
GROMACS was compiled without thread-MPI.
Although during compilation I used:
cmake .. -DGMX_MPI=ON -DGMX_THREAD_MPI=ON

If I don't use the -nt option, I can see that all the processors (8) are
utilised, but I am not sure whether all cores are being used. For version
4.6.3 without MPI, I know GROMACS uses all threads by default, but I am
not sure whether the MPI version uses all threads as well.
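
A minimal sketch of the two mutually exclusive 4.6 run modes; as far as I
know, -DGMX_MPI=ON overrides thread-MPI in the build, so -nt applies only
to the first mode. Binary and file names are placeholders:

# Thread-MPI build (single node): mdrun spawns its own threads
cmake .. -DGMX_THREAD_MPI=ON
mdrun -v -nt 16 -s topol.tpr

# Real-MPI build (across nodes): the rank count comes from mpirun;
# use -ntomp for OpenMP threads per rank instead of -nt
cmake .. -DGMX_MPI=ON
mpirun -np 8 mdrun_mpi -v -ntomp 2 -s topol.tpr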
Any help is appreciated.


[gmx-users] g_sham

2013-10-14 Thread pratibha kapoor
Dear all gromacs users

I am creating a free energy landscape using g_sham, but my axes are not
getting labelled. I searched the archive and found that they can be set
with the xmin and xmax options.
I have first created my 2D projection xvg file using
g_anaeig -f *.xtc -s *.tpr -first 1 -last 2 -2d *.xvg -v *.trr
and then found the min and max values for both vectors,
say for vector 1 min: -2.25 and max: 1.83,
and for vector 2 min: -1.60 and max: 2.22,
and then used:
g_sham -f *.xvg -ls *.xpm -notime -xmin -2.25 -1.60 0 -xmax 1.83 2.22 0
and then converted *.xpm to *.eps using
xpm2ps -f *.xpm -o *.eps -rainbow blue
This way I got an eps file with only one axis (the x axis) labelled, and
the following line appeared:
Auto tick spacing failed for Y-axis, guessing 1.19375

I would like to ask: is this way of labelling the axes correct? If yes,
why didn't the y axis get labelled, and how can I solve the problem?
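
For context, that warning comes from xpm2ps, whose layout can be
overridden with an .m2p file passed via -di. A minimal sketch, assuming
(please verify against the ps.m2p shipped under the GROMACS share/top
directory) that the y-major/y-minor entries control the Y tick spacing:

cp ps.m2p my.m2p           # start from the default layout file
# in my.m2p, set an explicit Y tick spacing, e.g.
#   y-major  1
#   y-minor  0.5
xpm2ps -f fel.xpm -di my.m2p -o fel.eps -rainbow blue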

Thanks in advance.


[gmx-users] parallel simulation

2013-10-07 Thread pratibha kapoor
I would like to run one simulation in parallel so that it utilises all
the available nodes and cores. For that, I have compiled GROMACS with MPI
enabled and have also installed OpenMPI on my machine.
I am using the following command:
mpirun -np 4 mdrun_mpi -v -s *.tpr

When I use the top command, I get:

  PID USER  PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+ COMMAND
22449 root  20   0  107m  59m  3152 R   25  2.9  0:05.42 mdrun_mpi
22450 root  20   0  107m  59m  3152 R   25  2.9  0:05.41 mdrun_mpi
22451 root  20   0  107m  59m  3152 R   25  2.9  0:05.41 mdrun_mpi
22452 root  20   0  107m  59m  3152 R   25  2.9  0:05.40 mdrun_mpi

Similarly, when I use mpirun -np 2 mdrun_mpi -v -s *.tpr, I get:

  PID USER  PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+ COMMAND
22461 root  20   0  108m  59m  3248 R   50  3.0  5:58.64 mdrun_mpi
22462 root  20   0  108m  59m  3248 R   50  3.0  5:58.56 mdrun_mpi

Looking at the %CPU column, each process gets roughly 100/(number of
processes) percent. Why is the CPU not 100% utilised?
Also, when I compare performance, it is significantly worse.
Please suggest how I can run one simulation on all the available nodes,
cores and threads.
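
A minimal sketch of spreading the ranks over the cluster with an OpenMPI
hostfile, so the processes do not all pile up on the local node. Host
names and slot counts are placeholders for an 8-node, 2-cores-per-node
cluster:

# hosts: one line per node, 2 slots (cores) each
node01 slots=2
node02 slots=2
# ... through node08
node08 slots=2

mpirun -np 16 --hostfile hosts mdrun_mpi -v -s topol.tpr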
Thanks in advance.


[gmx-users] Re: parallel simulation

2013-10-07 Thread pratibha kapoor
To add: I am running my simulations on the institute cluster with 8 nodes
(2 cores each).
Please suggest how I can run one simulation on all the available nodes,
cores and threads.
Thanks in advance.





[gmx-users] principal component analysis

2013-09-28 Thread pratibha kapoor
Dear all users

I would like to calculate PC loadings for various integrated factors, in
the form of the following sample table:

Integrated Factors            PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8    PC9   PC10
Total non polar surface area  0.60  -0.07  -0.76  -0.11   0.08   0.05  -0.16   0.08  -0.01   0.02
Native contacts               (some value in each column)
Content of helix              (some value in each column)
Average volume of cavity      (some value in each column)
Content of turns              (some value in each column)
I have done Cartesian-coordinate PCA using g_covar and g_anaeig (using
the -s and -f flags), in which I supplied my reference structure file
(which contains the atomic coordinates of the structure after
equilibration). This way I obtained the eigenvalues, eigenvectors and
principal components.
Now I would like to ask how I can dissect these principal components in
terms of the various properties listed above.
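
For context, one common recipe (not a built-in GROMACS tool): write out
the projection of the trajectory onto each PC with g_anaeig -proj,
compute each factor's per-frame time series separately, and take the
Pearson correlation between the two as the loading. A minimal NumPy
sketch; all file names are hypothetical, both inputs must cover the same
frames, and it assumes the projections end up one column per PC (adjust
the parsing to however your .xvg files are laid out):

import numpy as np

def load_xvg(fname):
    # Read an .xvg file, skipping #/@ header lines and '&' set breaks.
    rows = [line.split() for line in open(fname)
            if not line.startswith(('#', '@', '&'))]
    return np.array(rows, dtype=float)

# g_anaeig -proj output: column 0 is time, columns 1..n are the PCs
proj = load_xvg('proj.xvg')[:, 1:]

# One per-frame series per integrated factor, computed with other tools
factors = {
    'nonpolar_area':   load_xvg('npsa.xvg')[:, 1],
    'native_contacts': load_xvg('contacts.xvg')[:, 1],
}

# Loading of a factor on PC i = correlation(factor series, projection on PC i)
for name, series in factors.items():
    loadings = [np.corrcoef(series, proj[:, i])[0, 1]
                for i in range(proj.shape[1])]
    print(name, ' '.join('%6.2f' % v for v in loadings))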
Any help is appreciated.


[gmx-users] principal component analysis

2013-09-27 Thread pratibha kapoor
Dear all gmx users

I would like to calculate PC loadings for various integrated factors, in
the form of the following sample table:
Integrated factors            PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8    PC9   PC10
Total nonpolar surface area   0.60  -0.07  -0.76  -0.11  -0.06   0.11   0.05  -0.16   0.06  -0.02
Chain exposed area            0.92  -0.14  -0.05   0.12   0.23   0.20   0.09  -0.07  -0.03   0.08
Chain buried area             0.74  -0.30  -0.41   0.12   0.24   0.26   0.05   0.03  -0.06   0.21
Chain unpolar exposed area    0.71  -0.08  -0.63   0.02   0.03   0.28   0.00  -0.06   0.08  -0.03
Chain unpolar buried area     0.69  -0.23  -0.58   0.06   0.12   0.17  -0.10   0.20  -0.16  -0.01
Main chain B factor           0.12   0.77  -0.06   0.62  -0.01  -0.01  -0.08   0.01   0.05   0.04
Side chain B factor           0.07   0.74  -0.05   0.66  -0.01   0.01  -0.03   0.00   0.10   0.01
Whole chain B factor          0.09   0.75  -0.05   0.64  -0.01   0.00  -0.05   0.01   0.08   0.02
Average number of cavities   -0.02  -0.43   0.72  -0.09   0.34   0.10  -0.19   0.17   0.31   0.09
Average volume of cavity     -0.04   0.14   0.63  -0.47  -0.48   0.35   0.02   0.11  -0.01   0.06
Content of Helix              0.87   0.09   0.00  -0.41   0.17  -0.10  -0.09  -0.12   0.13   0.00
and so on..
Please suggest how I should proceed.
I have done Cartesian principal component analysis using
g_covar -f *.xtc -s *.tpr
and g_anaeig.
This way I obtained the principal components. But how can I dissect each
PC in terms of the integrated factors above?
Any help is appreciated.