[gmx-users] Principal component analysis data manipulation from GROMACS

2019-03-13 Thread SGR160055 Student
To whom it may concern,

I'm trying to analyse the correlation of motions between two variants of a
protein (wild-type and mutant).

Is it possible to use R to visualize the principal components?

If so, which simulation output files contain the information needed to plot
the PCA for the two proteins?
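
For reference, the projections behind such a plot are usually produced with
the GROMACS PCA tools and written as plain-text .xvg files that R can read
directly. A minimal sketch, with file names as illustrative assumptions:

# Build the covariance matrix and eigenvectors from the wild-type run
gmx covar -s wt.tpr -f wt.xtc -o eigenval.xvg -v eigenvec.trr
# Project both trajectories onto the first two principal components
# (assumes both runs use the same atom selection)
gmx anaeig -s wt.tpr -f wt.xtc -v eigenvec.trr -first 1 -last 2 -2d proj_wt.xvg
gmx anaeig -s wt.tpr -f mut.xtc -v eigenvec.trr -first 1 -last 2 -2d proj_mut.xvg

The .xvg output is plain text; strip the lines starting with '#' and '@'
before reading it into R.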

Your help is much appreciated. Thank you.



Best regards,
Syah


[gmx-users] Energy Conservation Conundrum

2019-03-13 Thread Kruse, Luke E.(MU-Student)
Hello Everyone,


I am a new GROMACS user and I am having trouble with energy conservation in 
my simulation. After producing the (solvated and neutralized) configuration 
and the corresponding topology files, I performed an energy minimization that 
converged to single-precision machine accuracy.


After the minimization, I wanted to ensure that the system conserves 
energy, so I performed a brief run with no thermostat or barostat in place, 
constraining only the H-bonds with the LINCS algorithm (lincs-order = 4 and 
lincs-iter = 4). The energy is still not conserved: I have tried decreasing 
the timestep from 2 fs to 1 fs and even 0.1 fs, to no avail. Any 
recommendations?
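
For reference, a minimal sketch of such an NVE conservation test; the .mdp
values and file names below are illustrative assumptions, not a prescription:

cat > nve-test.mdp <<'EOF'
integrator              = md
dt                      = 0.001     ; 1 fs
nsteps                  = 100000    ; 100 ps
tcoupl                  = no        ; no thermostat
pcoupl                  = no        ; no barostat
constraints             = h-bonds
constraint-algorithm    = lincs
lincs-order             = 4
lincs-iter              = 2         ; increase if constraint accuracy is a concern
verlet-buffer-tolerance = 1e-4      ; tighter pair-list buffer for conservation tests
EOF
gmx grompp -f nve-test.mdp -c em.gro -p topol.top -o nve-test.tpr
gmx mdrun -deffnm nve-test
gmx energy -f nve-test.edr -o nve-energy.xvg   # select Total-Energy and inspect the drift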


Warm regards,

Luke


Re: [gmx-users] gromacs performance

2019-03-13 Thread Szilárd Páll
Hi,

First off, please post full log files; they contain much more than the
excerpts you pasted.

Secondly, for parallel, multi-node runs this hardware is simply too GPU-dense
to achieve a good CPU-GPU load balance, and scaling will be hard in most
cases; the details depend on the input systems and settings (information we
would see in the full log).

Lastly, running a decomposition with one rank per core is generally
inefficient with GPUs; typically 2-3 ranks per GPU are ideal (though in this
case the CPU-GPU load balance may be the stronger bottleneck).
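
As a rough illustration only (the rank/thread counts are assumptions for one
of the 8-GPU, 64-logical-core nodes and would need tuning against the full
log), something along these lines is closer to the suggested layout than one
rank per core:

mpirun -np 16 /shared/gromacs/5.1.5/bin/gmx_mpi mdrun -ntomp 4 -deffnm step6.1_equilibration

i.e. 16 PP ranks (2 per GPU) with 4 OpenMP threads each.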

Cheers,
--
Szilárd


On Fri, Mar 8, 2019 at 11:12 PM Carlos Rivas  wrote:

> Hey guys,
> Is anybody running GROMACS on AWS?
>
> I have a strong IT background, but zero understanding of GROMACS or
> OpenMPI (and even less of using SGE on AWS). I'm just trying to help some
> PhD folks with their work.
>
> When I run GROMACS using thread-MPI on a single, very large node on AWS,
> things work fairly fast.
> However, when I switch from thread-MPI to OpenMPI, even though everything
> is detected properly, the performance is horrible.
>
> This is what I am submitting to sge:
>
> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh
> #!/bin/bash
> #
> #$ -cwd
> #$ -j y
> #$ -S /bin/bash
> #$ -e out.err
> #$ -o out.out
> #$ -pe mpi 256
>
> cd /shared/charmm-gui/gromacs
> touch start.txt
> /bin/bash /shared/charmm-gui/gromacs/run_eq.bash
> touch end.txt
>
> And this is my test script, provided by one of the doctors:
>
> ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash
> #!/bin/bash
> export GMXMPI="/usr/bin/mpirun --mca btl ^openib
> /shared/gromacs/5.1.5/bin/gmx_mpi"
>
> export MDRUN="mdrun -ntomp 2 -npme 32"
>
> export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"
>
> for comm in min eq; do
> if [ $comm == min ]; then
>echo ${comm}
>$GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr -c
> step5_charmm2gmx.pdb -p topol.top
>$GMXMPI $MDRUN -deffnm step6.0_minimization
>
> fi
>
> if [ $comm == eq ]; then
>   for step in `seq 1 6`;do
>echo $step
>if [ $step -eq 1 ]; then
>   echo ${step}
>   $GMX grompp -f step6.${step}_equilibration.mdp -o
> step6.${step}_equilibration.tpr -c step6.0_minimization.gro -r
> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>   $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
>fi
>if [ $step -gt 1 ]; then
>   old=`expr $step - 1`
>   echo $old
>   $GMX grompp -f step6.${step}_equilibration.mdp -o
> step6.${step}_equilibration.tpr -c step6.${old}_equilibration.gro -r
> step5_charmm2gmx.pdb -n index.ndx -p topol.top
>   $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
>fi
>   done
> fi
> done
>
>
>
>
> In the output I see the following, and I get really excited expecting
> blazing speeds, and yet it's much worse than on a single node:
>
> Command line:
>   gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization
>
>
> Back Off! I just backed up step6.0_minimization.log to
> ./#step6.0_minimization.log.6#
>
> Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible
> GPUs
>   Cores per node:   32
>   Logical cores per node:   64
>   Compatible GPUs per node:  8
>   All nodes have identical type(s) of GPUs
> Hardware detected on host ip-10-10-5-89 (the node of MPI rank 0):
>   CPU info:
> Vendor: GenuineIntel
> Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
> SIMD instructions most likely to fit this hardware: AVX2_256
> SIMD instructions selected at GROMACS compile time: AVX2_256
>   GPU info:
> Number of GPUs detected: 8
> #0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #2: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #3: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #4: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #5: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #6: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
> #7: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat:
> compatible
>
> Reading file step6.0_minimization.tpr, VERSION 5.1.5 (single precision)
> Using 256 MPI processes
> Using 2 OpenMP threads per MPI process
>
> On host ip-10-10-5-89 8 compatible GPUs are present, with IDs
> 0,1,2,3,4,5,6,7
> On host ip-10-10-5-89 8 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 56 PP ranks in this node:
> 0,0,0,0,0,0,0,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4,5,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7
>
>
>
> Any suggestions? Greatly appreciate the help.
>
>
> Carlos J. Rivas
> Senior AWS Solutions Architect - Migration Specialist
>

Re: [gmx-users] Gromacs 2019 build on SLES12 and CentOS

2019-03-13 Thread Szilárd Páll
Hi,

I assume the timeout does not happen with non-MPI builds, right? Have you
tried a different MPI flavor?
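
For reference, the failing test can also be re-run on its own from the build
directory with verbose output and a longer timeout (the timeout value here is
an arbitrary assumption):

cd build
ctest -R GmxPreprocessTests -V --timeout 600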

Cheers,
--
Szilárd


On Mon, Mar 11, 2019 at 10:54 AM Nelson Chris AWE 
wrote:

> Hi All,
> I've built Gromacs 2019 on both a CentOS 7 machine and a SLES12 machine.
> Built using gcc@7.2.0
> Dependencies:
> fftw@3.3.8
> openmpi@3.1.3
>
> When running "make check" on both machines, I'm getting the same timeout
> error for test 29 (see below for an extract; the full test output is
> attached). Has anyone got any ideas?
>
> Thanks,
> Chris.
>
> ===
> 29/40 Test #29: GmxPreprocessTests ...***Timeout  30.15 sec
> [==] Running 26 tests from 4 test cases.
> [--] Global test environment set-up.
> [--] 4 tests from GenconfTest
> [ RUN  ] GenconfTest.nbox_Works
> [   OK ] GenconfTest.nbox_Works (186 ms)
> [ RUN  ] GenconfTest.nbox_norenumber_Works
> [   OK ] GenconfTest.nbox_norenumber_Works (92 ms)
> [ RUN  ] GenconfTest.nbox_dist_Works
> [   OK ] GenconfTest.nbox_dist_Works (372 ms)
> [ RUN  ] GenconfTest.nbox_rot_Works
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> center of geometry: 1.733667, 1.477000, 0.905167
> [   OK ] GenconfTest.nbox_rot_Works (471 ms)
> [--] 4 tests from GenconfTest (1121 ms total)
>
> [--] 5 tests from InsertMoleculesTest
> [ RUN  ] InsertMoleculesTest.InsertsMoleculesIntoExistingConfiguration
> Reading solute configuration
> Reading molecule configuration
> Initialising inter-atomic distances...
>
> WARNING: Masses and atomic (Van der Waals) radii will be guessed
>  based on residue and atom names, since they could not be
>  definitively assigned from the information in your input
>  files. These guessed numbers might deviate from the mass
>  and radius of the atom type. Please check the output
>  files if necessary.
>

Re: [gmx-users] Fwd: Probability of number of atomic contacts

2019-03-13 Thread Mahsa
Hi,

When I use gmx analyze -dist on the number-of-contacts file generated by gmx
mindist, I get the probability distribution of the number of contacts. So I
think that, with an appropriate index file and cut-off distance, this
approach gives me the probability distribution of the first-shell
coordination number for specific groups over the simulation time. Is this
correct? I ask because the coordination number from gmx rdf gives only the
average number of particles within a distance r.
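
For context, a sketch of the workflow described above; the group selections,
cut-off and file names are illustrative assumptions:

# Count contacts between the two chosen groups within a 0.3 nm cut-off
gmx mindist -f traj.xtc -s topol.tpr -n index.ndx -group -d 0.3 -on numcont.xvg
# Turn the time series of contact counts into a probability distribution
gmx analyze -f numcont.xvg -dist numcont_dist.xvg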

Best regards,
Mahsa

On Fri, Mar 1, 2019 at 12:11 PM Mahsa  wrote:

> Hi Mark,
>
> Thank you for your reply! Actually, I should clarify my last post because
> it seems that I repeated my question :-)
> When I use the tool that Justin suggested, gmx analyze -dist, on the
> generated file from gmx mindist for the number of contact, I can get the
> probability of number of contacts. So I think if an appropriate index file
> and cut off distance are defined by using this approach, I can get the
> probability of the first shell coordination number for specific groups
> during the simulation time. Is this correct? because the coordination
> number from gmx rdf gives the average number of particles within a distance
> r.
>
> Best regards,
> Mahsa
>
> On Fri, Mar 1, 2019 at 5:04 AM Mark Abraham 
> wrote:
>
>> Hi,
>>
>> Which GROMACS tools documentation did you check to see whether it can be
>> done already? :-) I don't know the answer, but that's where to start!
>>
>> Mark
>>
>> On Thu., 28 Feb. 2019, 10:43 Mahsa,  wrote:
>>
>> > Thank you very much for your comments!
>> >
>> > How would it be possible to get probability vs. number of contacts or
>> > probability vs coordination number averaged across the simulations? Can
>> it
>> > be done directly with Gromacs tools or I need some scripts for that?
>> >
>> > Best regards,
>> > Mahsa
>> >
>> > On Thu, Feb 28, 2019 at 7:17 PM Justin Lemkul  wrote:
>> >
>> > >
>> > >
>> > > On 2/28/19 8:59 AM, Mahsa wrote:
>> > > > Hi Justin,
>> > > >
>> > > > Could you please comment on my questions in the previous post?
>> > > >
>> > > > Best regards,
>> > > > Mahsa
>> > > >
>> > > > -- Forwarded message -
>> > > > From: Mahsa 
>> > > > Date: Sun, Feb 17, 2019 at 2:36 PM
>> > > > Subject: Re: [gmx-users] Probability of number of atomic contacts
>> > > > To: 
>> > > >
>> > > >
>> > > > Thank you very much, Justin!
>> > > >
>> > > > I tried this command:
>> > > >
>> > > > gmx_seq analyze -f numcont.xvg -dist num_dist.xvg
>> > > >
>> > > >
>> > > > and I got a histogram. Now the number of contacts between the ion
>> and
>> > the
>> > > > polymer is between 160-180. I just want to be sure if I am doing
>> this
>> > > > analysis correct. When I use gmx mindist, from the index file I
>> choose
>> > a
>> > > > group of the ion (including 46 ions) and then the polymer group (all
>> > > > polymer chains in the box).  I think maybe instead of choosing all
>> > ions,
>> > > I
>> > > > should only select one of them and get the number of contact with
>> the
>> > > > polymers but then since I have 46 of this ion in the simulation box,
>> > can
>> > > it
>> > > > be a good representative of the whole system? If not, what else can
>> I
>> > do
>> > > in
>> > > > this case?
>> > >
>> > > I don't see any point in doing per-ion analysis. You already have the
>> > > answer you want with respect to contacts between the two species.
>> > > Choosing one ion isn't necessarily going to be representative, either.
>> > >
>> > > > Besides, it is mentioned in the Gromacs manual, that if we use the
>> > > > -group option
>> > > > a contact of an atom in another group with multiple atoms in the
>> first
>> > > > group is counted as one contact instead of as multiple contacts. I
>> want
>> > > to
>> > > > count all contact with a polymer chain as 1 contact and check the
>> > number
>> > > of
>> > > > contacts with different polymer chains so by using -group and having
>> > the
>> > > > ion as the first group and polymer as the second group from the
>> index
>> > > file,
>> > > > can I get this?
>> > >
>> > > This option is primarily used to avoid over-counting, e.g. the
>> > > interaction between an ion and carboxylate oxygens will not be counted
>> > > as two contact if each ion-oxygen distance satisfies the criterion;
>> it's
>> > > just one.
>> > >
>> > > > The last question, can I do the same approach to get the
>> distribution
>> > of
>> > > > coordination number for the first coordination shell of ions and
>> > special
>> > > > atoms of the polymers?
>> > >
>> > > You can calculate coordination number by integrating an RDF.
>> > >
>> > > -Justin
>> > >
>> > > --
>> > > ==
>> > >
>> > > Justin A. Lemkul, Ph.D.
>> > > Assistant Professor
>> > > Office: 301 Fralin Hall
>> > > Lab: 303 Engel Hall
>> > >
>> > > Virginia Tech Department of Biochemistry
>> > > 340 West Campus Dr.
>> > > Blacksburg, VA 24061
>> > >
>> > > jalem...@vt.edu | (540) 231-3129
>> > > 

Re: [gmx-users] Problem with cuda toolkit

2019-03-13 Thread Mark Abraham
Hi,

If that was your cmake command, then CUDA_TOOLKIT_ROOT_DIR would not have
that value, and things should have just worked with nvcc where you report
it is located. But using cmake -DCUDA_TOOLKIT_ROOT_DIR=/usr will let normal
path searching find that nvcc is in the bin directory.
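
A sketch of what that could look like from a clean build directory (the other
flags simply mirror the configuration quoted below):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON \
      -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr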

Distributions like Fedora can have other issues, such as the distribution's
binutils package not yet being installed.

Mark

On Wed, 13 Mar 2019 at 06:48 Shahrokh Nasseri (PhD) 
wrote:

> Dear GROMACS developers and users,
>
> I want to install GROMACS on Fedora Core 25.
>
> My computer specifications are as follows:
>
> (base) [sn@localhost ~]$ which nvcc
> /usr/bin/nvcc
> (base) [sn@localhost ~]$ nvcc --version
> nvcc: NVIDIA (R) Cuda compiler driver
> Copyright (c) 2005-2016 NVIDIA Corporation
> Built on Tue_Jan_10_13:22:03_CST_2017
> Cuda compilation tools, release 8.0, V8.0.61
> (base) [sn@localhost ~]$ nvidia-smi
> Wed Mar 13 08:09:56 2019
>
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 378.13                 Driver Version: 378.13                     |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  GeForce GTX 650 Ti  Off  | :01:00.0         N/A |                  N/A |
> | 20%   32C    P8    N/A /  N/A |    220MiB /   980MiB |      N/A     Default |
> +-------------------------------+----------------------+----------------------+
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU   PID  Type  Process name                                   Usage      |
> |=============================================================================|
> |    0               Not Supported                                            |
> +-----------------------------------------------------------------------------+
> (base) [sn@localhost ~]$
>
> For installation, I enter the following commands:
> (base) [sn@localhost ~]$ wget
> ftp://ftp.gromacs.org/pub/gromacs/gromacs-2019.1.tar.gz
> ...
> 2019-03-12 18:17:18 (308 KB/s) - ‘gromacs-2019.1.tar.gz’ saved [33435278]
> (base) [sn@localhost ~]$ tar -xzf gromacs-2019.1.tar.gz
> (base) [sn@localhost ~]$ mv gromacs-2019.1.tar.gz gromacs-2019.1
> (base) [sn@localhost ~]$ cd ./gromacs-2019.1/
> (base) [sn@localhost gromacs-2019.1]$ cmake --version
> cmake version 3.6.2
>
> (base) [sn@localhost gromacs-2019.1]$ mkdir build
> (base) [sn@localhost gromacs-2019.1]$ mkdir bin
> (base) [sn@localhost gromacs-2019.1]$ cd build/
> (base) [sn@localhost build]$ ccmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON
> ...
>  Page 1 of 1
>  BUILD_SHARED_LIBSON
>  BUILD_TESTINGON
>  CMAKE_BUILD_TYPE Release
>  CMAKE_INSTALL_PREFIX /home/sn/gromacs-2019.1/bin
>  CMAKE_PREFIX_PATH
>  CUDA_TOOLKIT_ROOT_DIR/usr/bin --->
> Note: I pointed this to the location of nvcc
>  GMXAPI   OFF
>  GMX_BUILD_MDRUN_ONLY OFF
>  GMX_CLANG_CUDA   OFF
>  GMX_DEFAULT_SUFFIX   ON
>  GMX_DOUBLE   OFF
>  GMX_ENABLE_CCACHEOFF
>  GMX_EXTERNAL_TNG OFF
>  GMX_EXTERNAL_ZLIBOFF
>  GMX_FFT_LIBRARY  fftw3
>  GMX_GPU  ON
>  GMX_HWLOCAUTO
>  GMX_MIMICOFF
>  GMX_MPI  OFF
>  GMX_OPENMP   ON
>  GMX_QMMM_PROGRAM None
>  GMX_SIMD AUTO
>  GMX_THREAD_MPI   ON
>  GMX_USE_OPENCL   OFF
>  GMX_USE_TNG  ON
>  GMX_X11  OFF
>  ImageMagick_EXECUTABLE_DIR  /usr/bin
> ...
> Press [c] to configure
> ...
> I get the following error message:
> ...
>  CMake Error at cmake/gmxManageNvccConfig.cmake:194 (message):
>CUDA compiler does not seem to be functional.
>  Call Stack (most recent call first):
>cmake/gmxManageGPU.cmake:204 (include)
>CMakeLists.txt:590 (gmx_gpu_setup)
>
> How should I define "CUDA_TOOLKIT_ROOT_DIR"?
> Please guide me,
> Shahrokh
>
>
>
>
> Shahrokh Nasseri
>
> Assistant Professor of Medical Physics
>
> Department and Research Center of Medical Physics
>
> Mashhad University of Medical Sciences
>
> Mashhad, Iran
>
> Tel: +98 513 8002328
> Fax: +98 513 8002 320
> e-mail: naser...@mums.ac.ir, shahrokh.nass...@gmail.com

Re: [gmx-users] gmx trjconv trr xtc

2019-03-13 Thread Mark Abraham
Hi,

XTC is a reduced-precision format for coordinates. This is fine for most
use cases, because the difference between 2.012 nm and 2.013 nm is dwarfed
by the other modelling approximations (fixed point charges! short
sampling!). It gives you the advantage of using less disk space and spending
less time handling files. Further, it leverages the knowledge that, e.g. in a
water molecule, the three atoms are not in arbitrary locations, but can be
stored efficiently as a position and two deltas. See
http://manual.gromacs.org/documentation/current/reference-manual/file-formats.html#xtc
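
If there is any doubt, the two files can be compared, and the .xtc precision
raised, along these lines (a sketch; the flag values are illustrative
assumptions):

# Compare the two trajectories frame by frame
gmx check -f out.last.5ns.trr -f2 out.last.5ns.xtc
# Keep more decimal places in the compressed output (larger file)
gmx trjconv -f out.last.5ns.trr -o out.last.5ns.highprec.xtc -ndec 5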

Mark

On Tue, 12 Mar 2019 at 00:57 Alex  wrote:

> Dear GROMACS users,
> My system contains two molecule types, A and B, in water. Using
> gmx trjconv -f out.xtc -o out.last.5ns.trr, I first extracted the last 5 ns
> of the system's XTC file as a TRR file and selected only the non-water
> contents, so the TRR file contains only A and B. The TRR file is 4.4 GB.
> Then I ran "gmx trjconv -pbc cluster -center yes -f
> out.last.5ns.trr -o out.last.5ns.xtc", again choosing only the non-water
> group. I was expecting out.last.5ns.trr and out.last.5ns.xtc to have the
> same size of 4.4 GB, since the contents of both are just A and B;
> however, out.last.5ns.xtc is only 1.4 GB!
> Is that normal? I only care about the coordinates over time, not the
> velocities. Do you think I am losing any coordinates in the second step,
> going from out.last.5ns.trr to out.last.5ns.xtc?
>
> Regards
> Alex


[gmx-users] Steps for executing Markov state Modelling (MSM) analysis

2019-03-13 Thread Soham Sarkar
Dear all,
   I have a protein trajectory in xtc format. I want to do an MSM
analysis on this trajectory to see how the process evolves and to identify
the metastable states. I have followed the video series on MSM by Frank Noé
and his team, but it is not clear to me how to start. I have some questions:
1) Which Python packages do I need to install?
2) How should I start?
3) What kind of data have they generated?
Any introductory steps for MSM analysis, links, or hands-on tutorial videos
would be highly appreciated.
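
For reference, a minimal sketch of one common starting point; the package
choice is an assumption based on the Noé group's tutorials, and a working
Python/conda environment is assumed:

# MSM estimation, featurization and trajectory I/O
pip install pyemma mdtraj
# or, with conda:
conda install -c conda-forge pyemma mdtraj

The PyEMMA documentation also provides step-by-step notebooks covering
featurization, discretization, MSM estimation and identification of
metastable states.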
- Soham