Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Vikas Dubey
Hi,
I am also running my calculations on GPUs. If flat-bottom (FB) restraints are
computed only on the CPU, I am sorry, I was not aware of that.
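
(For what it's worth, a quick way to check whether the GPU path is involved
would be to force the non-bonded work onto the CPU for a short run -- a sketch,
with the file name and step count only illustrative:)

# short test run with non-bonded interactions computed on the CPU only
gmx mdrun -deffnm md_0 -nb cpu -nsteps 5000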

On 24 August 2017 at 17:55, Mark Abraham  wrote:

> Hi,
>
> Thanks. That should not be the problem, because all such computations are
> only on the CPU... But hopefully we will see.
>
> Mark
>
> On Thu, 24 Aug 2017 17:35 Leandro Bortot  wrote:
>
> > Hello all,
> >
> >  This may add something: I had a segmentation fault using flat-bottom
> > restraints with GPUs before. I just assumed that this type of restraint was
> > not supported by GPUs and moved to a CPU-only system.
> >  Sadly, it was some time ago and I don't have the files anymore.
> >
> > Best,
> > Leandro
> >
> >
> > On Thu, Aug 24, 2017 at 5:13 PM, Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > Thanks. Good lesson here - try simplifying until things work. That does
> > > suggest there is a bug in flat-bottomed position restraints. Can you please
> > > upload a tpr with those restraints, along with a report at
> > > https://redmine.gromacs.org, so we can reproduce and hopefully fix it?
> > >
> > > Mark
> > >
> > > On Thu, 24 Aug 2017 16:55 Vikas Dubey  wrote:
> > >
> > > > Hi,
> > > >
> > > > I have just checked with normal restraints; they work fine. The
> > > > simulation crashes with flat-bottom restraints.
> > > >
> > > > On 24 August 2017 at 16:43, Mark Abraham 
> > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Does it work if you just have the normal position restraints, or just
> > > > > have the flat-bottom restraints? In particular, I could imagine the
> > > > > latter are not widely used and might have a bug.
> > > > >
> > > > > Mark
> > > > >
> > > > > On Thu, Aug 24, 2017 at 4:36 PM Vikas Dubey  wrote:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > I have found out that position restraints are the issue in my FEP
> > > > > > simulation. As soon as I switch off position restraints, it works
> > > > > > fine. I have the following restraint file for the ions in my system
> > > > > > (I don't see any problems with it):
> > > > > >
> > > > > > [ position_restraints ]
> > > > > > ; atom  type  fx  fy  fz
> > > > > >    1   1     0   0   1000
> > > > > >    2   1     0   0   1000
> > > > > >    3   1     0   0   1000
> > > > > >    4   1     0   0   1000
> > > > > >    5   1     0   0   1000
> > > > > >    6   1     0   0   1000
> > > > > >    8   1     0   0   1000
> > > > > >    9   1     0   0   1000
> > > > > >   10   1     0   0   1000
> > > > > >   11   1     0   0   1000
> > > > > >   12   1     0   0   1000
> > > > > >   13   1     0   0   1000
> > > > > >   14   1     0   0   1000
> > > > > >   15   1     0   0   1000
> > > > > >   16   1     0   0   1000
> > > > > >   17   1     0   0   1000
> > > > > >   18   1     0   0   1000
> > > > > >   19   1     0   0   1000
> > > > > >   20   1     0   0   1000
> > > > > >   21   1     1000  1000  1000
> > > > > >
> > > > > > ;[ position_restraints ] ; flat bottom position restraints, here for potassium in site I
> > > > > > ;  type, g(8 for a cylinder), r(nm), k
> > > > > >    7   2     8   1   1000
> > > > > >
> > > > > >
> > > > > > On 22 August 2017 at 14:18, Vikas Dubey  >
> > > > wrote:
> > > > > >
> > > > > > > Hi, I use the following script for my cluster. Also, I think the
> > > > > > > problem is calculation-specific. I have run quite a few normal
> > > > > > > simulations and they work fine:
> > > > > > >
> > > > > > >
> > > > > > > #SBATCH --job-name=2_1_0
> > > > > > > #SBATCH --mail-type=ALL
> > > > > > > #SBATCH --time=24:00:00
> > > > > > > #SBATCH --nodes=1
> > > > > > > #SBATCH --ntasks-per-node=1
> > > > > > > #SBATCH --ntasks-per-core=2
> > > > > > > #SBATCH --cpus-per-task=4
> > > > > > > #SBATCH --constraint=gpu
> > > > > > > #SBATCH --output out.txt
> > > > > > > #SBATCH --error  err.txt
> > > > > > > #
> > > > > > > # load modules and run simulation
> > > > > > > module load daint-gpu
> > > > > > > module load GROMACS
> > > > > > > export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
> > > > > > > export CRAY_CUDA_MPS=1
> > > > > > >
> > > > > > > srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
> > > > > > > $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
> > > > > > >
> > > > > > > On 22 August 2017 at 06:11, Nikhil Maroli  >
> > > > wrote:
> > > > > > >
> > > > > > >> Okay, you might need to consider
> > > > > > >>
> > > > > > >> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm   -gpu_id XXX
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> 

Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Mark Abraham
Hi,

Does it work if you just have the normal position restraints, or just have
the flat-bottom restraints? In particular, I could imagine the latter are not
widely used and might have a bug.
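
One way to test the two restraint types independently (a sketch only - the
define names are illustrative, not from your topology) is to wrap each block in
its own #ifdef and switch them on or off from the .mdp define line:

#ifdef POSRES_IONS
[ position_restraints ]
; ai  funct  fx  fy  fz
   1  1      0   0   1000
#endif

#ifdef FLATBOT_IONS
[ position_restraints ]
; ai  funct  g  r   k
   7  2      8  1   1000
#endif

; in the .mdp, enable one or both:
; define = -DPOSRES_IONS -DFLATBOT_IONS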

Mark

On Thu, Aug 24, 2017 at 4:36 PM Vikas Dubey  wrote:

> Hi everyone,
>
> I have found out that position restraints are the issue in my FEP
> simulation. As soon as I switch off position restraints, it works fine. I
> have the following restraint file for the ions in my system (I don't
> see any problems with it):
>
> [ position_restraints ]
> ; atom  type  fx  fy  fz
>    1   1     0   0   1000
>    2   1     0   0   1000
>    3   1     0   0   1000
>    4   1     0   0   1000
>    5   1     0   0   1000
>    6   1     0   0   1000
>    8   1     0   0   1000
>    9   1     0   0   1000
>   10   1     0   0   1000
>   11   1     0   0   1000
>   12   1     0   0   1000
>   13   1     0   0   1000
>   14   1     0   0   1000
>   15   1     0   0   1000
>   16   1     0   0   1000
>   17   1     0   0   1000
>   18   1     0   0   1000
>   19   1     0   0   1000
>   20   1     0   0   1000
>   21   1     1000  1000  1000
>
> ;[ position_restraints ] ; flat bottom position restraints, here for potassium in site I
> ;  type, g(8 for a cylinder), r(nm), k
>    7   2     8   1   1000
>
>
> On 22 August 2017 at 14:18, Vikas Dubey  wrote:
>
> > Hi, I use the following script for my cluster. Also, I think the problem
> > is calculation-specific. I have run quite a few normal simulations and they
> > work fine:
> >
> >
> > #SBATCH --job-name=2_1_0
> > #SBATCH --mail-type=ALL
> > #SBATCH --time=24:00:00
> > #SBATCH --nodes=1
> > #SBATCH --ntasks-per-node=1
> > #SBATCH --ntasks-per-core=2
> > #SBATCH --cpus-per-task=4
> > #SBATCH --constraint=gpu
> > #SBATCH --output out.txt
> > #SBATCH --error  err.txt
> > #
> > # load modules and run simulation
> > module load daint-gpu
> > module load GROMACS
> > export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
> > export CRAY_CUDA_MPS=1
> >
> > srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
> > $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
> >
> > On 22 August 2017 at 06:11, Nikhil Maroli  wrote:
> >
> >> Okay, you might need to consider
> >>
> >> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm   -gpu_id XXX
> >>
> >>
> >>
> >> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun
> >> -performance.html
> >>
> >> http://www.gromacs.org/Documentation/Errors#There_is_no_
> >> domain_decomposition_for_n_nodes_that_is_compatible_with_the
> >> _given_box_and_a_minimum_cell_size_of_x_nm
> >
> >


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-24 Thread Vikas Dubey
Hi everyone,

I have found out that position restraints are the issue in my FEP
simulation. As soon as I switch off position restraints, it works fine. I
have the following restraint file for the ions in my system (I don't
see any problems with it):

[ position_restraints ]
; atom  type  fx  fy  fz
   1   1     0   0   1000
   2   1     0   0   1000
   3   1     0   0   1000
   4   1     0   0   1000
   5   1     0   0   1000
   6   1     0   0   1000
   8   1     0   0   1000
   9   1     0   0   1000
  10   1     0   0   1000
  11   1     0   0   1000
  12   1     0   0   1000
  13   1     0   0   1000
  14   1     0   0   1000
  15   1     0   0   1000
  16   1     0   0   1000
  17   1     0   0   1000
  18   1     0   0   1000
  19   1     0   0   1000
  20   1     0   0   1000
  21   1     1000  1000  1000

;[ position_restraints ] ; flat bottom position restraints, here for potassium in site I
;  type, g(8 for a cylinder), r(nm), k
   7   2     8   1   1000
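
(A side note on the last line, in case it helps anyone reading along: for
flat-bottomed position restraints the fields are atom index, function type 2,
geometry g, radius r in nm, and force constant k, so the entry above reads as
in the sketch below. The spelled-out comment is mine; please check it against
the manual for your GROMACS version.)

[ position_restraints ]
; ai  funct  g  r (nm)  k (kJ mol^-1 nm^-2)
   7  2      8  1.0     1000    ; funct 2 = flat-bottomed, g 8 = cylindrical restraint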


On 22 August 2017 at 14:18, Vikas Dubey  wrote:

> Hi, I use the following script for my cluster. Also, I think the problem is
> calculation-specific. I have run quite a few normal simulations and they
> work fine:
>
>
> #SBATCH --job-name=2_1_0
> #SBATCH --mail-type=ALL
> #SBATCH --time=24:00:00
> #SBATCH --nodes=1
> #SBATCH --ntasks-per-node=1
> #SBATCH --ntasks-per-core=2
> #SBATCH --cpus-per-task=4
> #SBATCH --constraint=gpu
> #SBATCH --output out.txt
> #SBATCH --error  err.txt
> #
> # load modules and run simulation
> module load daint-gpu
> module load GROMACS
> export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
> export CRAY_CUDA_MPS=1
>
> srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
> $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
>
> On 22 August 2017 at 06:11, Nikhil Maroli  wrote:
>
>> Okay, you might need to consider
>>
>> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm   -gpu_id XXX
>>
>>
>>
>> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun
>> -performance.html
>>
>> http://www.gromacs.org/Documentation/Errors#There_is_no_
>> domain_decomposition_for_n_nodes_that_is_compatible_with_the
>> _given_box_and_a_minimum_cell_size_of_x_nm
>
>


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-22 Thread Vikas Dubey
Hi, I use the following script for my cluster. Also, I think the problem is
calculation-specific. I have run quite a few normal simulations and they
work fine:


#SBATCH --job-name=2_1_0
#SBATCH --mail-type=ALL
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --ntasks-per-core=2
#SBATCH --cpus-per-task=4
#SBATCH --constraint=gpu
#SBATCH --output out.txt
#SBATCH --error  err.txt
#
# load modules and run simulation
module load daint-gpu
module load GROMACS
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export CRAY_CUDA_MPS=1

srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
$SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0

On 22 August 2017 at 06:11, Nikhil Maroli  wrote:

> Okay, you might need to consider
>
> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm   -gpu_id XXX
>
>
>
> http://manual.gromacs.org/documentation/5.1/user-guide/
> mdrun-performance.html
>
> http://www.gromacs.org/Documentation/Errors#There_is_
> no_domain_decomposition_for_n_nodes_that_is_compatible_with_
> the_given_box_and_a_minimum_cell_size_of_x_nm


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-21 Thread Nikhil Maroli
Okay, you might need to consider

gmx mdrun -v -ntmpi XX -ntomp XX -deffnm   -gpu_id XXX
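
For example, filled in (numbers purely illustrative - assuming a single node
with 2 GPUs and 24 cores; adjust to your hardware):

# 4 thread-MPI ranks with 6 OpenMP threads each, ranks mapped to GPU ids 0,0,1,1
gmx mdrun -v -ntmpi 4 -ntomp 6 -deffnm md_0 -gpu_id 0011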



http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html

http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-21 Thread Vikas Dubey
Hi,

That's exactly the problem. There is no error except a segmentation fault. I
have provided a link to the .log file below, in case that helps.



https://filetea.me/n3wNaRevmeUS8iFWrIs4UlHfQ
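
(One thing that sometimes helps when nothing is written to the log: allow core
dumps and rerun a short segment of the failing window, so there is at least a
backtrace to inspect. A sketch, assuming a bash environment; the step count is
illustrative:)

# allow core files, then rerun a short piece of the failing window
ulimit -c unlimited
gmx mdrun -deffnm md_0 -nsteps 1000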

On 21 August 2017 at 19:12, Nikhil Maroli  wrote:

> Hi,
>
> Where and what is the error? It is better to upload the file somewhere and
> provide a link here.
>


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-21 Thread Nikhil Maroli
Hi,

Where and what is the error? It is better to upload the file somewhere and
provide a link here.


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-21 Thread Vikas Dubey
Hi Michael,

* What does the logfile say that was output?

Answer: log file output while running on the PC (with the command gmx mdrun
-deffnm md_0 -nt 36):

Using GPU 8x8 non-bonded kernels

Removing pbc first time
Pinning threads with an auto-selected logical core stride of 1

Initializing Parallel LINear Constraint Solver

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess
P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 116-122
  --- Thank You ---  

The number of constraints is 36872
There are inter charge-group constraints,
will communicate selected coordinates each lincs iteration
9303 constraints are involved in constraint triangles,
will apply an additional matrix expansion of order 6 for couplings
between constraints inside triangles

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
  --- Thank You ---  


Linking all bonded interactions to atoms
There are 45357 inter charge-group virtual sites,
will an extra communication step for selected coordinates and forces

The initial number of communication pulses is: X 1 Y 1 Z 1
The initial domain decomposition cell size is: X 6.03 nm Y 3.02 nm Z 9.20 nm

The maximum allowed distance for charge groups involved in interactions is:
 non-bonded interactions   1.261 nm
(the following are initial values, they could change due to box deformation)
two-body bonded interactions  (-rdd)   1.261 nm
  multi-body bonded interactions  (-rdd)   1.261 nm
  virtual site constructions  (-rcon)  3.016 nm
  atoms separated by up to 7 constraints  (-rcon)  3.016 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1 Y 1 Z 1
The minimum size for domain decomposition cells is 1.261 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.21 Y 0.42 Z 0.14
The maximum allowed distance for charge groups involved in interactions is:
 non-bonded interactions   1.261 nm
two-body bonded interactions  (-rdd)   1.261 nm
  multi-body bonded interactions  (-rdd)   1.261 nm
  virtual site constructions  (-rcon)  1.261 nm
  atoms separated by up to 7 constraints  (-rcon)  1.261 nm


Making 3D domain decomposition grid 2 x 4 x 2, home cell index 0 0 0

Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  System

 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
  --- Thank You ---  
-



* What command are you using to run on multiple nodes?

I use the following script on the cluster; the last line is the run command.


#SBATCH --job-name=2_1_0
#SBATCH --mail-type=ALL
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --ntasks-per-core=2
#SBATCH --cpus-per-task=4
#SBATCH --constraint=gpu
#SBATCH --output out.txt
#SBATCH --error  err.txt
#
# load modules and run simulation
module load daint-gpu
module load GROMACS
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export CRAY_CUDA_MPS=1

srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c
$SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0


-


* What is the .mdp file?

My general .mdp file is similar to the one described here, apart from certain
changes for the protein-membrane system:


http://wwwuser.gwdg.de/~ggroenh/exercise_html/exercise1.html
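
For reference, the free-energy part of such an .mdp typically looks something
like the sketch below (the values are illustrative, not my actual settings):

; free-energy control (sketch only)
free-energy              = yes
init-lambda-state        = 0                       ; index of this window
fep-lambdas              = 0.0 0.25 0.5 0.75 1.0   ; 5 windows shown as an example; the real run uses 20
sc-alpha                 = 0.5                     ; soft-core parameters
sc-power                 = 1
nstdhdl                  = 100                     ; how often dH/dlambda is written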

---


* How many nodes are you running on?

The simulation runs fine on one node with 24 cores. I want to run each window
on maybe 2-3 nodes. I have also tried running the simulation on my desktop
using the "-nt" flag; it works fine up to -nt 30, and beyond that the
simulation crashes.


Re: [gmx-users] FEP calculations on multiple nodes

2017-08-21 Thread Michael Shirts
Significantly more information is needed to understand what happened.

* What does the logfile say that was output?
* What command are you using to run on multiple nodes?
* What is the .mdp file?
* How many nodes are you running on?
* What version of the program?

And so forth.
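
(For the version question, the version and build details are printed at the top
of every .log file; they can also be printed directly - a sketch:)

gmx mdrun -version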

On Mon, Aug 21, 2017 at 4:49 AM, Vikas Dubey 
wrote:

> Hi everyone,
>
> I am trying to run a FEP calculation with a system of ~25 particles. I
> have 20 windows and I am currently running my simulations on 1 node each.
> Since my system is big, I get just 2.5 ns per day. So I thought to run each
> of my windows on multiple nodes, but for some reason it crashes immediately
> after starting with an error:
>
>
> *Segmentation fault (core dumped)*
>
> Simulations run smoothly on one node; no error there. I tried to look at the
> file, but there was nothing written in it. Any help would be very much
> appreciated.
>
>
> Thanks,
> Vikas
>


[gmx-users] FEP calculations on multiple nodes

2017-08-21 Thread Vikas Dubey
Hi everyone,

I am trying to run a FEP calculation with a system of ~25 particles. I
have 20 windows and I am currently running my simulations on 1 node each.
Since my system is big, I get just 2.5 ns per day. So I thought to run each
of my windows on multiple nodes, but for some reason it crashes immediately
after starting with an error:


*Segmentation fault (core dumped)*

Simulations run smoothly on one node; no error there. I tried to look at the
file, but there was nothing written in it. Any help would be very much
appreciated.


Thanks,
Vikas