Hi everybody,
I tried to run a minimization of just the hydrogens of a membrane protein. I want to do this in vacuum.
But when I start the run with
mpirun mdrun_mpi -deffnm protein -v -nt 2
I get a segmentation fault. But when I only type
mpirun mdrun_mpi
there is no segmentation fault.
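For what it's worth, -nt controls thread-MPI threads and, as far as I know, is not meant for an mdrun_mpi binary started through an external MPI launcher; with such a build the rank count is normally given to mpirun instead. A minimal sketch (the -np value is just an example, not from the original post):

mpirun -np 2 mdrun_mpi -deffnm protein -v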
On 14/05/2012 3:52 PM, Anirban wrote:
Hi ALL,
I am trying to simulate a membrane protein system using the CHARMM36 FF on GROMACS 4.5.5 on a parallel cluster running on MPI. The system consists of around 117,000 atoms. The job runs fine on 5 nodes (5x12 = 120 cores) using mpirun and gives proper output. But whenever I try to submit it on
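For reference, a launch over those 120 cores would look roughly like the following; the scheduler/hostfile integration is omitted and the file names are placeholders, so treat this only as a sketch:

mpirun -np 120 mdrun_mpi -deffnm md -v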
Dear GROMACS users,
I am using the -rerun option of mdrun to read the coordinates of a trajectory and to compute the potential energy of a molecule during the MD. When this operation is performed in parallel, using mdrun_mpi, the energy of the bonded interactions is not computed. But using one core,
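A minimal single-rank rerun of the kind being described might look like this (the file names are placeholders, not taken from the post):

mpirun -np 1 mdrun_mpi -s topol.tpr -rerun traj.xtc -deffnm rerun
g_energy -f rerun.edr -o potential.xvg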
I used the option, but I still get the error:
/bin/sh ../../libtool --tag=CC --mode=compile mpCC -DHAVE_CONFIG_H -I.
-I../../src -I../../include
-DGMXLIBDIR=\/home/staff/sec/secdpal/soft/gromacs/share/top\
-I/home/staff/sec/secdpal/soft/include -O3 -qarch=ppc64 -qtune=pwr5 -c -o
vmdio.lo
Hi,
I tried giving this:
./configure --prefix=/home/soft/gromacs --host=ppc --build=ppc64
--enable-mpi --with-fft=fftw3 MPICC=mpcc CC=xlc CFLAGS=-O3 -qarch=450d
-qtune=450 CXX=mpixlC_r CXXFLAGS=-O3 -qarch=450d -qtune=450
and the configure process ran well, but when I ran make mdrun, I get an
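One thing worth checking in a configure line like the one above: multi-word settings such as CFLAGS normally need quoting, otherwise the shell splits them into separate arguments. A sketch of the same invocation with quoting added (everything else copied from the post):

./configure --prefix=/home/soft/gromacs --host=ppc --build=ppc64 \
  --enable-mpi --with-fft=fftw3 MPICC=mpcc CC=xlc \
  CFLAGS="-O3 -qarch=450d -qtune=450" \
  CXX=mpixlC_r CXXFLAGS="-O3 -qarch=450d -qtune=450"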
Hi users,
I am running mdrun_mpi on the cluster with the md.mdp parameters as follows:
; VARIOUS PREPROCESSING OPTIONS
title       = Position Restrained Molecular Dynamics
; RUN CONTROL PARAMETERS
constraints = all-bonds
integrator  = md
dt          = 0.002 ; 2 fs
nsteps      = 250 ; total 5000 ps.
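For context, an mdp like that is normally turned into a run input and launched roughly as follows (file names and the rank count are placeholders):

grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
mpirun -np 8 mdrun_mpi -deffnm md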
OK, solved it. Nothing wrong with the LSF/SLURM/MPI stuff. GMXRC.bash hadn't executed properly to set up the environment (a typo), so the shared libraries weren't being found. It seems this makes mdrun_mpi run like serial mdrun!
Oops.
Lee
On 4/18/2011 6:44 PM, Larcombe, Lee wrote: [message quoted above]
OK, but failing to pick up
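For anyone hitting the same symptom, the usual sanity check is to source GMXRC and confirm the MPI binary can resolve its shared libraries; the install prefix below is an assumption:

source /usr/local/gromacs/bin/GMXRC.bash
which mdrun_mpi
ldd $(which mdrun_mpi) | grep "not found"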
On 16/04/2011 12:13 AM, Larcombe, Lee wrote:
Hi gmx-users,
We have an HPC setup running HP-MPI and LSF/SLURM. GROMACS 4.5.3 has been compiled with MPI support.
The compute nodes on the system contain 2 x dual-core Xeons, which the system sees as 4 processors.
An LSF script called gromacs_run.lsf is as shown below:
#BSUB -N
#BSUB -J
Thanks Mark,
HP-MPI is configured correctly on the system - an HP XC 3000 with 800 cores. It works for all the other (non-GROMACS) users, and I've tested it: I can launch an MPI job which runs fine on the login node (two quad-core Xeons).
It seems to be an issue with the number of processors passed
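The script itself was cut off in the archive; purely as an illustration, a minimal LSF script for this kind of job might look like the following (queue, core count, paths and the HP-MPI launch syntax are all assumptions, not taken from the original post):

#BSUB -n 4
#BSUB -J gromacs_test
#BSUB -o gromacs_%J.out
source /usr/local/gromacs/bin/GMXRC.bash
mpirun -np 4 mdrun_mpi -deffnm md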
Hello,
In version 4.0.7, in the .ll file, I use the command line:
mpiexec mdrun_mpi -v -s topol.tpr
I get an error saying that mdrun_mpi is not recognized. I change it to mdrun and it works.
1) Is the command line above OK?
2) Does version 4.0.7 not need _mpi after commands?
Thanks,
D. Aghaie
delara aghaie wrote: [message quoted above]
Only if you have (1) compiled with MPI support and (2) named the resulting binary accordingly (e.g. with --program-suffix=_mpi).
Thank you, I have been to that page probably a good 100 times by now.
Was the 'No.' response with regard to my primary question, or to the one within the parentheses?
Suppose I remove my existing installation and reinstall; I am hoping to figure out when/where exactly I should specify:
./configure --enable-mpi --program-suffix=_mpi
make mdrun
make install-mdrun
make links
Sorry for the random asterisk symbols; they must have come through from some formatting.
On Wed, Jan 26, 2011 at 12:53 PM, Justin Kat justin@mail.mcgill.ca wrote: [message quoted above]
Alright. So that means I should have instead issued:
./configure --enable-mpi --program-suffix=_mpi
make mdrun
make install-mdrun
make links
to have installed an MPI-enabled executable called mdrun_mpi apart from the existing mdrun executable? (Would I also need to append the _mpi suffix when
Dear gmx users,
I have installed the parallel version 4.0.7 of GROMACS on one of the nodes of my cluster. Here are the steps I've done as root:
first, the normal installation:
./configure
make
make install
make links
then I issued the commands below for the MPI build:
./configure
Thank you for the reply!
Hmm, mdrun_mpi does not appear in the list of executables in /usr/local/gromacs/bin (and therefore not in /usr/local/bin).
Which set of installation commands that I used should have compiled the mdrun_mpi executable? And how should I go about getting the mdrun_mpi
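Pulling the pieces of this thread together, the usual two-pass build for the 4.0.x series looks roughly like the sketch below; the install prefix and the make distclean step between the two passes are assumptions on my part, the rest follows the commands already quoted above:

# first pass: serial tools and mdrun
./configure --prefix=/usr/local/gromacs
make
make install
make links

# second pass: MPI-enabled mdrun only, installed as mdrun_mpi
make distclean
./configure --prefix=/usr/local/gromacs --enable-mpi --program-suffix=_mpi
make mdrun
make install-mdrun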
Hi,
you can check with
ldd mdrun_mpi
whether all needed libraries were really found. Is libimf.so in /opt/intel/fc/10.1.008/lib? The Intel compilers also come with files called iccvars.sh or ictvars.sh. If you do
source /path/to/iccvars.sh
everything should be set as needed. Check the Intel
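If sourcing the Intel script is not convenient, the library directory can also be put on the search path directly; the path below is the one mentioned above and may differ on other installations:

export LD_LIBRARY_PATH=/opt/intel/fc/10.1.008/lib:$LD_LIBRARY_PATH
ldd $(which mdrun_mpi) | grep libimf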
- Original Message -
From: zhongjin zhongjin1...@yahoo.com.cn
Date: Thursday, July 8, 2010 18:53
Subject: [gmx-users] mdrun_mpi: error while loading shared libraries:
libimf.so: cannot open shared object file: No such file or directory
To: gmx-users@gromacs.org
- Original Message -
From: quantrum75 quantru...@yahoo.com
Date: Wednesday, June 30, 2010 7:12
Subject: [gmx-users] mdrun_mpi issue.
To: gmx-users@gromacs.org
Hi Folks,
I am trying to run a simulation under GMX 4.0.5. When
The user reported it was a problem with the input file.
2009/4/15 annalisa bordogna annalisa.bordo...@gmail.com:
Hi,
I received a similar error during an equilibration by steepest descent in which I had placed constraints on the water, leaving the protein free to move.
I suggest checking your mdp file...
The process crashes around 100 steps out of the 1000 requested.
nam kim wrote:
process crashes around 100 steps out of 1000 requested.
Fine, but you still haven't answered my question. Do you receive any other
messages?
Do other systems run on the specific hardware you're using? You may just have
some instability in this particular system that is
I get a segmentation fault error while running mdrun_mpi (GROMACS 4.0.4).
I installed GROMACS 4.0.4 two months ago and it has been working fine.
Today, I just got segmentation errors. Rebooting does not help much.
Here is the log:
[rd:06790] *** Process received signal ***
[d:06790] Signal: Segmentation
Hi all,
I have an issue doing parallel runs where the simulation just hangs at seemingly random intervals, anywhere from an hour to a day in.
There are no error messages reported in the logs and nothing funny from dmesg.
My setup is two dual-core Pentium Ds. I run with -np 4 to take