[gmx-users] gromacs installation in IBM BLUEGENE

2007-11-28 Thread Anupam Nath Jha

Dear all

I've tried compiling both version 3.3.1 and 3.3.2 of GROMACS on an IBM Blue
Gene/L system but get the same error message with each version.

To compile GROMACS I used the following configure flags:

./configure \
  --enable-mpi \
  --enable-float \
  --program-suffix=_mpi \
  --prefix=/home/phd/04/mbuanjha/soft/gmx332 \
  --enable-type-prefix \
  --enable-all-static \
  --disable-nice \
  --disable-shared \
  --disable-threads \
  --without-motif \
  --without-x \
  --without-malloc \
  --with-fft=fftw3 \
  --enable-ppc-altivec \
  --build=i686-linux-gnu \
  --host=powerpc \
  CC=/bgl/BlueLight/ppcfloor/bglsys/bin/mpixlc \
  F77=/bgl/BlueLight/ppcfloor/bglsys/bin/mpixlf77 \
  CXX=/bgl/BlueLight/ppcfloor/bglsys/bin/mpixlcxx \
  MPICC=/bgl/BlueLight/ppcfloor/bglsys/bin/mpixlc \
  CFLAGS="-O3 -qarch=440d -qtune=440 -qhot" \
  FFLAGS="-O3 -qarch=440d -qtune=440 -qhot" \
  CXXFLAGS="-O3 -qarch=440d -qtune=440 -qhot" \
  LDFLAGS="-L/home/phd/04/mbuanjha/soft/fftw312/lib/ -L/home/phd/04/mbuanjha/soft/MYINC" \
  CPPFLAGS="-I/home/phd/04/mbuanjha/soft/fftw312/include/ -I/home/phd/04/mbuanjha/soft/MYINC" \
  LIBS="-lmpich.rts -lmsglayer.rts -lrts.rts -ldevices.rts -lcxxmpich.rts -lnsl"



While running configure I got a few warnings, such as:

configure:26419: WARNING: Couldn't find XDR headers and/or libraries - using our
own
configure:26428: checking for working memcmp
configure:26517: result: no

but when I run 'make mdrun', it gives an error:

(cd ./src/gmxlib && make ; exit 0)
make[1]: Entering directory 
`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib'
Making all in nonbonded
make[2]: Entering directory
`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib/nonbonded'
Making all in nb_kernel
make[3]: Entering directory
`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib/nonbonded/nb_kernel'
rm -f kernel-stamp
./mknb   -software_invsqrt
 Gromacs nonbonded kernel generator (-h for help)
 Generating single precision functions in C.
 Using Gromacs software version of 1/sqrt(x).
Error: Cannot open nb_kernel010_c.c for writing.
make[3]: *** [kernel-stamp] Error 1
make[3]: Leaving directory
`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib/nonbonded/nb_kernel'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory
`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib/nonbonded'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib'
(cd ./src/mdlib && make ; exit 0)
make[1]: Entering directory `/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/mdlib'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/mdlib'
(cd ./src/kernel && make mdrun ; exit 0)
make[1]: Entering directory 
`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/kernel'
make[1]: *** No rule to make target `../gmxlib/libgmx_mpi.la', needed by
`mdrun'.  Stop.
make[1]: Leaving directory `/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/kernel'


and the build stops there.

thanks in advance
anupam



-- 
Science is facts; just as houses are made of stone, so is science made of
facts; but a pile of stones is not a house, and a collection of facts is not
necessarily science.

Anupam Nath Jha
Ph. D. Student
Saraswathi Vishveshwara Lab
Molecular Biophysics Unit
IISc,Bangalore-560012
Karnataka
Ph. no.-22932611





Re: [gmx-users] Gromacs slow and crashes on Leopard.

2007-11-28 Thread Carsten Kutzner
Hadas Leonov wrote:
 After installing with openmpi - I ran some benchmarks for 4 processors
 on Mac-Pro:
 d.villin:   
 Leopard performance:  13714 ps/day
 old OS performance:41143 ps/day.
 gmx-benchmark :  48000 ps/day.
 
 d.poly-ch2
 Leopard performance:  8640 ps/day
 old OS performance:18000 ps/day
 gmx-benchmark:20571 ps/day
 
 old OS refers to OSX 10.4.9.
 The slow speed also happens when running on only one CPU. d.villin ran
 6 times slower than usual. So it can't just be Open MPI's fault, can it?
 
 Can it be due to compiling gromacs while disabling ia32 optimization?
Hi Hadas,

You are probably running the C inner loops now. You will want not only to
disable the Itanium inner loops, but also to enable the x86 inner loops.
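
For example, something along these lines (only a sketch; check
./configure --help for the exact names of the loop-related options in your
Gromacs version):

  make distclean
  ./configure --enable-mpi --enable-x86-64-sse [your other options]
  make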

Carsten


Re: [gmx-users] pdb2gmx error

2007-11-28 Thread Tsjerk Wassenaar
Hi Tawhid,

The file you got was in the format used by the _program_ GROMOS, but this
format is not supported by pdb2gmx/Gromacs. The GROMOS _forcefield_ is a set
of equations and parameters, which stands apart from the formatting of the
building blocks. If you had taken a bit of effort to look at the other
building block definitions, you would have seen the differences. Also, the
fact that you got complaints about '#' should have been an indication that
you were doing something wrong. Regarding the building blocks (the .rtp
file), read Chapter 5 (notably 5.5). This should help you convert the
GROMOS-format building block into a Gromacs-format one.
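
For orientation, a building block in a Gromacs .rtp file has this overall
shape (a bare skeleton only; the residue name, atom names, types and charges
below are placeholders, not a working GTP entry):

[ XXX ]
 [ atoms ]
;  name   type    charge   charge group
   CA     CH1      0.000   0
   N      NR      -0.280   0
   ...
 [ bonds ]
   CA     N
   ...
 [ impropers ]
   ...
 [ dihedrals ]
   ...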

Tsjerk

On Nov 28, 2007 1:34 AM, Justin A. Lemkul [EMAIL PROTECTED] wrote:

 Quoting Tawhid Ezaz [EMAIL PROTECTED]:

  I am sorry, my previous mail bounced back, as it was over 50 KB.
 
  I changed the ffG43a1 to include the GTP and GDP which I got from a
  previous post here:
  http://www.gromacs.org/pipermail/gmx-users/2005-May/015188.html
 
  I changed some of the structure, e.g. turning the comments into ';' comments,
  as the original '#' was giving an error. But now it tells me there is an
  error that O5* is not found in the atom type database.
 
 
 
  Program pdb2gmx, VERSION 3.3.1
  Source code file: resall.c, line: 148
 
  Fatal error:
  Atom type O5* (residue GTP) not found in atomtype database
  ---
 
 
 
  I am including the file here; the GTP residue is at the bottom of the file.
 
 
 https://netfiles.uiuc.edu/tezaz2/shared/ffG43a1.rtp
 
  Is there any other GTP and GDP residue for the force field 43a1?

 Have a look at the other entries in the .rtp file, and notice how the
 formatting of the GTP molecule you entered looks nothing like the others.
 Use the format of the other residues to guide you (paying special attention
 to similar molecules, like ATP, which is already part of ffG43a1). The
 information in the file you found appears to be in a different format,
 although you may still be able to extract information from it.

 Alternatively, you could use the PRODRG beta server to generate a
 suitably formatted topology (which you can include as an .itp file in your
 system topology). However, this topology will require editing, as some of
 the parameters (most notably the charges) will likely be unsatisfactory.

 -Justin

 
  Thanks in advance.
 
  Tawhid
 
 
 
 
 
  - Original Message 
  From: Justin A. Lemkul [EMAIL PROTECTED]
  To: Discussion list for GROMACS users gmx-users@gromacs.org
  Sent: Monday, November 26, 2007 9:14:37 PM
  Subject: Re: [gmx-users] pdb2gmx error
 
  Quoting Tawhid Ezaz [EMAIL PROTECTED]:
 
    Thanks a lot, Justin. It worked really nicely.
  
   now I am stuck with another problem. I need to add the building blocks
   GTP and GDP to 43a1, as I need to get the topology of a tubulin monomer.
   I got the file from an old post here:

   http://www.gromacs.org/pipermail/gmx-users/2005-May/015188.html

   I copied and pasted it into my file, yet, as the structure doesn't match,
   it gave several errors.
  
   is there any way to  fix this?
 
  That depends entirely upon what your errors are and what you're trying
   to do.
  Please provide more details.
 
  -Justin
 
  
   Thanks a lot for your help.
  
   Tawhid
  
  
  
   - Original Message 
   From: Justin A. Lemkul [EMAIL PROTECTED]
   To: Discussion list for GROMACS users gmx-users@gromacs.org
   Sent: Monday, November 26, 2007 7:14:43 PM
   Subject: Re: [gmx-users] pdb2gmx error
  
  
   Quoting Tawhid Ezaz [EMAIL PROTECTED]:
  
Hi,
   
     I have just started to learn gromacs. I am facing a problem making the
     topology file of a myoglobin. I got the pdb file from the CSC tutorial.

     When I run pdb2gmx with

     pdb2gmx -f conf.pdb -p topol.top
   
     I get the message: HEME148 CAB1428   0.569
 HEME148 CAC1437   0.562   0.820
N-terminus: NH3+
C-terminus: COO-
Now there are 148 residues with 1451 atoms
---
Program pdb2gmx, VERSION 3.3.1
Source code file: pdb2top.c, line: 570
   
Fatal error:
atom N not found in residue 1ACE while combining tdb and rtp
---
   
I was using 43a1 force field (0), {but none of them works}.
   
     I read some previous posts and got the idea that there should be an N
     atom linked with ACE, but my conf.pdb file does not have any. I have
     the topol.top file from the tutorial, but it didn't show how I can
     generate that.
  
   Use the -ter option with pdb2gmx, and select 'none' when prompted. That
   way, pdb2gmx does not look for an N-terminal nitrogen (which is obviously
   absent in an acetyl group).
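
   For example (just your command from above with the extra flag; pick
   'None' for the N-terminus when asked):

   pdb2gmx -f conf.pdb -p topol.top -ter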
  
   -Justin
  
   
     I am a total newbie, thus I am wondering what I should do. Which file
     should I correct to get rid of those? Do 

Re: [gmx-users] Barcelona vs Xeon

2007-11-28 Thread Carsten Kutzner
Hi Martin,

we have carried out benchmarks on a Barcelona 2.0 GHz versus a 2.3 GHz
Clovertown. Here are numbers for an 8 atom system with PME
electrostatics and Gromacs 3.3.1.

On the Barcelona we get a performance of 0.38, 0.71, 1.27 and 2.40
ns/day for 1, 2, 4, and 8 CPUs, respectively. On the Clovertown we get
0.52, 0.97, 1.75, and 2.74 ns/day. So indeed the scaling on the AMDs is
much better on 8 CPUs (80% compared to 66%), but the absolute
performance is better on the Intels.
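
For reference, the efficiencies quoted above are just the 8-CPU throughput
divided by 8 times the 1-CPU throughput:

  Barcelona:  2.40 / (8 * 0.38) = 0.79  (about 80%)
  Clovertown: 2.74 / (8 * 0.52) = 0.66  (about 66%)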

For a 2.0 GHz Clovertown the absolute performance on 8 CPUs is probably
nearly the same as on a Barcelona, but the single-CPU performance will
still be better on the Clovertown. One has to keep in mind, however,
that this is just a single benchmark and other MD systems might show a
different scaling behaviour.

Carsten



Martin Höfling wrote:
 Hi all,
 
 Barcelona 1.9GHz vs Xeon 2.0GHz
 
 has anyone here benchmarked these two available 8-core solutions? Intel
 probably wins on speed per core, but what about scaling on 8 cores? Does
 the NUMA/HyperTransport architecture have an advantage here?
 
 In price they are very similar. Xeon is of course available at higher
 frequencies, but the cost per CPU also rises very quickly.
 
 Best
   Martin

-- 
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne


Re: [gmx-users] Barcelona vs Xeon

2007-11-28 Thread Berk Hess

Hi,

For Gromacs 4.0, which will come out in a few months, the situation will be
quite different. The parallelization has been much improved, and automatic
sorting of atoms optimizes the memory access a lot.
Both of these things have a much bigger effect on the Core2 CPUs, which don't
have HyperTransport or a built-in memory controller.
So both the scaling and the absolute performance will go up significantly for
Core2 and only slightly for AMD CPUs.

Berk.




Re: [gmx-users] Gromacs slow and crashes on Leopard.

2007-11-28 Thread David van der Spoel

Hadas Leonov wrote:

When is version 3.3.3 due?


Soon.

But meanwhile I put a beta version at the ftp site.

ftp://ftp.gromacs.org/pub/gromacs/gromacs_3.3.3_pre1.tar.gz

Please use with care, for testing purposes only.

--
David.

David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se



Re: [gmx-users] Gromacs slow and crashes on Leopard.

2007-11-28 Thread Hadas Leonov

On Wed, 2007-11-28 at 09:20 +0100, Carsten Kutzner wrote: 
 Hadas Leonov wrote:
  After installing with openmpi - I ran some benchmarks for 4 processors
  on Mac-Pro:
  d.villin:   
  Leopard performance:13714 ps/day
  old OS performance:41143 ps/day.
  gmx-benchmark :  48000 ps/day.
  
  d.poly-ch2
  Leopard performance:  8640 ps/day
  old OS performance:18000 ps/day
  gmx-benchmark:20571 ps/day
  
  old OS refers to OSX 10.4.9.
  The slow speed also happens when running only on one CPU. d.villin took
  6 times slower than usual. So it can't just be open-mpi fault, can it?
  
  Can it be due to compiling gromacs while disabling ia32 optimization?
 Hi Hadas,
 
 probably you are running the C inner loops now. You will want to not
 only disable the Itanium inner loops, but at the same time enable the
 x86 inner loops.
 

Thanks Carsten, 
I tried recompiling with --enable-x86-64-sse, but it's still as slow as
before. 

Hadas.

 Carsten
 



Re: [gmx-users] Gromacs slow and crashes on Leopard.

2007-11-28 Thread David van der Spoel

Hadas Leonov wrote:
On Wed, 2007-11-28 at 09:20 +0100, Carsten Kutzner wrote: 

Hadas Leonov wrote:

After installing with openmpi - I ran some benchmarks for 4 processors
on Mac-Pro:
d.villin:   
Leopard performance: 	13714 ps/day

old OS performance:41143 ps/day.
gmx-benchmark :  48000 ps/day.

d.poly-ch2
Leopard performance:  8640 ps/day
old OS performance:18000 ps/day
gmx-benchmark:20571 ps/day

old OS refers to OSX 10.4.9.
The slow speed also happens when running only on one CPU. d.villin took
6 times slower than usual. So it can't just be open-mpi fault, can it?

Can it be due to compiling gromacs while disabling ia32 optimization?

Hi Hadas,

probably you are running the C inner loops now. You will want to not
only disable the Itanium inner loops, but at the same time enable the
x86 inner loops.



Thanks Carsten, 
I tried recompiling with --enable-x86-64-sse, but it's still as slow as
before. 

Did you try the beta version I put on the ftp site?


Hadas.


Carsten






--
David.

David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se



[gmx-users] parallel simulation crash on 6 processors

2007-11-28 Thread servaas michielssens
I tried to run a gromacs simulation (gromacs 3.3.1, MD, 18000 atoms) on 2 
systems:

Intel(R) Pentium(R) CPU 2.40GHz with 100Mbit network
and
AMD Opteron(tm) Processor 250 with 1Gbit network
On both systems I had a crash when I tried to run with more than 5 processors.
With 1-5 processors there was no problem.


kind regards,

servaas michielssens

Re: [gmx-users] Gromacs slow and crashes on Leopard.

2007-11-28 Thread Carsten Kutzner
Hadas Leonov wrote:
 Great, thanks! 
 
 It does solve the ia32 compilation problems, but not the lam-mpi
 compilation problems - I still get undefined symbols for a few lam
 variables. 
 
 So still using open-mpi, Gromacs works a little better now; for the d.villin
 benchmark, the performance is:
 1 CPU: 13214 ps/day (compared to 3592 ps/day before, and 18000ps/day on
 OSX 10.4)
 4 CPUs: 38000ps/day (compared to 13714 ps/day before,  41143ps/day on
 OSX 10.4, and 48000ps/day in the gmx published benchmark). 
The performance differences might also have to do with LAM's memory
management. For some benchmark systems I have seen a 10% performance
improvement on a *single* CPU when Gromacs was compiled with LAM-MPI support
compared to the non-MPI version. This was on Linux, however.

Carsten


Re: [gmx-users] parallel simulation crash on 6 processors

2007-11-28 Thread David van der Spoel

servaas michielssens wrote:
I tried to run a gromacs simulation (gromacs 3.3.1, MD, 18000 atoms) on 
2 systems:
 
Intel(R) Pentium(R) CPU 2.40GHz with 100Mbit network

and
AMD Opteron(tm) Processor 250 with 1Gbit network
On both systems I had a crash when I tried to run with more than 5
processors. With 1-5 processors there was no problem.
 

More details, please.
 
kind regards,
 
servaas michielssens








--
David.

David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se



Re: [gmx-users] gromacs installation in IBM BLUEGENE

2007-11-28 Thread Fiona Reid

Dear Anupam,

I also got a similar error to yours when trying to install GROMACS on Blue
Gene. I posted a message to the list on Monday but have since found a
solution.


`/home/phd/04/mbuanjha/soft/gromacs-3.3.2/src/gmxlib/nonbonded/nb_kernel'
rm -f kernel-stamp
./mknb   -software_invsqrt
 Gromacs nonbonded kernel generator (-h for help)
 Generating single precision functions in C.
 Using Gromacs software version of 1/sqrt(x).
Error: Cannot open nb_kernel010_c.c for writing.


Essentially the problem is that the executable ./mknb is compiled for the 
compute nodes on Blue Gene and when the make process attempts to run this 
executable on the front-end (login nodes running Linux) it fails.


I've managed to solve this problem by running ./mknb with the appropriate
options (./mknb -software_invsqrt for your example) on the back end, on a
single processor. This generates the required nb_kernel*_c.c files. Once the
kernel files have been created you can continue the make process. At the
final link stage you might encounter a number of undefined symbols, e.g.
_nss_files_getaliasent_r and others beginning with _nss_*.

Adding -lnss_files -lnss_dns -lresolv -lc -lnss_files -lnss_dns -lresolv
to LDFLAGS should solve this problem.
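
A rough sketch of the whole sequence (the job launcher is site-specific, so
the mpirun line below is only an assumption; use whatever your Blue Gene
front end provides for running a single compute-node process):

  cd src/gmxlib/nonbonded/nb_kernel
  # run the cross-compiled generator on one compute node
  mpirun -np 1 ./mknb -software_invsqrt
  cd ../../../..
  # resume the build where it stopped
  make mdrun
  # if the final link still fails with undefined _nss_* symbols, append the
  # libraries above to LDFLAGS and link again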

Hope this helps.

Fiona


[gmx-users] opls bonded parameters

2007-11-28 Thread singh
Dear gmx users,

 

I am trying to manually make a topology file for a peptide with a few
substitutions. I had trouble understanding how grompp assigns dihedral
parameters in OPLS. As far as I understand, ffoplsaabon.itp contains all the
dihedral parameters, where the bond_types from ffoplsaanb.itp are used to
define the dihedral angles.

If I take an entry from ffoplsaa.rtp, for example glycine:

[ GLY ]
 [ atoms ]
    N      opls_238   -0.500   1
    H      opls_241    0.300   1
    CA     opls_223B   0.080   1
    HA1    opls_140    0.060   1
    HA2    opls_140    0.060   1
    C      opls_235    0.500   2
    O      opls_236   -0.500   2

If I then search for a dihedral H-CA-C-O in ffoplsaabon.itp, I don't find any
entries. Probably I have to look up the bond_type corresponding to each
opls_??? entry and then search for that in ffoplsaabon.itp. It would mean that
the first-column entries in the .rtp file are only used to map PDB atom names
to opls atom types and are of no further use. Please let me know if I am wrong.
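
To make the question concrete, this is the kind of lookup I mean (the entries
below are only a sketch, not copied verbatim from the force-field files):

; ffoplsaanb.itp: the second column gives the bond_type for each opls code
;   opls_140   HC   ...
;   opls_223B  CT   ...
;   opls_235   C    ...
;   opls_236   O    ...
;
; ffoplsaabon.itp, [ dihedraltypes ]: entries are matched on bond_types,
; so the dihedral H-CA-C-O would have to be looked up as something like
;   HC  CT  C  O   ...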

 

With Regards,

Gurpreet Singh  

 

-

University of Dortmund
Department of Chemistry
Physical Chemistry I  -  Biophysical Chemistry
Otto-Hahn Str. 6
D-44227 Dortmund
Germany
Office:   C1-06 room 176
Phone:  +49 231 755 3916

Fax: +49 231 755 3901

-

 


[gmx-users] Compile Gromacs-3.3.2 on HP xxxw9300 Workstation: AMD64 Opteron with Linux

2007-11-28 Thread Tandia, Adama
Dear ALL;

I'm trying to upgrade to GMX-3.3.2 with icc, ifort and mpich2. I used the 
following command with configure:
./configure --prefix=$HOME/GMX332 --enable-mpi --enable-double --enable-fortran 
F77=ifort CC=icc

The compilation stops after a couple of minutes and I get the following error
message:
nb_kernel_x86_64_sse2.c:98: error: `eNR_NBKERNEL_NR' undeclared here (not in a 
function)

Does anyone have an idea of what is going on? I searched the user list but did
not find anything like this.

Thanks,

Adama

++
Adama Tandia
Modeling & Simulation
Corning INC
Corning, NY 14831 USA
Tel: 607 248 1036
Fax: 607 974 3405
www.corning.com



Re: [gmx-users] Compile Gromacs-3.3.2 on HP xxxw9300 Workstation: AMD64 Opteron with Linux

2007-11-28 Thread David van der Spoel

Tandia, Adama wrote:

Dear ALL;

I'm trying to upgrade to GMX-3.3.2 with icc, ifort and mpich2. I used the 
following command with configure:
./configure --prefix=$HOME/GMX332 --enable-mpi --enable-double --enable-fortran 
F77=ifort CC=icc

The compilation stops after a couple of minutes and I get the following error
message:
nb_kernel_x86_64_sse2.c:98: error: `eNR_NBKERNEL_NR' undeclared here (not in a 
function)

Does anyone have an idea of what is going on? I searched the user list but did
not find anything like this.

You can make your life easier by using gcc instead. Fortran doesn't help on
x86 or x86_64 chips.
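
For example (only a sketch, reusing the options from your original command,
dropping --enable-fortran, and assuming gcc, or an MPICH2 mpicc wrapper built
on gcc, as the C compiler):

  ./configure --prefix=$HOME/GMX332 --enable-mpi --enable-double CC=gcc
  make && make install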



Thanks,

Adama

++
Adama Tandia
Modeling & Simulation
Corning INC
Corning, NY 14831 USA
Tel: 607 248 1036
Fax: 607 974 3405
www.corning.com




--
David.

David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  75124 Uppsala, Sweden
phone:  46 18 471 4205  fax: 46 18 511 755
[EMAIL PROTECTED]   [EMAIL PROTECTED]   http://folding.bmc.uu.se



[gmx-users] Setting up Carbon Nanotube Simulations

2007-11-28 Thread Joshua D. Moore
Dear Bob Johnson and other users,

I am attempting to set up some infinite carbon nanotube simulations.  To
begin, I want to take a single nanotube with some argon inside and try to
simulate it with GROMACS with PBC in the axial direction (it appears I need
to use pbc=full).

So I have a lot of experience with NAMD, but really nothing with GROMACS.

So far I can use x2top to generate a topology file from a pdb.

It seems to work ok, as I can make it output all of the dihedrals, and it
seems to match up with what I would get with a psf for NAMD.

I am a bit confused about how to include my force field in the simulation and
how to generate the proper *gro file. The x2top-generated *top file seems not
to have the proper mass for argon and also doesn't include the parameters for
the force field. I attempted to generate my own force field files in the top
directory (ffCNT.itp, ffCNTnb.itp, etc.). It seems from reading the mailing
list that I don't need to do this and can just use my pdb file and the
topology file generated from x2top to start the simulation.

How, then, would I include my CNT force field (a Morse bond, cosine angle,
and RB dihedral)? I have created these files already, as I stated above.
Could I just include this as a separate *itp file, as the methanol example
shows? Does this file then override the *top file? So could I include it as
#include CNT.itp or something in the *top file? This is my main question. I
am also a bit confused about the format of this, as I don't really have a
molecule as for methanol, just atoms for the carbon and argon atoms.
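
Something like the following layout is what I have in mind (only a sketch;
the file and molecule names such as ffCNT.itp and cnt.itp are my own
placeholders):

; topol.top (sketch)
#include "ffCNT.itp"     ; my nonbonded/bonded parameter definitions
#include "cnt.itp"       ; [ moleculetype ] for the nanotube (from x2top)
#include "argon.itp"     ; [ moleculetype ] for the argon atoms

[ system ]
Argon inside an infinite carbon nanotube

[ molecules ]
CNT     1
AR      100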

Second question (and very naive): can you start an MD simulation with GROMACS
without a *gro file, using just a *pdb?

I realize that my message here is a bit naive, so I'm hoping someone will be
nice enough to overlook my lack of general knowledge about GROMACS :)

Thanks

Joshua Moore

PS I address Bob Johnson as he seems to be the expert on nanotubes in
GROMACS :)

-- 

Joshua D. Moore
Graduate Student
North Carolina State University
Dept. of Chemical and Biomolecular Engineering
Box 7905 Centennial Campus
Engineering Building I
911 Partners Way
Raleigh, NC  27695-7905
Phone: (919) 513-2051
Fax:   (919) 513-2470
Email:  [EMAIL PROTECTED]
