Re: [gmx-users] segmentation fault from power6 kernel

2011-11-03 Thread Fabio AFFINITO
Hi Mark,
today I tested the 4.5.5 version. It seems that the problem does not occur there.
Anyway, I will take a few days to run more tests.

Thank you,

Fabio

- Original Message -
From: "Mark Abraham" 
To: "Discussion list for GROMACS users" 
Sent: Thursday, 3 November 2011 14:40:39
Subject: Re: [gmx-users] segmentation fault from power6 kernel


On 3/11/2011 7:59 PM, Fabio Affinito wrote:

Thank you, Mark.
Using GMX_NOOPTIMIZEDKERNELS=1 everything runs fine on power6.
I also tried to run on a linux cluster and it went ok.

Sounds like a bug. Please file a report here http://redmine.gromacs.org 
including your observations and a .tpr that will reproduce them.

Thanks,

Mark





Fabio




The most likely issue is some normal "blowing up" scenario leading to a
table-lookup-overrun segfault in the 3xx series kernels. I don't know
why the usual error messages in such scenarios did not arise on this
platform. Try setting the environment variable GMX_NOOPTIMIZEDKERNELS to
1 to see if this is a power6-specific kernel issue. Try running the .tpr
on another platform.
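
For example, a minimal sketch (the .tpr name and the binary names are just
placeholders for whatever you actually run):

# fall back to the generic C kernels for this run only
export GMX_NOOPTIMIZEDKERNELS=1
mdrun_d -s topol.tpr -deffnm test_generic

# separately, on another machine, rerun the same .tpr with its usual kernels
mdrun -s topol.tpr -deffnm test_other_platform

If the generic kernels run cleanly on power6 and the same .tpr is also fine
elsewhere, that points at the power6-specific kernels rather than the system.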

Mark





Re: [gmx-users] segmentation fault from power6 kernel

2011-11-03 Thread Fabio Affinito

Thank you, Mark.
Using GMX_NOOPTIMIZEDKERNELS=1 everything runs fine on power6.
I also tried to run on a linux cluster and it went ok.


Fabio



The most likely issue is some normal "blowing up" scenario leading to a
table-lookup-overrun segfault in the 3xx series kernels. I don't know
why the usual error messages in such scenarios did not arise on this
platform. Try setting the environment variable GMX_NOOPTIMIZEDKERNELS to
1 to see if this is a power6-specific kernel issue. Try running the .tpr
on another platform.

Mark


--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


[gmx-users] segmentation fault from power6 kernel

2011-11-02 Thread Fabio AFFINITO
Dear all,
I've been trying to run a simulation on an IBM Power6 cluster. At the
beginning of the simulation I got a segmentation fault. I investigated with
TotalView and found that the segmentation violation originates in
pwr6kernel310.F.
So far I have not found what is behind this segmentation violation. I would like
to ask whether anybody is aware of a bug in this function.
The simulation uses Gromacs 4.5.3 compiled in double precision.
The options that I specified in the configure are:
--disable-threads --enable-power6 --enable-mpi

The log file doesn't provide much information:

Log file opened on Wed Nov  2 20:11:02 2011
Host: sp0202  pid: 11796682  nodeid: 0  nnodes:  1
The Gromacs distribution was built Thu Dec 16 14:44:40 GMT+01:00 2010 by
propro01@sp0201 (AIX 1 00C3E6444C00)


 :-)  G  R  O  M  A  C  S  (-:

   Gromacs Runs One Microsecond At Cannonball Speeds

:-)  VERSION 4.5.3  (-:

Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
  Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra,
Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2010, The GROMACS development team at
Uppsala University & The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

  :-)  mdrun_d (double precision)  (-:


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
  --- Thank You ---  


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
  --- Thank You ---  

Input Parameters:
   integrator   = md
   nsteps   = 250
   init_step= 0
   ns_type  = Grid
   nstlist  = 10
   ndelta   = 2
   nstcomm  = 10
   comm_mode= Linear
   nstlog   = 2500
   nstxout  = 2500
   nstvout  = 2500
   nstfout  = 0
   nstcalcenergy= 10
   nstenergy= 2500
   nstxtcout= 2500
   init_t   = 0
   delta_t  = 0.002
   xtcprec  = 1000
   nkx  = 50
   nky  = 50
   nkz  = 50
   pme_order= 4
   ewald_rtol   = 1e-05
   ewald_geometry   = 0
   epsilon_surface  = 0
   optimize_fft = TRUE
   ePBC = xyz
   bPeriodicMols= FALSE
   bContinuation= FALSE
   bShakeSOR= FALSE
   etc  = Nose-Hoover
   nsttcouple   = 10
   epc  = No
   epctype  = Isotropic
   nstpcouple   = -1
   tau_p= 1
   ref_p (3x3):
  ref_p[0]={ 0.0e+00,  0.0e+00,  0.0e+00}
  ref_p[1]={ 0.0e+00,  0.0e+00,  0.0e+00}
  ref_p[2]={ 0.0e+00,  0.0e+00,  0.0e+00}
   compress (3x3):
  compress[0]={ 0.0e+00,  0.0e+00,  0.0e+00}
  compress[1]={ 0.0e+00,  0.0e+00,  0.0e+00}
  compress[2]={ 0.0e+00,  0.0e+00,  0.0e+00}
   refcoord_scaling = No
   posres_com (3):
  posres_com[0]= 0.0e+00
  posres_com[1]= 0.0e+00
  posres_com[2]= 0.0e+00
   posres_comB (3):
  posres_comB[0]= 0.0e+00
  posres_comB[1]= 0.0e+00
  posres_comB[2]= 0.0e+00
   andersen_seed= 815131
   rlist   

Re: [gmx-users] genconf and bonded interactions

2011-07-27 Thread Fabio Affinito

Ok, the problem is solved.
Thank you.

Fabio
On 07/27/2011 05:17 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

Thanks.
This actually solved the problem in grompp.
I still have problems when running. This is the log of mdrun:

Initializing Domain Decomposition on 4096 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 0.583 nm, LJ-14, atoms 28051 28056
multi-body bonded interactions: 0.583 nm, Proper Dih., atoms 28051 28056
Minimum cell size due to bonded interactions: 0.642 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans:
1.139 nm
Estimated maximum distance required for P-LINCS: 1.139 nm
This distance will limit the DD cell size, you can override this with
-rcon
Guess for relative PME load: 0.43
Will use 2400 particle-particle and 1696 PME only nodes
This is a guess, check the performance at the end of the log file
Using 1696 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2400 cells with a minimum initial size of
1.424 nm
The maximum allowed number of cells is: X 18 Y 18 Z 6

---
Program mdrun_mpi_bg, VERSION 4.5.4
Source code file: domdec.c, line: 6438

Fatal error:
There is no domain decomposition for 2400 nodes that is compatible
with the given box and a minimum cell size of 1.4242 nm
Change the number of nodes or mdrun option -rcon or -dds or your
LINCS settings
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the
GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Meet Me At the Coffee Shop" (Red Hot Chili Peppers)


I imagine that now I have to tune the -dds and -rcon parameters. Am I right?



I've never touched those parameters and I do not know how they will
affect performance or the stability of your systems. The easier solution
is to simply reduce the number of processors so that the cell sizes
increase a bit.

http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
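
As a rough sketch of both options (node counts, the binary name and the -rcon
value are placeholders, not recommendations):

# option 1: fewer nodes, so each DD cell is larger than the 1.424 nm minimum
mpirun -np 2048 mdrun_mpi_bg -s topol.tpr

# option 2: keep 4096 nodes but override the conservative P-LINCS estimate
mpirun -np 4096 mdrun_mpi_bg -s topol.tpr -rcon 0.7

The second form only makes sense if you are confident that the 1.139 nm
constraint estimate is overly pessimistic for your system.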


-Justin




--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] genconf and bonded interactions

2011-07-27 Thread Fabio Affinito

Thanks.
This actually solved the problem in grompp.
I still have problems when running. This is the log of mdrun:

Initializing Domain Decomposition on 4096 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 0.583 nm, LJ-14, atoms 28051 28056
  multi-body bonded interactions: 0.583 nm, Proper Dih., atoms 28051 28056
Minimum cell size due to bonded interactions: 0.642 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 1.139 nm
Estimated maximum distance required for P-LINCS: 1.139 nm
This distance will limit the DD cell size, you can override this with -rcon
Guess for relative PME load: 0.43
Will use 2400 particle-particle and 1696 PME only nodes
This is a guess, check the performance at the end of the log file
Using 1696 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2400 cells with a minimum initial size of 1.424 nm
The maximum allowed number of cells is: X 18 Y 18 Z 6

---
Program mdrun_mpi_bg, VERSION 4.5.4
Source code file: domdec.c, line: 6438

Fatal error:
There is no domain decomposition for 2400 nodes that is compatible with the 
given box and a minimum cell size of 1.4242 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS settings
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

"Meet Me At the Coffee Shop" (Red Hot Chili Peppers)


I imagine that now I have to tune the -dds and -rcon parameters. Am I right?

Fabio

On 07/27/2011 04:48 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

This is the key part:


processing coordinates...
Warning: atom name 18113 in topol.top and out.gro does not match (MN1
- CN1)
Warning: atom name 18114 in topol.top and out.gro does not match (MN2
- CN2)
Warning: atom name 18115 in topol.top and out.gro does not match (N -
CN3)
Warning: atom name 18116 in topol.top and out.gro does not match (H1
- N)
Warning: atom name 18117 in topol.top and out.gro does not match (H2
- CA)
Warning: atom name 18118 in topol.top and out.gro does not match (H3
- CB)
Warning: atom name 18119 in topol.top and out.gro does not match (CA
- OA)
Warning: atom name 18120 in topol.top and out.gro does not match (HA
- P)
Warning: atom name 18121 in topol.top and out.gro does not match
(MCB1 - OB)
Warning: atom name 18122 in topol.top and out.gro does not match
(MCB2 - OC)
Warning: atom name 18123 in topol.top and out.gro does not match (CB
- OD)
Warning: atom name 18124 in topol.top and out.gro does not match (HB1
- CC)
Warning: atom name 18125 in topol.top and out.gro does not match (HB2
- CD)
Warning: atom name 18126 in topol.top and out.gro does not match (HB3
- OE)
Warning: atom name 18127 in topol.top and out.gro does not match (C -
C1A)
Warning: atom name 18128 in topol.top and out.gro does not match (O -
O1A)
Warning: atom name 18129 in topol.top and out.gro does not match (N -
C1B)
Warning: atom name 18130 in topol.top and out.gro does not match (H -
C1C)
Warning: atom name 18131 in topol.top and out.gro does not match (CA
- C1D)
Warning: atom name 18132 in topol.top and out.gro does not match (HA
- C1E)
(more than 20 non-matching atom names)

WARNING 1 [file topol.top, line 50]:
219116 non-matching atom names
atom names from topol.top will be used
atom names from out.gro will be ignored



This is the root of the problem. Your topology does not match your
coordinates in terms of the order of the molecules. grompp (and, in turn,
mdrun) thinks that protein is lipid, lipids are water, etc., so
you're effectively telling it that things are bonded within a molecule
when they're not. Even if the simulation initially ran it would blow up
immediately because you're mapping the wrong coordinates onto the wrong
molecules.

The coordinate file from genconf is an exact replica of your system, but
the molecules are not re-ordered in any convenient way. So, for your
system, instead of:

[ molecules ]
Protein_A 4N
Protein_B 4N
Protein_C 4N
Protein_D 4N
POPC 4N
POPG 4N
SOL 4N
K+ 4N

you need this in your topology:

[ molecules ]
Protein_A N
Protein_B N
Protein_C N
Protein_D N
POPC N
POPG N
SOL N
K+ N
(repeated three more times)

Otherwise, reorganize the .gro file. Topology manipulation is probably
easier, though.
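
One way to sanity-check the ordering, as a sketch (file names taken from the
commands earlier in this thread):

# write out the fully processed topology next to the run input
grompp -f grompp.mdp -c out.gro -p topol.top -pp processed.top -o topol.tpr

# then inspect the [ molecules ] section of processed.top and confirm it walks
# through out.gro in the same sequence in which genconf wrote the replicas

If the order disagrees, grompp will keep producing the non-matching atom-name
warnings shown above.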

-Justin




--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198

Re: [gmx-users] genconf and bonded interactions

2011-07-27 Thread Fabio Affinito

On 07/27/2011 04:12 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/27/2011 03:54 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

This is the mdrun output:

Initializing Domain Decomposition on 4096 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 30.031 nm, LJ-14, atoms 40702 40705
multi-body bonded interactions: 30.031 nm, Angle, atoms 40701 40704
Minimum cell size due to bonded interactions: 33.035 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans:
1.139 nm
Estimated maximum distance required for P-LINCS: 1.139 nm
Guess for relative PME load: 0.43
Will use 2400 particle-particle and 1696 PME only nodes
This is a guess, check the performance at the end of the log file
Using 1696 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2400 cells with a minimum initial size of
41.293 nm
The maximum allowed number of cells is: X 0 Y 0 Z 0

---
Program mdrun_mpi_bg, VERSION 4.5.4
Source code file: domdec.c, line: 6438

Fatal error:
There is no domain decomposition for 2400 nodes that is compatible
with the given box and a minimum cell size of 41.2932 nm
Change the number of nodes or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the
GROMACS
website at http://www.gromacs.org/Documentation/Errors
---



This is bizarre. Did you fix the periodicity of the original coordinate
file (conf_start.gro) and rebuild the system, or did you try to run
trjconv on the replicated system? The former is the correct approach.

-Justin


This is what I did (in chronological order):

1) trjconv -f conf_start.gro -o conf_whole.gro -pbc whole


Presumably in conjunction with a .tpr file, in order for this to work?


OK, I added it.




2) genconf -f conf_whole.gro -o out.gro -nbox 2 2 1
3) I modified the topol.top (modifying the number of molecules)
4) grompp -f grompp.mdp -c out.gro -p topol.top -maxwarn 3


What are the warnings you are trying to circumvent? Please provide the
full grompp output. I can think of a few other possibilities for the
source of your problem, but I do not want to venture idle guesswork
without seeing all of the warnings you've got.


This is the full grompp output:


 :-)  G  R  O  M  A  C  S  (-:

   Gromacs Runs One Microsecond At Cannonball Speeds

:-)  VERSION 4.0.7  (-:


  Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License
 as published by the Free Software Foundation; either version 2
 of the License, or (at your option) any later version.

:-)  grompp  (-:

Option     Filename  Type         Description
------------------------------------------------------------
  -f     grompp.mdp  Input, Opt!  grompp input file with MD parameters
 -po      mdout.mdp  Output       grompp input file with MD parameters
  -c        out.gro  Input        Structure file: gro g96 pdb tpr tpb tpa
  -r       conf.gro  Input, Opt.  Structure file: gro g96 pdb tpr tpb tpa
 -rb       conf.gro  Input, Opt.  Structure file: gro g96 pdb tpr tpb tpa
  -n      index.ndx  Input, Opt.  Index file
  -p      topol.top  Input        Topology file
 -pp  processed.top  Output, Opt. Topology file
  -o      topol.tpr  Output       Run input file: tpr tpb tpa
  -t       traj.trr  Input, Opt.  Full precision trajectory: trr trj cpt
  -e       ener.edr  Input, Opt.  Energy file: edr ene

Option       Type   Value   Description
------------------------------------------------------
-[no]h       bool   no      Print help info and quit
-nice        int    0       Set the nicelevel
-[no]v       bool   yes     Be loud and noisy
-time        real   -1      Take frame at or first after this time.
-[no]rmvsbds bool   yes     Remove constant bonded interactions with virtual
                            sites
-maxwarn     int    3       Number of allowed warnings during input processing
-[no]zero    bool   no      Set parameters for bonded interactions without
                            defaults to zero instead of generating an error
-[no]renum   bool   yes     Renumber atomtypes and minimize number of
                            atomtypes

Ignoring obsolete mdp entry 'dihre-tau'

Back Off! I just backed up mdout.mdp to ./#mdout.mdp.1#

Re: [gmx-users] genconf and bonded interactions

2011-07-27 Thread Fabio Affinito

On 07/27/2011 03:54 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

This is the mdrun output:

Initializing Domain Decomposition on 4096 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 30.031 nm, LJ-14, atoms 40702 40705
multi-body bonded interactions: 30.031 nm, Angle, atoms 40701 40704
Minimum cell size due to bonded interactions: 33.035 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans:
1.139 nm
Estimated maximum distance required for P-LINCS: 1.139 nm
Guess for relative PME load: 0.43
Will use 2400 particle-particle and 1696 PME only nodes
This is a guess, check the performance at the end of the log file
Using 1696 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2400 cells with a minimum initial size of
41.293 nm
The maximum allowed number of cells is: X 0 Y 0 Z 0

---
Program mdrun_mpi_bg, VERSION 4.5.4
Source code file: domdec.c, line: 6438

Fatal error:
There is no domain decomposition for 2400 nodes that is compatible
with the given box and a minimum cell size of 41.2932 nm
Change the number of nodes or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the
GROMACS
website at http://www.gromacs.org/Documentation/Errors
---



This is bizarre. Did you fix the periodicity of the original coordinate
file (conf_start.gro) and rebuild the system, or did you try to run
trjconv on the replicated system? The former is the correct approach.

-Justin


This is what I did (in chronological order):

1) trjconv -f conf_start.gro -o conf_whole.gro -pbc whole
2) genconf -f conf_whole.gro -o out.gro -nbox 2 2 1
3) I modified the topol.top (modifying the number of molecules)
4) grompp -f grompp.mdp -c out.gro -p topol.top  -maxwarn 3
5) launched mdrun
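
Spelled out, with the .tpr that trjconv needs for -pbc whole (start.tpr is just
a placeholder for whichever run input matches conf_start.gro):

trjconv -s start.tpr -f conf_start.gro -o conf_whole.gro -pbc whole
genconf -f conf_whole.gro -o out.gro -nbox 2 2 1
# edit [ molecules ] in topol.top to follow the replicated order, then:
grompp -f grompp.mdp -c out.gro -p topol.top -o topol.tpr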

Fabio
--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] genconf and bonded interactions

2011-07-27 Thread Fabio Affinito

On 07/27/2011 03:38 PM, Justin A. Lemkul wrote:


Please make sure to keep the discussion on the list.


Sorry, I was just continuing the discussion after sending you my
coordinates.




Fabio Affinito wrote:

Justin,
first of all, thank you very much for your help: it is much appreciated.
I regret I didn't notice earlier that the molecule was broken.


I suggested to you yesterday that your initial configuration was broken.
You told me it wasn't.


I checked with VMD but I was in error. I acknowledged my error, I think. 
Didn't I?





Unfortunately the problem is not solved with your recipe. I used
trjconv on the starting configuration and then used genconf.
mdrun fails with the same error. In this case, the atoms involved
in the error are not the same as before, but once again the
bond length is completely odd.
During grompp I had some warnings, but I can't understand where
they are coming from:



Certain bonded interactions can't take place between massless particles.
As the warning states, if pdb2gmx produced the topology, don't worry
about it.


Cleaning up constraints and constant bonded interactions with virtual
sites
Removed 1305 Angles with virtual sites, 6251 left
Removed 480 Proper Dih.s with virtual sites, 363 left
Converted 2230 Constraints with virtual sites to connections, 2596 left
Warning: removed 582 Constraints with vsite with Virtual site 3out
construction
This vsite construction does not guarantee constant bond-length
If the constructions were generated by pdb2gmx ignore this warning
Cleaning up constraints and constant bonded interactions with virtual
sites
Removed 1305 Angles with virtual sites, 6251 left
Removed 480 Proper Dih.s with virtual sites, 363 left
Converted 2230 Constraints with virtual sites to connections, 2596 left
Warning: removed 582 Constraints with vsite with Virtual site 3out
construction
This vsite construction does not guarantee constant bond-length
If the constructions were generated by pdb2gmx ignore this warning
Cleaning up constraints and constant bonded interactions with virtual
sites
Removed 1305 Angles with virtual sites, 6251 left
Removed 480 Proper Dih.s with virtual sites, 363 left
Converted 2230 Constraints with virtual sites to connections, 2596 left
Warning: removed 582 Constraints with vsite with Virtual site 3out
construction
This vsite construction does not guarantee constant bond-length
If the constructions were generated by pdb2gmx ignore this warning
Cleaning up constraints and constant bonded interactions with virtual
sites
Removed 1305 Angles with virtual sites, 6251 left
Removed 480 Proper Dih.s with virtual sites, 363 left
Converted 2230 Constraints with virtual sites to connections, 2596 left
Warning: removed 582 Constraints with vsite with Virtual site 3out
construction
This vsite construction does not guarantee constant bond-length
If the constructions were generated by pdb2gmx ignore this warning


In your experience, could this be related to my problem?



Not likely. The virtual sites are within the protein, not the lipids,
right?

Without the mdrun output, there's little else I can tell you.


This is the mdrun output:

Initializing Domain Decomposition on 4096 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 30.031 nm, LJ-14, atoms 40702 40705
  multi-body bonded interactions: 30.031 nm, Angle, atoms 40701 40704
Minimum cell size due to bonded interactions: 33.035 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 1.139 nm
Estimated maximum distance required for P-LINCS: 1.139 nm
Guess for relative PME load: 0.43
Will use 2400 particle-particle and 1696 PME only nodes
This is a guess, check the performance at the end of the log file
Using 1696 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2400 cells with a minimum initial size of 
41.293 nm

The maximum allowed number of cells is: X 0 Y 0 Z 0

---
Program mdrun_mpi_bg, VERSION 4.5.4
Source code file: domdec.c, line: 6438

Fatal error:
There is no domain decomposition for 2400 nodes that is compatible with 
the given box and a minimum cell size of 41.2932 nm

Change the number of nodes or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Fabio




-Justin


Thank you once again,

Fabio

On 07/27/2011 02:45 PM, Justin A. Lemkul wrote:


The problem is indeed as I suspected. Your molecules are broken across
periodic boundaries in the initial configuration. When you replicate
with genconf, you've now replicated broken molecules 

Re: [gmx-users] genconf and bonded interactions

2011-07-27 Thread Fabio Affinito

On 07/26/2011 07:02 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 05:06 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

Maybe this is a different issue... but is it OK that after the 99,999th
atom the counter restarts from zero?


21374SOL OW9 12.986 9.021 7.036 -0.0037 -0.4345 0.3977
21374SOL HW1 0 13.069 8.987 7.081 0.5916 0.5409 0.0638


Could this be the origin of my problem?



Atom numbering is not the problem. This happens all the time for systems
of hundreds of thousands of atoms, which Gromacs handles just fine.
Please investigate the points I suggested before.


Yes, but this doesn't make things easier! :-)
According to the log the atoms to consider are 159986 and 159990



That's not what you posted before. The .log output indicated atoms
193657 and 193660 were problematic.


Sorry. It's because I've tried with many systems (with different -nbox 
values) and the error was always the same.





Browsing the conf.gro, if I didn't make mistakes, these atoms are:


30040POPG OE59986 0.080 13.158 2.964 0.0885 0.4154 -0.0859



30040POPG C1C59990 0.219 26.034 3.221 0.6449 0.0750 0.1313


But their distance is 12.8 nm, while md.log reports 38.911 nm...



In any case, why are atoms four bonds (based on the original .log output
of 1-4 interactions being a problem) away separated by 12.8 nm? Seems
very odd to me. I ask yet again - what are your box vectors, before and
after manipulation with genconf?



Seems odd to me, too. For the box vectors (sorry if I didn't answer
before), in this case:


after:   26.04658  26.04658   8.75317
before:  13.02329  13.02329   8.75317

Hope it helps.

Best regards,

Fabio



-Justin


So what?

F.






-Justin


Thanks again,

Fabio

On 07/26/2011 04:38 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:30 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:19 PM, Justin A. Lemkul wrote:



Were the molecules whole in the coordinate file you replicated? If
not,
the bonds will now be assigned across the entire box.

-Justin


Yes and no, depending on what you mean by "whole".
It is an ion channel, so it's made of four chains.
Does this clarify things? (I guess not...)


By whole, I mean that the molecules are not split across periodic
boundaries in the initial configuration that you replicated. If you
replicate a periodic break, then you split the molecules by a
distance
equal to the new periodic distance.

-Justin


Ok, so: no, it's not broken.



What you need to do is use the information mdrun provided you to
diagnose what's going on. Apparently atoms 193657 193660 are separated
by 31 nm. What are your box vectors? Where are these atoms in the
system? Then you'll have your answer. The only reason I can think
of for
such extreme distances is a periodicity issue.

-Justin














--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] genconf and bonded interactions

2011-07-26 Thread Fabio Affinito

On 07/26/2011 05:06 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

Maybe this is a different issue... but is it OK that after the 99,999th
atom the counter restarts from zero?


21374SOL OW9 12.986 9.021 7.036 -0.0037 -0.4345 0.3977
21374SOL HW1 0 13.069 8.987 7.081 0.5916 0.5409 0.0638


Could this be the origin of my problem?



Atom numbering is not the problem. This happens all the time for systems
of hundreds of thousands of atoms, which Gromacs handles just fine.
Please investigate the points I suggested before.


Yes, but this doesn't make things easier! :-)
According to the log the atoms to consider are 159986 and 159990

Browsing the conf.gro, if I didn't make mistakes, these atoms are:


 30040POPGOE59986   0.080  13.158   2.964  0.0885  0.4154 -0.0859



 30040POPG   C1C59990   0.219  26.034   3.221  0.6449  0.0750  0.1313


But their distance is 12.8 nm, while md.log reports 38.911 nm...

So what?

F.






-Justin


Thanks again,

Fabio

On 07/26/2011 04:38 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:30 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:19 PM, Justin A. Lemkul wrote:



Were the molecules whole in the coordinate file you replicated? If
not,
the bonds will now be assigned across the entire box.

-Justin


Yes and no, depending on what you mean by "whole".
It is an ion channel, so it's made of four chains.
Does this clarify things? (I guess not...)


By whole, I mean that the molecules are not split across periodic
boundaries in the initial configuration that you replicated. If you
replicate a periodic break, then you split the molecules by a distance
equal to the new periodic distance.

-Justin


Ok, so: no, it's not broken.



What you need to do is use the information mdrun provided you to
diagnose what's going on. Apparently atoms 193657 193660 are separated
by 31 nm. What are your box vectors? Where are these atoms in the
system? Then you'll have your answer. The only reason I can think of for
such extreme distances is a periodicity issue.

-Justin









--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] genconf and bonded interactions

2011-07-26 Thread Fabio Affinito
Maybe this is a different issue... but is it OK that after the 99,999th
atom the counter restarts from zero?



 21374SOL OW9  12.986   9.021   7.036 -0.0037 -0.4345  0.3977
 21374SOLHW10  13.069   8.987   7.081  0.5916  0.5409  0.0638


Could this be the origin of my problem?

Thanks again,

Fabio

On 07/26/2011 04:38 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:30 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:19 PM, Justin A. Lemkul wrote:



Were the molecules whole in the coordinate file you replicated? If
not,
the bonds will now be assigned across the entire box.

-Justin


Yes and no, depending on what you mean by "whole".
It is an ion channel, so it's made of four chains.
Does this clarify things? (I guess not...)


By whole, I mean that the molecules are not split across periodic
boundaries in the initial configuration that you replicated. If you
replicate a periodic break, then you split the molecules by a distance
equal to the new periodic distance.

-Justin


Ok, so: no, it's not broken.



What you need to do is use the information mdrun provided you to
diagnose what's going on. Apparently atoms 193657 193660 are separated
by 31 nm. What are your box vectors? Where are these atoms in the
system? Then you'll have your answer. The only reason I can think of for
such extreme distances is a periodicity issue.

-Justin




--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] genconf and bonded interactions

2011-07-26 Thread Fabio Affinito

On 07/26/2011 04:30 PM, Justin A. Lemkul wrote:



Fabio Affinito wrote:

On 07/26/2011 04:19 PM, Justin A. Lemkul wrote:



Were the molecules whole in the coordinate file you replicated? If not,
the bonds will now be assigned across the entire box.

-Justin


Yes and no, depending on what you mean by "whole".
It is an ion channel, so it's made of four chains.
Does this clarify things? (I guess not...)


By whole, I mean that the molecules are not split across periodic
boundaries in the initial configuration that you replicated. If you
replicate a periodic break, then you split the molecules by a distance
equal to the new periodic distance.

-Justin


Ok, so: no, it's not broken.


--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] genconf and bonded interactions

2011-07-26 Thread Fabio Affinito

On 07/26/2011 04:19 PM, Justin A. Lemkul wrote:



Were the molecules whole in the coordinate file you replicated? If not,
the bonds will now be assigned across the entire box.

-Justin


Yes and no, depending on what you mean by "whole".
It is an ion channel, so it's made of four chains.
Does this clarify things? (I guess not...)

F

--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


[gmx-users] genconf and bonded interactions

2011-07-26 Thread Fabio Affinito

Hi all,
I used genconf because I wanted to replicate a membrane with an ion channel
in the xy plane:

genconf -f conf.gro -o out.gro -nbox 2 2 1

Then I edited the .top file by hand, modifying the number of
molecules in the system.


When attempting to run, regardless of the number of processors, mdrun
crashes because the domain decomposition fails.

Looking carefully at the log, I find this:

Initializing Domain Decomposition on 4096 nodes
  Dynamic load balancing: auto
  Will sort the charge groups at every domain (re)decomposition
  Initial maximum inter charge-group distances:
two-body bonded interactions: 30.871 nm, LJ-14, atoms 193657 193660
multi-body bonded interactions: 30.871 nm, Angle, atoms 193656 193659
Minimum cell size due to bonded interactions: 33.959 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 1.139 nm
Estimated maximum distance required for P-LINCS: 1.139 nm
Guess for relative PME load: 0.44
Will use 2304 particle-particle and 1792 PME only nodes
This is a guess, check the performance at the end of the log file
Using 1792 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 2304 cells with a minimum initial size of 42.448 
nm
The maximum allowed number of cells is: X 1 Y 1 Z 0


Now, I'm wondering why I have such a large bonded interaction length... 31 nm!
I guess that the problem in the DD arises from this.

Can you give me some suggestions?

Thanks in advance,

Fabio

--
Fabio Affinito, PhD
SuperComputing Applications and Innovation Department
CINECA - via Magnanelli, 6/3, 40033 Casalecchio di Reno (Bologna) - ITALY
Tel: +39 051 6171794  Fax: +39 051 6132198


Re: [gmx-users] scaling of replica exchange

2011-02-23 Thread Fabio Affinito
Hi Mark,
I've checked with Valeria and the problem is actually in the setup of
the system (poor overlap of the temperature distributions). So I
think that the loss of efficiency should be a factor of 2 or 3
in her case, not more.

Bye,

Fabio

On 02/23/2011 10:31 AM, Mark Abraham wrote:
> 
> 
> On 02/23/11, Valeria Losasso wrote:
>>
>> Thank you Mark. I found one message from this month concerning this
>> topic, and there are some small suggestions. I don't think that such
>> changes can restore a factor of 26, but it could be worth trying to
>> see what happens. I will let you know.
> 
> They won't. The problem is that every 10 (or so) MD steps every
> processor does global communication to check nothing's gone wrong. That
> resulted from some unrelated bits of code trying to share the same
> machinery for efficiency, and treading on each others' toes.
> 
> Mark
> 
>> Valeria
>>
>>
>>
>> On Wed, 23 Feb 2011, Mark Abraham wrote:
>>
>> >
>> >
>> >On 02/23/11, Valeria Losasso  wrote:
>> >
>> >  Dear all,
>> >  I am making some tests to start using replica exchange
>> molecular dynamics on my system in water. The setup is ok
>> >  (i.e. one replica alone runs correctly), but I am not able to
>> parallelize the REMD. Details follow:
>> >
>> >  - the test is on 8 temperatures, so 8 replicas
>> >  - Gromacs version 4.5.3
>> >  - One replica alone, in 30 minutes with 256 processors, makes
>> 52500 steps. 8 replicas with 256x8 = 2048
>> >  processors, make 300 (!!) steps each = 2400 in total (I arrived
>> to these numbers just to see some update of the
>> >  log file: since I am running on a big cluster, I can not use
>> more than half an hour for tests with less than 512
>> >  processors)
>> >  - I am using mpirun with options -np 256 -s  md_.tpr -multi 8
>> -replex 1000
>> >
>> >
>> >There have been two threads on this topic in the last month or so,
>> please check the archives. The implementation of
>> >multi-simulations scales poorly. The scaling of replica-exchange
>> itself is not great either. I have a working version under
>> >final development that scales much better. Watch this space.
>> >
>> >Mark
>> >


-- 
*
Fabio Affinito, PhD
CINECA
SuperComputing Applications and Innovation Department - SCAI
Via Magnanelli, 6/3
40033 Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)


[gmx-users] domain decomposition error

2010-11-17 Thread Fabio Affinito
Hi,
I'm trying to run a simulation with 4.5.3 (double precision) on BlueGene.
I get this error:


> NOTE: Turning on dynamic load balancing
> 
> vol 0.35! imb F 54% pme/F 2.06 step 100, remaining runtime: 3 s
> vol 0.33! imb F 48% pme/F 2.09 step 200, remaining runtime: 2 s
> vol 0.35! imb F 28% pme/F 2.42 step 300, remaining runtime: 1 s
> vol 0.34! imb F 26% pme/F 2.46 step 400, remaining runtime: 0 s
> 
> ---
> Program mdrun_mpi_bg_d, VERSION 4.5.3
> Source code file: domdec.c, line: 3581
> 
> Fatal error:
> Step 490: The X-size (0.78) times the triclinic skew factor (1.00) is 
> smaller than the smallest allowed cell size (0.80) for domain 
> decomposition grid cell 4 2 2
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
> 
> "Everything's formed from particles" (Van der Graaf Generator)
> 
> Error on node 188, will try to stop all the nodes
> Halting parallel program mdrun_mpi_bg_d on CPU 188 out of 256
> 

Can anybody give me a hint about this?

Thanks in advance,

Fabio
-- 
*
Fabio Affinito, PhD
CINECA
SuperComputing Applications and Innovation Department - SCAI
Via Magnanelli, 6/3
40033 Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)


[gmx-users] problem linking fftw2 with gromacs 4.5.3

2010-11-17 Thread Fabio Affinito
Hi folks,
I'm experiencing a problem when I try to compile gromacs 4.5.3
against fftw.
This is the configure:
> ./configure --prefix=/bgp/userinternal/cin0644a/gromacs.453 \
>   --host=ppc --build=ppc64 --enable-ppc-sqrt=1 \
>   --disable-ppc-altivec \
>   --enable-bluegene --enable-mpi --with-fft=fftw2 \
>   --program-suffix=_mpi_bg_dp \
>   --without-x \
>  CC=mpixlc_r \
>  CFLAGS="-O3 -qarch=450d -qtune=450" \
>  MPICC=mpixlc_r CXX=mpixlC_r \
>  CXXFLAGS="-O3 -qarch=450d -qtune=450" \
>  CPPFLAGS="-I/bgp/userinternal/cin0644a/fftw2/include" \
>  F77=mpixlf77_r FFLAGS="-O3 -qarch=auto -qtune=auto" \
>  LDFLAGS="-L/bgp/userinternal/cin0644a/fftw2/lib" 

This is the error during make:
> mkdir .libs
> libtool: link: cannot find the library `../mdlib/libmd_mpi.la' or unhandled 
> argument `../mdlib/libmd_mpi.la'
> make[1]: *** [libgmxpreprocess_mpi.la] Error 1
> make[1]: Leaving directory 
> `/bgp/userinternal/cin0644a/gromacs-4.5.3/src/kernel'

The directory /lib:

> libfftw.a   libfftw_threads.a   librfftw.a   librfftw_threads.a
> libfftw.la  libfftw_threads.la  librfftw.la  librfftw_threads.la

The directory /include:

> fftw.h  fftw_threads.h  rfftw.h  rfftw_threads.h

Do you have any hints?

Thanks in advance,

Fabio



-- 
*
Fabio Affinito, PhD
CINECA
SuperComputing Applications and Innovation Department - SCAI
Via Magnanelli, 6/3
40033 Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)


Re: [gmx-users] problem linking fftw2 with gromacs 4.5.3

2010-11-16 Thread Fabio Affinito
Sorry, Mark,
the problem was in fact in the compilation of fftw.
I solved everything by compiling it in the simplest way:
./configure --prefix=whatelse
and now everything works fine.
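
For the record, the whole fix was nothing more than the stock build (the prefix
is just an example path; FFTW 2's default build is double precision, which is
what the double-precision GROMACS here links against):

# plain FFTW 2 build, no precision or architecture flags at all
./configure --prefix=$HOME/fftw2
make
make install

and then pointing CPPFLAGS and LDFLAGS at that prefix, as in the GROMACS
configure line quoted below.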

Thank you

On 11/16/2010 04:18 PM, Mark Abraham wrote:
> 
> 
> - Original Message -
> From: Fabio Affinito 
> Date: Wednesday, November 17, 2010 1:58
> Subject: [gmx-users] problem linking fftw2 with gromacs 4.5.3
> To: gmx-users@gromacs.org
> 
>> Hi folks,
>> I'm experiencing some problem when I try to compile gromacs 4.5.3
>> linking fftw.
>> This is the configure:
>> > ./configure --prefix=/bgp/userinternal/cin0644a/gromacs.453 \
>> >   --host=ppc --build=ppc64 --enable-ppc-sqrt=1 \
>> >   --disable-ppc-altivec \
>> >   --enable-bluegene --enable-mpi --with-fft=fftw2 \
>> >   --program-suffix=_mpi_bg_dp \
>> >   --without-x \
>> >  CC=mpixlc_r \
>> >  CFLAGS="-O3 -qarch=450d -qtune=450" \
>> >  MPICC=mpixlc_r CXX=mpixlC_r \
>> >  CXXFLAGS="-O3 -qarch=450d -qtune=450" \
>> >  CPPFLAGS="-I/bgp/userinternal/cin0644a/fftw2/include" \
>> >  F77=mpixlf77_r FFLAGS="-O3 -qarch=auto -qtune=auto" \
>> >  LDFLAGS="-L/bgp/userinternal/cin0644a/fftw2/lib"
>>
>> This is the error during the make
>> > mkdir .libs
>> > libtool: link: cannot find the library `../mdlib/libmd_mpi.la'
>> or unhandled argument `../mdlib/libmd_mpi.la'
>> > make[1]: *** [libgmxpreprocess_mpi.la] Error 1
>> > make[1]: Leaving directory `/bgp/userinternal/cin0644a/gromacs-
>> 4.5.3/src/kernel'
>> The directory /lib:
>>
>> > libfftw.a   libfftw_threads.a  
>> librfftw.a   librfftw_threads.a
>> > libfftw.la  libfftw_threads.la  librfftw.la 
>> librfftw_threads.la
>> The directory /include:
>>
>> > fftw.h  fftw_threads.h  rfftw.h  rfftw_threads.h
>>
>> Do you have any hint?
> 
> I'd be looking for errors higher up in the output, where
> mdlib/libmd_mpi.la got (not) made.
> 
> Mark
> 
>>
>> Thanks in advance,
>>
>> Fabio
>>
>>
>>
>> --
>> *
>> Fabio Affinito, PhD
>> CINECA
>> SuperComputing Applications and Innovation Department - SCAI
>> Via Magnanelli, 6/3
>> 40033 Casalecchio di Reno (Bologna) ITALY
>> +39/051/6171794 (Phone)
>> --
>> gmx-users mailing listgmx-users@gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-requ...@gromacs.org.
>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 


-- 
*
Fabio Affinito, PhD
CINECA
SuperComputing Applications and Innovation Department - SCAI
Via Magnanelli, 6/3
40033 Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)


Re: [gmx-users] g_cluster: optimal cutoff

2010-11-05 Thread Fabio Affinito
On 10/31/2010 08:20 PM, Valeria Losasso wrote:
> Dear all,
> for my cluster analysis I am using the g_cluster tool with the gromos method.
> The problem is that I have to compare the results for systems of different
> lengths, and of course the result of the cluster analysis changes according
> to the cutoff chosen. So what would be a good choice in this case?
> I was thinking about different possibilities, namely: i) choosing - as is
> quite frequent in the literature - an arbitrary cutoff (like the default
> 0.1), but using the same one for different systems would probably not be
> suitable...
> ii) looking at the RMSD distribution for every case and choosing the minimum
> between the two peaks - in this case the cutoff would vary for every
> system; iii) choosing for every system the cutoff that puts 50% of the
> structures in the largest cluster - also in this case the cutoff
> would differ between cases...
> 
> Any hint?
> Thanks a lot,
> Valeria
> 
> 
> 
Option ii) is the right one.
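
A minimal sketch of that procedure (trajectory/tpr names and the final cutoff
value are placeholders):

# first pass: write out the pairwise RMSD distribution
g_cluster -s topol.tpr -f traj.xtc -method gromos -dist rmsd-dist.xvg

# read off the minimum between the two peaks in rmsd-dist.xvg, then rerun
# with that value as the cutoff
g_cluster -s topol.tpr -f traj.xtc -method gromos -cutoff 0.12 -g cluster.log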

Fabio

-- 
*
Fabio Affinito, PhD
CINECA
SuperComputing Applications and Innovation Department - SCAI
Via Magnanelli, 6/3
40033 Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)


Re: [gmx-users] ffyw3f library not found..

2010-09-24 Thread Fabio Affinito
Hi Mark,
thanks for the explanation. Anyway, I had this kind of problem only when
configuring for the frontend, whereas everything was OK with fftw3 when I
configured and compiled for the compute nodes.

Fabio

On 09/23/2010 03:20 PM, Mark Abraham wrote:
> 
> 
> - Original Message -
> From: Fabio Affinito 
> Date: Thursday, September 23, 2010 20:54
> Subject: Re: [gmx-users] ffyw3f library not found..
> To: Discussion list for GROMACS users 
> 
>> On 09/23/2010 12:27 PM, Kamalesh Roy wrote:
>> > Dear users
>> >
>> > I am trying to install Gromacs-4.5 with fftw-3.2.2 in
>> Fedora version 9,
>> > when I am trying to install Gromacs
>> > after installing fftw it is giving me an error that the fftw3f
>> library was not found.
>> >
>> > Please help me.
>> >
>> > --
>> > Regards
>> > Kamalesh Roy
>> >
>>
>> Same problem here.
>> Using:
>> ./configure --prefix=/bgp/userinternal/cin0644a/gromacs \
>>  --enable-ppc-sqrt \
>>  --disable-ppc-altivec \
>>  --with-fft=fftw3 \
>>  --without-x \
>>  CFLAGS="-O3 -qarch=auto -qtune=auto" \
>>  CC="xlc_r -q64" \
>>  CXX="xlC_r -q64" \
>>  CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
>>  CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
>>  F77="xlf_r -q64" \
>>  FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
>>  LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"
>> is fine with 4.0.7 but it generates
>> checking for main in -lfftw3f... no
>> configure: error: Cannot find fftw3f library
>> in 4.5.1
>>
>> I successfully installed 4.5.1 only using the --with-fft=fftpack
> 
> Don't use that, it will be diabolically slow for PME simulations!
> 
> Unfortunately, the BlueGene-specific GROMACS installation instructions
> didn't mention that the precision of GROMACS and FFTW has to
> match. They do now.
> 
> Presumably both you and the original poster need to install a
> single-precision copy of FFTW, or configure a double-precision version
> of GROMACS (if you know you need double precision, or don't mind your
> simulation being much slower).
> 
> Mark
> 
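
In practice that means something like this for the single-precision route (the
prefix is only an example):

# build a single-precision FFTW 3, which provides the libfftw3f that a
# single-precision GROMACS configure looks for
./configure --prefix=$HOME/fftw3-single --enable-single
make
make install

and then keeping CPPFLAGS/LDFLAGS pointed at that prefix, exactly as in the
configure quoted above.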


-- 
*
Fabio Affinito, PhD
CINECA
InterUniversity Computer Center
Via Magnanelli, 6/3
Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)
e-mail: f.affin...@cineca.it


[gmx-users] fftw and configure in 4.0.7 vs 4.5.1

2010-09-24 Thread Fabio Affinito
Hi everybody,
I'm experiencing several problems when running configure for 4.5.1. More
specifically, it seems it doesn't find the fftw libraries.
This seems strange because the same options work fine with 4.0.7.
These are my configure flags:

./configure --prefix=/bgp/userinternal/cin0644a/gromacs \
 --enable-ppc-sqrt \
 --disable-ppc-altivec \
 --with-fft=fftw3 \
 --without-x \
 --enable-shared \
 CFLAGS="-O3 -qarch=auto -qtune=auto" \
 CC="xlc_r -q64" \
 CXX="xlC_r -q64" \
 CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
 CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
 F77="xlf_r -q64" \
 FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
 LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"

This is my error in 4.5.1
checking for sqrt in -lm... yes
checking for fftw3.h... yes
checking for main in -lfftw3f... no
configure: error: Cannot find fftw3f library

Whereas in 4.0.7 everything works.

Any hints?

Thanks in advance,

Fabio

-- 
*
Fabio Affinito, PhD
CINECA
InterUniversity Computer Center
Via Magnanelli, 6/3
Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)
e-mail: f.affin...@cineca.it


Re: [gmx-users] ffyw3f library not found..

2010-09-23 Thread Fabio Affinito
On 09/23/2010 12:27 PM, Kamalesh Roy wrote:
> Dear users
> 
> I am trying to install Gromacs-4.5 with fftw-3.2.2 in Fedora version 9,
> when I am trying to install Gromacs
> after installing fftw it is giving me an error that the fftw3f library was not found.
> 
> Please help me.
> 
> -- 
> Regards
> Kamalesh Roy
> 

Same problem here.
Using:
./configure --prefix=/bgp/userinternal/cin0644a/gromacs \
 --enable-ppc-sqrt \
 --disable-ppc-altivec \
 --with-fft=fftw3 \
 --without-x \
 CFLAGS="-O3 -qarch=auto -qtune=auto" \
 CC="xlc_r -q64" \
 CXX="xlC_r -q64" \
 CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
 CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
 F77="xlf_r -q64" \
 FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
 LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"
is fine with 4.0.7 but it generates
checking for main in -lfftw3f... no
configure: error: Cannot find fftw3f library
in 4.5.1

I successfully installed 4.5.1 only using the --with-fft=fftpack

Fabio



-- 
*
Fabio Affinito, PhD
CINECA
InterUniversity Computer Center
Via Magnanelli, 6/3
Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)
e-mail: f.affin...@cineca.it


Re: [gmx-users] error on compilation on BlueGene/P

2010-09-21 Thread Fabio Affinito
As on the frontend, --enable-fortran is the problem.
Maybe it would be useful to update the instructions page :)

Fabio
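
For reference, with --enable-fortran dropped the compute-node configure from
the quoted message below becomes something like the sketch here (all other
flags kept as quoted; this is just that invocation with the one option
removed, not a tested recipe):

../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
  --host=ppc --build=ppc64 --enable-ppc-sqrt=1 \
  --disable-ppc-altivec \
  --enable-bluegene --enable-mpi \
  --with-fft=fftw3 \
  --program-suffix=_mpi_bg \
  --without-x \
 CC=mpixlc_r \
 CFLAGS="-O3 -qarch=450d -qtune=450" \
 MPICC=mpixlc_r CXX=mpixlC_r \
 CXXFLAGS="-O3 -qarch=450d -qtune=450" \
 CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
 F77=mpixlf77_r FFLAGS="-O3 -qarch=auto -qtune=auto" \
 LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib" \
 LIBS="-lmass"

After changing configure options it is usually safest to start from a clean
build tree (or run make distclean) before repeating make mdrun, so that
libraries such as ../mdlib/libmd_mpi.la are rebuilt from scratch.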


On 09/21/2010 10:54 AM, Mark Abraham wrote:
> 
> 
> - Original Message -
> From: Fabio Affinito 
> Date: Tuesday, September 21, 2010 18:51
> Subject: Re: [gmx-users] error on compilation on BlueGene/P
> To: Discussion list for GROMACS users 
> 
>> Thank you, Mark and Berk!
>> your suggestion was helpful and I successfully compiled on the
>> frontend. Now I have a problem when I compile on the compute nodes.
>> Configure was fine with these parameters:
>>
>> ../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
>>   --host=ppc --build=ppc64 --enable-ppc-sqrt=1 \
>>   --disable-ppc-altivec \
>>   --enable-bluegene --enable-fortran --enable-mpi \
>>   --with-fft=fftw3 \
>>   --program-suffix=_mpi_bg \
>>   --without-x \
>>  CC=mpixlc_r \
>>  CFLAGS="-O3 -qarch=450d -qtune=450" \
>>  MPICC=mpixlc_r CXX=mpixlC_r \
>>  CXXFLAGS="-O3 -qarch=450d -qtune=450" \
>>  CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
>>  F77=mpixlf77_r FFLAGS="-O3 -qarch=auto -qtune=auto" \
>>  LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib" \
>>  LIBS="-lmass"
>>
>> But when I compile using "make mdrun" the compilation stops in
>> this way.
>>
>> creating libgmxpreprocess_mpi.la
>> (cd .libs && rm -f libgmxpreprocess_mpi.la && ln -s
>> ../libgmxpreprocess_mpi.la libgmxpreprocess_mpi.la)
>> make[1]: *** No rule to make target `../mdlib/libmd_mpi.la',
>> needed by
>> `mdrun'.  Stop.
>> make[1]: Leaving directory
>> `/bgp/userinternal/cin0644a/gmx/bgp/src/kernel'
>> Any suggestion? Thanks!
> 
> 
> Normally that would be symptomatic of an earlier error in make (or maybe
> configure). Please check carefully.
> 
> Mark
> 


-- 
*
Fabio Affinito, PhD
CINECA
InterUniversity Computer Center
Via Magnanelli, 6/3
Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)
e-mail: f.affin...@cineca.it


Re: [gmx-users] error on compilation on BlueGene/P

2010-09-21 Thread Fabio Affinito
Thank you, Mark and Berk!
your suggestion was helpful and I successfully compiled on the frontend.
Now I have a problem when I compile on the compute nodes.
Configure was fine with these parameters:

../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
  --host=ppc --build=ppc64 --enable-ppc-sqrt=1 \
  --disable-ppc-altivec \
  --enable-bluegene --enable-fortran --enable-mpi \
  --with-fft=fftw3 \
  --program-suffix=_mpi_bg \
  --without-x \
 CC=mpixlc_r \
 CFLAGS="-O3 -qarch=450d -qtune=450" \
 MPICC=mpixlc_r CXX=mpixlC_r \
 CXXFLAGS="-O3 -qarch=450d -qtune=450" \
 CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
 F77=mpixlf77_r FFLAGS="-O3 -qarch=auto -qtune=auto" \
 LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib" \
 LIBS="-lmass"

But when I compile using "make mdrun" the compilation stops in this way.

creating libgmxpreprocess_mpi.la
(cd .libs && rm -f libgmxpreprocess_mpi.la && ln -s
../libgmxpreprocess_mpi.la libgmxpreprocess_mpi.la)
make[1]: *** No rule to make target `../mdlib/libmd_mpi.la', needed by
`mdrun'.  Stop.
make[1]: Leaving directory `/bgp/userinternal/cin0644a/gmx/bgp/src/kernel'

Any suggestion? Thanks!

Fabio


On 09/21/2010 12:03 AM, Mark Abraham wrote:
> Hi,
> 
> IIRC GROMACS has done something radical to FORTRAN inner loops (like
> removing them) since those instructions were written. Removing
> --enable-fortran will make your symptoms go away. The C inner loops will
> be fine, should you ever be running mdrun on the front end nodes.
> 
>> ... and if I add the --enable-bluegene flag :
>>
>> "../../../../../src/gmxlib/nonbonded/nb_kernel_bluegene/nb_kernel_gen_bluegene.h",
>> line 163.21: 1506-1231 (S) The built-in function "__fpsub" is not valid
>> for the target architecture.
>>
>> and more similar errors.
> 
> Sure. --enable-bluegene is only useful for the mdrun binary for the
> compute system.
> 
> Mark
> 
>> On 09/20/2010 05:35 PM, Fabio Affinito wrote:
>> > Hi all,
>> > I'm trying to install Gromacs on BG/P following the instruction reported
>> > here:
>> > http://www.gromacs.org/Downloads/Installation_Instructions/GROMACS_on_BlueGene
>> >
>> > I ran configure:
>> > ../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
>> >  --enable-ppc-sqrt \
>> >  --disable-ppc-altivec \
>> >  --enable-fortran \
>> >  --with-fft=fftw3 \
>> >  --without-x \
>> >  CFLAGS="-O3 -qarch=auto -qtune=auto" \
>> >  CC="xlc_r -q64" \
>> >  CXX="xlC_r -q64" \
>> >  CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
>> >  CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
>> >  F77="xlf_r -q64" \
>> >  FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
>> >  LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"
>> >
>> > But when I compile with make I get this error:
>> >
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
>> > line 42.10: 1506-296 (S) #include file "nbkernel010_f77_single.h" not found.
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
>> > line 43.10: 1506-296 (S) #include file "nbkernel020_f77_single.h" not found.
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
>> > line 44.10: 1506-296 (S) #include file "nbkernel030_f77_single.h" not found.
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
>> > line 45.10: 1506-296 (S) #include file "nbkernel100_f77_single.h" not found.
>> > [...]
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
>> > line 114.5: 1506-045 (S) Undeclared identifier nbkernel010_f77_single.
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
>> > line 115.5: 1506-045 (S) Undeclared identifier nbkernel020_f77_single.
>> > "../../../../../src/gmxlib/nonbonded/nb_kernel_

Re: [gmx-users] error on compilation on BlueGene/P

2010-09-20 Thread Fabio Affinito
... and if I add the --enable-bluegene flag :

"../../../../../src/gmxlib/nonbonded/nb_kernel_bluegene/nb_kernel_gen_bluegene.h",
line 163.21: 1506-1231 (S) The built-in function "__fpsub" is not valid
for the target architecture.

and more similar errors.


F.

On 09/20/2010 05:35 PM, Fabio Affinito wrote:
> Hi all,
> I'm trying to install Gromacs on BG/P following the instruction reported
> here:
> http://www.gromacs.org/Downloads/Installation_Instructions/GROMACS_on_BlueGene
> 
> I ran configure:
> ../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
>  --enable-ppc-sqrt \
>  --disable-ppc-altivec \
>  --enable-fortran \
>  --with-fft=fftw3 \
>  --without-x \
>  CFLAGS="-O3 -qarch=auto -qtune=auto" \
>  CC="xlc_r -q64" \
>  CXX="xlC_r -q64" \
>  CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
>  CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
>  F77="xlf_r -q64" \
>  FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
>  LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"
> 
> But when I compile with make I get this error:
> 
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 42.10: 1506-296 (S) #include file "nbkernel010_f77_single.h" not found.
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 43.10: 1506-296 (S) #include file "nbkernel020_f77_single.h" not found.
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 44.10: 1506-296 (S) #include file "nbkernel030_f77_single.h" not found.
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 45.10: 1506-296 (S) #include file "nbkernel100_f77_single.h" not found.
> [...]
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 114.5: 1506-045 (S) Undeclared identifier nbkernel010_f77_single.
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 115.5: 1506-045 (S) Undeclared identifier nbkernel020_f77_single.
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 116.5: 1506-045 (S) Undeclared identifier nbkernel030_f77_single.
> "../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
> line 117.5: 1506-045 (S) Undeclared identifier nbkernel100_f77_single.
> 
> 
> Do you have any hint about that?
> 
> Thanks in advance!
> 
> F.
> 
> 


-- 
*
Fabio Affinito, PhD
CINECA
InterUniversity Computer Center
Via Magnanelli, 6/3
Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)
e-mail: f.affin...@cineca.it


[gmx-users] error on compilation on BlueGene/P

2010-09-20 Thread Fabio Affinito
Hi all,
I'm trying to install Gromacs on BG/P following the instruction reported
here:
http://www.gromacs.org/Downloads/Installation_Instructions/GROMACS_on_BlueGene

I ran configure:
../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
 --enable-ppc-sqrt \
 --disable-ppc-altivec \
 --enable-fortran \
 --with-fft=fftw3 \
 --without-x \
 CFLAGS="-O3 -qarch=auto -qtune=auto" \
 CC="xlc_r -q64" \
 CXX="xlC_r -q64" \
 CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
 CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
 F77="xlf_r -q64" \
 FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
 LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"

But when I compile with make I get this error:

"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 42.10: 1506-296 (S) #include file "nbkernel010_f77_single.h" not found.
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 43.10: 1506-296 (S) #include file "nbkernel020_f77_single.h" not found.
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 44.10: 1506-296 (S) #include file "nbkernel030_f77_single.h" not found.
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 45.10: 1506-296 (S) #include file "nbkernel100_f77_single.h" not found.
[...]
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 114.5: 1506-045 (S) Undeclared identifier nbkernel010_f77_single.
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 115.5: 1506-045 (S) Undeclared identifier nbkernel020_f77_single.
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 116.5: 1506-045 (S) Undeclared identifier nbkernel030_f77_single.
"../../../../../src/gmxlib/nonbonded/nb_kernel_f77_single/nb_kernel_f77_single.c",
line 117.5: 1506-045 (S) Undeclared identifier nbkernel100_f77_single.


Do you have any hint about that?

Thanks in advance!

F.
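
Following the resolution in the replies above (the FORTRAN inner loops have
been removed from GROMACS, so --enable-fortran should simply be dropped, and
--enable-bluegene is only meant for the compute-node mdrun), the front-end
configure reduces to the same invocation minus that one flag; shown here only
as a sketch, not a tested recipe:

../configure --prefix=/bgp/userinternal/cin0644a/gromacs \
 --enable-ppc-sqrt \
 --disable-ppc-altivec \
 --with-fft=fftw3 \
 --without-x \
 CFLAGS="-O3 -qarch=auto -qtune=auto" \
 CC="xlc_r -q64" \
 CXX="xlC_r -q64" \
 CXXFLAGS="-O3 -qarch=auto -qtune=auto" \
 CPPFLAGS="-I/bgp/userinternal/cin0644a/fftwlibs/include" \
 F77="xlf_r -q64" \
 FFLAGS="-O3 -qnoprefetch -qarch=auto -qtune=auto" \
 LDFLAGS="-L/bgp/userinternal/cin0644a/fftwlibs/lib"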


-- 
*
Fabio Affinito, PhD
CINECA
InterUniversity Computer Center
Via Magnanelli, 6/3
Casalecchio di Reno (Bologna) ITALY
+39/051/6171794 (Phone)
e-mail: f.affin...@cineca.it


Re: [gmx-users] centering molecule in the water box

2008-07-01 Thread Fabio Affinito

Peyman,
the protein moves because the c.o.m. motion wasn't subtracted during  
the dynamics.





On Tuesday 01 July 2008 12:24, Fabio Affinito wrote:

I think if it moves, then there is something more basic wrong: do you remove
your center of mass motion appropriately? Is your box homogeneously
equilibrated? But if it does not move and it only looks as if it moved, then
it's a visual problem and not important! You could e.g. write something to cut
the solvent molecules from one side and put them on the other side with some
mapping dependent on your box type, if you would like to see your molecule at
the center.
Btw, self-diffusion is the diffusion of one molecule through others of its own
kind, not through other molecules, solvent or otherwise. If there is no
chemical potential (~concentration) difference, no mass transfer would take
place.


Peyman





Fabio Affinito, PhD
SISSA/ISAS -Statistical and Biological Physics
Via Beirut, 4
34014 Trieste
ITALY
email: [EMAIL PROTECTED]  phone:+39 040 3787 303  fax:+39 040 3787 528





Re: [gmx-users] centering molecule in the water box

2008-07-01 Thread Fabio Affinito

I choosed "Protein" for centering and "System" for output.
Same stuff with editconf.

F.

Fabio Affinito, PhD
email: [EMAIL PROTECTED]  phone:+39 040 3787 303  fax:+39 040 3787 528




[gmx-users] centering molecule in the water box

2008-07-01 Thread Fabio Affinito
The protein experiences self-diffusion and so it moves through the
simulation box.

I tried also with editconf but the result is the same.

F.




[gmx-users] centering molecule in the water box

2008-07-01 Thread Fabio Affinito

Hi all,
During my MD the molecule experiences a drift. Now I want to put the
molecule at the center of the water box.
I tried with trjconv using the -pbc mol and -center flags and using a
reference frame where the molecule is at the center of the box.
It seems that the whole box (water+molecule) is translated, so the
position of the molecule relative to the box is unchanged.
I also searched the gmx-users list and didn't find any useful
suggestion.


Fabio
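
A post-processing sequence that is often suggested on this list for a
drifting solute, shown only as a sketch (file names are placeholders and the
flags are those of trjconv from the 3.3/4.x series):

trjconv -s topol.tpr -f traj.xtc -o traj_nojump.xtc -pbc nojump
trjconv -s topol.tpr -f traj_nojump.xtc -o traj_centered.xtc -center -pbc mol

choosing "Protein" for centering and "System" for output in the second step.
The first pass removes the periodic jumps so the protein follows a continuous
path; the second pass then centers it and wraps the solvent back into the box,
which keeps the protein in the middle of the water box instead of translating
the whole system.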



Re: RE: [gmx-users] crash nsgrid.c

2008-05-09 Thread Fabio Affinito

Hi Berk,
thanks for your answer. I've checked the charge groups and everything
seems to be OK (no single groups...). Actually I'm using version 3.2.1,
but it doesn't seem to depend on that.

What could I check?

Fabio
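
Since grompp in 3.2.1 does not perform the charge-group check Berk mentions
below, one cheap test is to preprocess the same inputs once with a 3.3.3
installation and read its warnings. The file names here are generic
placeholders, not the ones from this simulation:

grompp -f input.mdp -c conf.gro -p topol.top -o check.tpr

If the topology really does contain an oversized (e.g. whole-protein) charge
group, the newer grompp should complain at this stage; the resulting .tpr
does not need to be run.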

On May 8, 2008, at 4:00 PM, [EMAIL PROTECTED] wrote:


--

Message: 3
Date: Thu, 8 May 2008 15:21:16 +0200
From: Berk Hess <[EMAIL PROTECTED]>
Subject: RE: [gmx-users] crash nsgrid.c
To: Discussion list for GROMACS users 
Message-ID: <[EMAIL PROTECTED]>
Content-Type: text/plain; charset="iso-8859-1"

Hi,
There can be many reasons for such problems. I assume you use solvent.
One option is that you could have made your whole protein a single charge
group. grompp in Gromacs 3.3.3 checks for this, but older versions do not.
Berk



[gmx-users] crash nsgrid.c

2008-05-08 Thread Fabio Affinito


Dear all,
During my simulations of a GFP protein (force field made with amber,  
then converted to gromacs), I usually get this error:


Fatal error: ci = 8108 should be in 0 .. 7937 [FILE nsgrid.c, LINE 218]

What does that mean?

If I restart the simulation from the frame just before the crash, the
run is OK for a (random) time and then it crashes again.
I checked this on different kinds of machines and I always get the same
behavior.


To make things clearer, I include the input.mdp file:

title=
cpp  = /lib/cpp
include  =
define   =
integrator   = md
dt   = 0.002
nsteps   = 50
nstxout  = 1
nstvout  = 0
nstlog   = 500
nstenergy= 500
nstxtcout= 500
energygrps   = System
pbc  = xyz
nstlist  = 15
epsilon_r= 1.
ns_type  = grid
coulombtype  = pme
vdwtype  = Cut-off
rlist= 0.8
rcoulomb = 0.8
rvdw = 0.8
tcoupl   = Nose-Hoover
tc-grps  = System
tau_t= 2.0
ref_t= 300.00
pcoupl   = Parrinello-Rahman
pcoupltype   = isotropic
tau_p= 4.0
compressibility  = 4.5e-5
ref_p= 1.0
gen_vel  = no
gen_seed = 173529
constraints  = All-bonds
constraint_algorithm = lincs
shake_tol= 0.0001

Any suggestion is welcome.

Thanks

F.A.


[gmx-users] fatal error in nsgrid

2007-09-28 Thread Fabio Affinito
I've been running a protein MD simulation on 16 processors. During the
simulation I got a crash, and the standard error reported this:


Fatal error: ci = 10280 should be in 0 .. 10050 [FILE nsgrid.c, LINE 218]
[0] MPI Abort by user Aborting program !
[0] Aborting program!


What could that be?

Fabio


Re: [gmx-users] dPCA - trajectories

2007-08-28 Thread Fabio Affinito


On 29/Aug/07, at 04:11, Mark Abraham wrote:


Dear all,

Using

g_angle -f ../trj1_0_100.xtc -s ../g_prot.tpr -n test.ndx -or -type
dihedral -e 1



g_covar -f traj.trr -s dummy.gro -nofit

It seems like the trajectory is of a different length, because a
different timestep seems to be used:


Well, you are using a different trajectory file...


It is different because it is made of angles, not of coordinates. But
the length, the number of frames and the timestep should be the
same. Shouldn't they?



Fabio



Mark




Fabio Affinito, PhD
SISSA/ISAS - Statistical and Biological Physics

Via Beirut, 4   email: [EMAIL PROTECTED]
34014 Trieste   phone: +39 040 3787 303
ITALY   fax: +39 040 3787 528





[gmx-users] dPCA - trajectories

2007-08-28 Thread Fabio Affinito

Dear all,

Using

g_angle -f ../trj1_0_100.xtc -s ../g_prot.tpr -n test.ndx -or -type 
dihedral -e 1


As expected, I obtained the following output:

[cut]
Group 0 (Backbone) has   664 elements
There is one group in the index
Reading file ../g_prot.tpr, VERSION 3.2.1 (single precision)
Last frame  25000 time 1.000
There are 166 dihedrals. Will fill 111 atom positions with cos/sin
[cut]

Then, when using the covariance analysis

g_covar -f traj.trr -s dummy.gro -nofit

It seems like the trajectory is of a different length, because a 
different timestep seems to be used:


[cut]
Calculating the average structure ...
trn version: GMX_trn_file (double precision)
Last frame  25000 time 25000.000

Constructing covariance matrix (333x333) ...
Last frame  25000 time 25000.000
Read 25001 frames
[cut]

I suppose it's just because in the new trajectory the information of the 
timestep length is lost. Am I right?


Thank you,

Fabio
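
Judging from the output quoted above, the cos/sin pseudo-trajectory written
by g_angle -or appears to use the frame index as the time (frame 25000
carries time 25000.000), so the original frame spacing is indeed not
preserved. If matching time labels matter for later analysis, trjconv can
rewrite them; the 4 ps value below is purely a placeholder for whatever the
real spacing was, which is not stated in this thread:

trjconv -f traj.trr -o traj_retimed.trr -timestep 4

For the covariance analysis itself the time stamps are irrelevant: g_covar
only uses the frame contents.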



[gmx-users] Re: mk_angndx

2007-06-26 Thread Fabio Affinito

angle multiplicity force constant



so, does it refer to the angle values contained in the topology files?

Thank you,

Fabio


[gmx-users] mk_angndx

2007-06-26 Thread Fabio Affinito

Hi everybody,
what's the meaning of the labels in the angle.ndx file generated by
mk_angndx?

For example, I've got groups named [ Phi=0.0_3_3.77 ]..
what does that mean?

Thanks,

Fabio

