[gmx-users] Re: choosing force field

2013-11-08 Thread pratibha
Sorry for the previous mistake: the force field I used was 53A6, not 53A7.


On Fri, Nov 8, 2013 at 12:10 AM, Justin Lemkul [via GROMACS] <
ml-node+s5086n5012325...@n6.nabble.com> wrote:

>
>
> On 11/7/13 12:14 PM, pratibha wrote:
> > My protein contains metal ions which are parameterized only in gromos
> force
> > field. Since I am a newbie to MD simulations, it would be difficult for
> me
> > to parameterize those myself.
> > Can you please guide me, as per my previous mail: which of the two
> > simulations should I consider more reliable - 43A1 or 53A7?
>
> AFAIK, there is no such thing as 53A7, and your original message was full
> of
> similar typos, making it nearly impossible to figure out what you were
> actually
> doing.  Can you indicate the actual force field(s) that you have been
> using in
> case someone has any ideas?  The difference between 53A6 and 54A7 should
> be
> quite pronounced, in my experience, thus any guesses as to what "53A7"
> should be
> doing are not productive because I don't know what that is.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> [hidden email]  |
> (410) 706-7441
>
> ==


-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Reaction field zero, ions, twin-range and LIE

2013-11-08 Thread Williams Ernesto Miranda Delgado
Hello users
For a rerun of an MD simulation that was done with PME, but this time using
reaction-field-zero: should I remove the ions that were previously added to
neutralize the system for the PME run? And what do you think about using a
twin-range cut-off instead of reaction-field-zero? I need accurate Coul-SR
and Coul-LR terms for an LIE calculation.
Thank you again
Williams
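
[Editor's note: for reference, a reaction-field-zero rerun is selected entirely in the .mdp file; a minimal sketch might look like the fragment below. The cut-off values are illustrative assumptions, not settings from this thread, and the GROMACS manual of that era notes that RF-zero uses exact cut-offs, so rlist should exceed rcoulomb.]

```
; Illustrative fragment for a reaction-field-zero rerun (values are assumptions)
coulombtype   = Reaction-Field-zero
epsilon_rf    = 0          ; implied by RF-zero
rcoulomb      = 1.4
rlist         = 1.6        ; RF-zero uses exact cut-offs, so rlist > rcoulomb
vdwtype       = Cut-off
rvdw          = 1.4
```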



[gmx-users] Re: Ligand simulation for LIE with PME

2013-11-08 Thread Williams Ernesto Miranda Delgado
Greetings again
If I neutralize the protein-ligand complex with a salt concentration and run
MD using PME, and the ligand itself is neutral, should I run the ligand-only
MD simulation without adding any salt? Could it matter for the LIE free-energy
calculation if I omit salt from the (neutral) ligand simulation, even though I
simulate the protein-ligand system with salt?
Thanks



[gmx-users] Simulation box size, LIE and PME

2013-11-08 Thread Williams Ernesto Miranda Delgado
Greetings
The discussion list has already helped me understand what to do when I want
to calculate binding free energies using LIE after MD simulations with PME.
Now I need your help choosing the simulation box sizes for the ligand and the
complex. I used -d 1.0 in editconf for the complex simulation and -d 1.6
for the ligand simulation. Are these values OK, or should I use a different
value of -d for the ligand simulation?
Thank you
Williams
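
[Editor's note: once the average ligand-surrounding interaction energies are extracted from the bound and free simulations, the standard LIE estimate is a simple linear combination. A sketch in Python follows; the alpha/beta coefficients and the example energies are illustrative placeholders, not values from this thread.]

```python
# Standard linear interaction energy (LIE) estimate:
#   dG_bind ~= alpha * (<V_vdw>_bound - <V_vdw>_free)
#            + beta  * (<V_coul>_bound - <V_coul>_free)
def lie_binding_energy(vdw_bound, coul_bound, vdw_free, coul_free,
                       alpha=0.18, beta=0.50):
    """All energies are ligand-surrounding averages in kJ/mol."""
    return alpha * (vdw_bound - vdw_free) + beta * (coul_bound - coul_free)

# Example with made-up averages (kJ/mol):
dg = lie_binding_energy(-120.0, -250.0, -90.0, -200.0)
print(round(dg, 2))  # 0.18*(-30) + 0.50*(-50) = -30.4
```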



[gmx-users] Re: free energy

2013-11-08 Thread kghm
Dear Kieu Thu 

Thanks for your comment about free energy. Unfortunately, I could not send an
email to Paissoni Cristina through the GROMACS forum. Could you give me her
email address? Finding a tool for MM/PBSA calculations with GROMACS is vital
for me.

Best Regards
Kiana



Re: [gmx-users] Question about make_ndx and g_angle

2013-11-08 Thread Chang Woon Jang
Dear Riccardo Concu, 

  The reason is that I was running the make_ndx binary from my working
directory. I solved the problem by reinstalling GROMACS as follows:

cmake .. -DGMX_BUILD_OWN_FFTW=Off
Anyway, I appreciate your comment.

Best regards,
Changwoon Jang





On Friday, November 8, 2013 3:31 AM, Riccardo Concu  wrote:
 
Dear Changwoon,
why are you using ./make_ndx instead of make_ndx? Did you try to use the
command without ./?
Regards

El jue, 07-11-2013 a las 14:12 -0800, Chang Woon Jang escribió:
> Dear Users, 
> 
>      I am using openSUSE 12.3 and am trying to use make_ndx and g_angle. When
> I try the following command, there is an error message.
> 
> > ./make_ndx -f data.pdb
> 
> ./make_ndx: error while loading shared libraries: libcudart.so.4:cannot open 
> shared object file: No such file or directory
> 
> Do I need cuda library in order to use "make_ndx" and "g_angle" ?
> 
> Thanks. 
> 
> Best regards,
> Changwoon Jang




Re: [gmx-users] Looking for advice on Monte Carlo simulations, please

2013-11-08 Thread Andrew S. Paluch
The only open-source Monte Carlo package I have ever used or know of is
MCCS Towhee:


http://towhee.sourceforge.net/

Andrew

On 11/08/2013 11:23 AM, Andrew DeYoung wrote:

Hi,

I have been using Gromacs for my MD liquid simulations for about 2 years
now, and, of course, it has been working great!

Now, in the current part of my project, I am looking at liquid structure,
not dynamics.  Of course, one can analyze liquid structure by simply
analyzing an MD trajectory.  I have run into some difficulties with
sampling, however.  I could try some "special" sampling techniques (umbrella
sampling, replica exchange, etc.), but another thing that has been suggested
to me is to try Monte Carlo simulations.  Monte Carlo simulations, of
course, do not model dynamics, but since I'm only interested in structure at
this point, Monte Carlo might be a faster, less computationally expensive
way to do sampling of possible liquid structures/configurations.

It looks like, according to the Gromacs website, a Monte Carlo
integrator is in the process of being implemented in Gromacs (maybe for
version 5.0 or so?).  If that is possible and will be funded, that would be
great, because I would just like to use my same force field, same parameters
as I have for my MD simulations--but just now I want to use Monte Carlo to
quickly generate (low-energy/high-probability) configurations.

But, until Monte Carlo is implemented in Gromacs, do you have any
suggestions of another package that I can use to do MC on my system?
Perhaps there is some MC package out there that is somewhat "similar" to
Gromacs, such that maybe I could write scripts to translate my system
topology from Gromacs to the MC package?  Do you have any experience or
advice on Monte Carlo simulations?

By the way, I hesitate to even ask questions like "when is this or that
feature going to be implemented in Gromacs?", because I know that the
development of Gromacs is a huge project and that many people are spending
lots of time and resources to continually add features!  I am not yet very
fluent in C/C++ and am not much of a programmer.  I am more of a
chemist/simulator.  So I just want to thank you all for all your work on
making Gromacs what it is!

Thanks!

Andrew DeYoung
Carnegie Mellon University



--

Andrew S. Paluch, PhD
Department of Chemical, Paper, and Biomedical Engineering
Miami University
paluc...@miamioh.edu
(513) 529-0784


[gmx-users] Looking for advice on Monte Carlo simulations, please

2013-11-08 Thread Andrew DeYoung
Hi, 

I have been using Gromacs for my MD liquid simulations for about 2 years
now, and, of course, it has been working great!

Now, in the current part of my project, I am looking at liquid structure,
not dynamics.  Of course, one can analyze liquid structure by simply
analyzing an MD trajectory.  I have run into some difficulties with
sampling, however.  I could try some "special" sampling techniques (umbrella
sampling, replica exchange, etc.), but another thing that has been suggested
to me is to try Monte Carlo simulations.  Monte Carlo simulations, of
course, do not model dynamics, but since I'm only interested in structure at
this point, Monte Carlo might be a faster, less computationally expensive
way to do sampling of possible liquid structures/configurations.

It looks like, according to the Gromacs website, a Monte Carlo
integrator is in the process of being implemented in Gromacs (maybe for
version 5.0 or so?).  If that is possible and will be funded, that would be
great, because I would just like to use my same force field, same parameters
as I have for my MD simulations--but just now I want to use Monte Carlo to
quickly generate (low-energy/high-probability) configurations.

But, until Monte Carlo is implemented in Gromacs, do you have any
suggestions of another package that I can use to do MC on my system?
Perhaps there is some MC package out there that is somewhat "similar" to
Gromacs, such that maybe I could write scripts to translate my system
topology from Gromacs to the MC package?  Do you have any experience or
advice on Monte Carlo simulations?

By the way, I hesitate to even ask questions like "when is this or that
feature going to be implemented in Gromacs?", because I know that the
development of Gromacs is a huge project and that many people are spending
lots of time and resources to continually add features!  I am not yet very
fluent in C/C++ and am not much of a programmer.  I am more of a
chemist/simulator.  So I just want to thank you all for all your work on
making Gromacs what it is!  

Thanks!

Andrew DeYoung
Carnegie Mellon University



Re: [gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-08 Thread Mark Abraham
OK, thanks.

Please open a new issue at redmine.gromacs.org, describe your observations
as above, and upload a tarball of your input files.

Mark


On Fri, Nov 8, 2013 at 2:14 PM, Qin Qiao  wrote:

> On Fri, Nov 8, 2013 at 7:18 PM, Mark Abraham  >wrote:
>
> > Hi,
> >
> > That shouldn't happen if your MPI library is working (have you tested it
> > with other programs?) and configured properly. It's possible this is a
> > known bug, so please let us know if you can reproduce it in the latest
> > releases.
> >
> > Mark
> >
> >
> > Hi,
>
> I installed different versions of gromacs with the same MPI library.
> Surprisingly, the problem doesn't occur in gromacs-4.5.1, but it is still in
> gromacs-4.6.3. The MPI version is MVAPICH2-1.9a for InfiniBand.
>
> Best,
>
> Qin
>
> On Fri, Nov 8, 2013 at 6:55 AM, Qin Qiao  wrote:
> >
> > > Dear all,
> > >
> > > I'm trying to continue a REMD simulation using gromacs4.5.5 under NPT
> > > ensemble, and I got the following errors when I tried to use 2 cores
> per
> > > replica:
> > >
> > > "[node-ib-4.local:mpi_rank_25][error_sighandler] Caught error:
> > Segmentation
> > > fault (signal 11)
> > > [node-ib-13.local:mpi_rank_63][error_sighandler] Caught error:
> > Segmentation
> > > fault (signal 11)
> > > ...
> > > "
> > >
> > > Surprisingly, it worked fine when I tried to use only 1 core per
> > replica..
> > > I have no idea what caused the problem.. Could you give me some advice?
> > >
> > > ps. the command I used is
> > > "srun .../gromacs-4.5.5-mpi-slurm/bin/mdrun_infiniband -s remd_.tpr
> > -multi
> > > 48 -replex 1000 -deffnm remd_ -cpi remd_.cpt -append"
> > >
> > > Best
> > > Qin


Re: [gmx-users] Reg error in Compilation Of Gromacs 4.6.2

2013-11-08 Thread Justin Lemkul



On 11/8/13 7:44 AM, vidhya sankar wrote:


Dear Justin, thank you for your previous reply.



I am trying to install GROMACS 4.6.2 on a cluster running CentOS.
With the following command I got the following error:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_DOUBLE=ON


  CMake Warning at CMakeLists.txt:785 (message):
  No C AVX flag found. Consider a newer compiler, or try SSE4.1 (lower
  performance).


  CMake Warning at CMakeLists.txt:802 (message):
  No C FMA4 flag found. Consider a newer compiler, or try SSE4.1 (lower
  performance).


  CMake Error at CMakeLists.txt:872 (gmx_test_avx_gcc_maskload_bug):
  GMX_TEST_AVX_GCC_MASKLOAD_BUG Macro invoked with incorrect arguments
for
  macro named: GMX_TEST_AVX_GCC_MASKLOAD_BUG


  -- The GROMACS-managed build of FFTW 3 will configure with the
following optimizations: --enable-sse2
  -- Configuring incomplete, errors occurred!

What does the aforesaid error indicate is missing? I think I need a newer
compiler (please indicate which version); is that correct or not? How do I
solve the problem, and what prerequisites do I need?



You haven't said what compiler you're using, but suffice it to say based on the 
above messages, it's too old to make use of the best optimizations.  Either 
upgrade the compiler or accept SSE4.1 optimization instead of AVX.
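
[Editor's note: as a sketch, the SSE4.1 fallback Justin mentions can be requested explicitly in the 4.6.x build system; the acceleration flag below is the 4.6-era CMake option, and the remaining options simply repeat the poster's command.]

```shell
cmake .. -DGMX_CPU_ACCELERATION=SSE4.1 \
         -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_DOUBLE=ON
```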


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] Re: CHARMM .mdp settings for GPU

2013-11-08 Thread Justin Lemkul



On 11/7/13 11:32 PM, Rajat Desikan wrote:

Dear All,
The settings that I mentioned above are from Klauda et al., for a POPE
membrane system. They can be found in charmm_npt.mdp on Lipidbook (link
below):
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html

Is there any reason not to use their .mdp parameters for a membrane-protein
system? Justin's recommendation is highly valued since I am using his
force field. Justin, your comments, please.



Careful now, it's not "my forcefield."  I derived only a very small part of it 
:)


To summarize:
Klauda et al., suggest
rlist           = 1.0
rlistlong       = 1.4
rvdw_switch     = 0.8
vdwtype         = Switch
coulombtype     = pme
DispCorr        = EnerPres  ; only useful with reaction-field and pme or pppm
rcoulomb        = 1.0
rcoulomb_switch = 0.0
rvdw            = 1.2

Justin's recommendation (per mail above)
vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2



The differences between these two sets of run parameters are very small, dealing 
mostly with Coulomb and neighbor searching cutoffs.  I would suspect that any 
difference between simulations run with these two settings would be similarly 
small or nonexistent, given that rcoulomb is a bit flexible when using PME.  The 
value of rlist is rarely mentioned in papers, so it is good that the authors 
have provided the actual input file.  Previous interpretation of CHARMM usage 
generally advised setting rcoulomb = 1.2 to remain consistent with the original 
switching/shifting functions.  That setting becomes a bit less stringent when 
using PME.
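
[Editor's note: for convenience, the recommendation above collected into a single .mdp fragment; only the cutoff-related options are shown, everything else being a normal CHARMM-in-GROMACS setup.]

```
vdwtype      = switch
rvdw-switch  = 1.0
rvdw         = 1.2
rlist        = 1.2
rlistlong    = 1.4
coulombtype  = PME
rcoulomb     = 1.2
```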


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-08 Thread Qin Qiao
On Fri, Nov 8, 2013 at 7:18 PM, Mark Abraham wrote:

> Hi,
>
> That shouldn't happen if your MPI library is working (have you tested it
> with other programs?) and configured properly. It's possible this is a
> known bug, so please let us know if you can reproduce it in the latest
> releases.
>
> Mark
>
>
> Hi,

I installed different versions of gromacs with the same MPI library.
Surprisingly, the problem doesn't occur in gromacs-4.5.1, but it is still in
gromacs-4.6.3. The MPI version is MVAPICH2-1.9a for InfiniBand.

Best,

Qin

On Fri, Nov 8, 2013 at 6:55 AM, Qin Qiao  wrote:
>
> > Dear all,
> >
> > I'm trying to continue a REMD simulation using gromacs4.5.5 under NPT
> > ensemble, and I got the following errors when I tried to use 2 cores per
> > replica:
> >
> > "[node-ib-4.local:mpi_rank_25][error_sighandler] Caught error:
> Segmentation
> > fault (signal 11)
> > [node-ib-13.local:mpi_rank_63][error_sighandler] Caught error:
> Segmentation
> > fault (signal 11)
> > ...
> > "
> >
> > Surprisingly, it worked fine when I tried to use only 1 core per
> replica..
> > I have no idea what caused the problem.. Could you give me some advice?
> >
> > ps. the command I used is
> > "srun .../gromacs-4.5.5-mpi-slurm/bin/mdrun_infiniband -s remd_.tpr
> -multi
> > 48 -replex 1000 -deffnm remd_ -cpi remd_.cpt -append"
> >
> > Best
> > Qin


[gmx-users] Reg error in Compilation Of Gromacs 4.6.2

2013-11-08 Thread vidhya sankar
 
Dear Justin, thank you for your previous reply.



I am trying to install GROMACS 4.6.2 on a cluster running CentOS.
With the following command I got the following error:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_MPI=ON -DGMX_DOUBLE=ON


 CMake Warning at CMakeLists.txt:785 (message):
 No C AVX flag found. Consider a newer compiler, or try SSE4.1 (lower
 performance). 
 

 CMake Warning at CMakeLists.txt:802 (message):
 No C FMA4 flag found. Consider a newer compiler, or try SSE4.1 (lower
 performance). 
 

 CMake Error at CMakeLists.txt:872 (gmx_test_avx_gcc_maskload_bug):
 GMX_TEST_AVX_GCC_MASKLOAD_BUG Macro invoked with incorrect arguments
for
 macro named: GMX_TEST_AVX_GCC_MASKLOAD_BUG 
 

 -- The GROMACS-managed build of FFTW 3 will configure with the
following optimizations: --enable-sse2
 -- Configuring incomplete, errors occurred! 

What does the aforesaid error indicate is missing? I think I need a newer
compiler (please indicate which version); is that correct or not? How do I
solve the problem, and what prerequisites do I need?

GROMACS 4.5.5 has already been installed successfully.

Thanks in advance



Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-08 Thread Jones de Andrade
Ok, you convinced me. I'll really give it a try. I was following the
supposition that the OpenMP implementation in the Intel compilers is not as
good as in the GNU compilers: I already tested Intel OpenMP on our cluster,
and it fared much worse than the pure MPI build.

Let's hope I can make it work together with the IMPI. I also don't want to
install OpenMPI as just a user (I did it a long time ago as root, not in
limited space).

Thanks again!


On Fri, Nov 8, 2013 at 9:21 AM, Mark Abraham wrote:

> On Fri, Nov 8, 2013 at 12:02 AM, Jones de Andrade  >wrote:
>
> > Really?
>
>
> Of course. With OpenMP it gets to use all your cores for PME+bondeds+stuff
> while the GPU does PP. Any version without openmp gets to use one core per
> domain, which is bad.
>
>
> > An what about gcc+mpi? should I expect any improvement?
> >
>
> Run how and compared with what? Using an external MPI library within a
> single node is a complete waste of time compared with the alternatives
> (thread-MPI, OpenMP, or both).
>
> Mark
>
>
> >
> >
> > On Thu, Nov 7, 2013 at 6:51 PM, Mark Abraham  > >wrote:
> >
> > > You will do much better with gcc+openmp than icc-openmp!
> > >
> > > Mark
> > >
> > >
> > > On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade  > > >wrote:
> > >
> > > > Did it a few days ago. Not so much of a problem here.
> > > >
> > > > But I compiled everything, including fftw, with it. The only error I
> > got
> > > > was that I should turn off the separable compilation, and that the
> user
> > > > must be in the group video.
> > > >
> > > > My settings are (yes, I know it should go better with openmp, but
> > openmp
> > > > goes horribly in our cluster, I don't know why):
> > > >
> > > > setenv CC  "/opt/intel/bin/icc"
> > > > setenv CXX "/opt/intel/bin/icpc"
> > > > setenv F77 "/opt/intel/bin/ifort"
> > > > setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
> > > > mkdir build
> > > > cd build
> > > > cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
> > > > -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
> > > > -DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024
> > -DBUILD_SHARED_LIBS=OFF
> > > > -DGMX_PREFER_STATIC_LIBS=ON
> > > > -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
> > > > make
> > > > make install
> > > > cd ..
> > > > rm -rf build
> > > >
> > > >
> > > > On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham <
> mark.j.abra...@gmail.com
> > > > >wrote:
> > > >
> > > > > icc and CUDA is pretty painful. I'd suggest getting latest gcc.
> > > > >
> > > > > Mark
> > > > >
> > > > >
> > > > > On Thu, Nov 7, 2013 at 2:42 PM,  wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'm having trouble compiling v 4.6.3 with GPU support using CUDA
> > > > 5.5.22.
> > > > > >
> > > > > > The configuration runs okay and I have made sure that I have set
> > > paths
> > > > > > correctly.
> > > > > >
> > > > > > I'm getting errors:
> > > > > >
> > > > > > $ make
> > > > > > [  0%] Building NVCC (Device) object
> > > > > >
> > > > >
> > > >
> > >
> >
> src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
> > > > > > icc: command line warning #10006: ignoring unknown option
> > > '-dumpspecs'
> > > > > > /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o:
> In
> > > > > > function `_start':
> > > > > > (.text+0x20): undefined reference to `main'
> > > > > > CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206
> > > > > (message):
> > > > > >   Error generating
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
> > > > > >
> > > > > >
> > > > > > make[2]: ***
> > > > > >
> > > > >
> > > >
> > >
> >
> [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
> > > > > > Error 1
> > > > > > make[1]: ***
> [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all]
> > > > Error
> > > > > 2
> > > > > > make: *** [all] Error 2
> > > > > >
> > > > > > Any help would be appreciated.
> > > > > >
> > > > > > Regards,
> > > > > > Ahmed.
> > > > > >

Re: [gmx-users] Problem compiling Gromacs 4.6.3 with CUDA

2013-11-08 Thread Mark Abraham
On Fri, Nov 8, 2013 at 12:02 AM, Jones de Andrade wrote:

> Really?


Of course. With OpenMP it gets to use all your cores for PME+bondeds+stuff
while the GPU does PP. Any version without openmp gets to use one core per
domain, which is bad.


> An what about gcc+mpi? should I expect any improvement?
>

Run how and compared with what? Using an external MPI library within a
single node is a complete waste of time compared with the alternatives
(thread-MPI, OpenMP, or both).

Mark
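
[Editor's note: a sketch of the gcc + OpenMP (+ thread-MPI) single-node GPU configure that this advice implies; the compiler names and CUDA path are assumptions to adapt locally.]

```shell
export CC=gcc CXX=g++
cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
         -DGMX_OPENMP=ON -DGMX_THREAD_MPI=ON
```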


>
>
> On Thu, Nov 7, 2013 at 6:51 PM, Mark Abraham  >wrote:
>
> > You will do much better with gcc+openmp than icc-openmp!
> >
> > Mark
> >
> >
> > On Thu, Nov 7, 2013 at 9:17 PM, Jones de Andrade  > >wrote:
> >
> > > Did it a few days ago. Not so much of a problem here.
> > >
> > > But I compiled everything, including fftw, with it. The only error I
> got
> > > was that I should turn off the separable compilation, and that the user
> > > must be in the group video.
> > >
> > > My settings are (yes, I know it should go better with openmp, but
> openmp
> > > goes horrobly in our cluster, I don't know why):
> > >
> > > setenv CC  "/opt/intel/bin/icc"
> > > setenv CXX "/opt/intel/bin/icpc"
> > > setenv F77 "/opt/intel/bin/ifort"
> > > setenv CMAKE_PREFIX_PATH /storage/home/johannes/lib/fftw/vanilla/
> > > mkdir build
> > > cd build
> > > cmake .. -DGMX_GPU=ON -DCUDA_SEPARABLE_COMPILATION=OFF
> > > -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_OPENMP=OFF -DGMX_MPI=ON
> > > -DGMX_THREAD_MPI=OFF -DMPIEXEC_MAX_NUMPROCS=1024
> -DBUILD_SHARED_LIBS=OFF
> > > -DGMX_PREFER_STATIC_LIBS=ON
> > > -DCMAKE_INSTALL_PREFIX=/storage/home/johannes/bin/gromacs/vanilla/
> > > make
> > > make install
> > > cd ..
> > > rm -rf build
> > >
> > >
> > > On Thu, Nov 7, 2013 at 3:02 PM, Mark Abraham wrote:
> > >
> > > > icc and CUDA is pretty painful. I'd suggest getting latest gcc.
> > > >
> > > > Mark
> > > >
> > > >
> > > > On Thu, Nov 7, 2013 at 2:42 PM,  wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'm having trouble compiling v 4.6.3 with GPU support using CUDA
> > > 5.5.22.
> > > > >
> > > > > The configuration runs okay and I have made sure that I have set
> > paths
> > > > > correctly.
> > > > >
> > > > > I'm getting errors:
> > > > >
> > > > > $ make
> > > > > [  0%] Building NVCC (Device) object
> > > > >
> > > >
> > >
> >
> src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
> > > > > icc: command line warning #10006: ignoring unknown option
> > '-dumpspecs'
> > > > > /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In
> > > > > function `_start':
> > > > > (.text+0x20): undefined reference to `main'
> > > > > CMake Error at cuda_tools_generated_pmalloc_cuda.cu.o.cmake:206
> > > > (message):
> > > > >   Error generating
> > > > >
> > > > >
> > > >
> > >
> >
> /apps/src/gromacs/gromacs-4.6.3/src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir//./cuda_tools_generated_pmalloc_cuda.cu.o
> > > > >
> > > > >
> > > > > make[2]: ***
> > > > >
> > > >
> > >
> >
> [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/./cuda_tools_generated_pmalloc_cuda.cu.o]
> > > > > Error 1
> > > > > make[1]: *** [src/gmxlib/cuda_tools/CMakeFiles/cuda_tools.dir/all]
> > > Error
> > > > 2
> > > > > make: *** [all] Error 2
> > > > >
> > > > > Any help would be appreciated.
> > > > >
> > > > > Regards,
> > > > > Ahmed.
> > > > >
> > > > > --
> > > > > Scanned by iCritical.
> > > > >

Re: [gmx-users] mpi segmentation error in continuation of REMD simulation with gromacs 4.5.5

2013-11-08 Thread Mark Abraham
Hi,

That shouldn't happen if your MPI library is working (have you tested it
with other programs?) and configured properly. It's possible this is a
known bug, so please let us know if you can reproduce it in the latest
releases.

Mark


On Fri, Nov 8, 2013 at 6:55 AM, Qin Qiao  wrote:

> Dear all,
>
> I'm trying to continue a REMD simulation using GROMACS 4.5.5 in the NPT
> ensemble, and I got the following errors when I tried to use 2 cores per
> replica:
>
> "[node-ib-4.local:mpi_rank_25][error_sighandler] Caught error: Segmentation
> fault (signal 11)
> [node-ib-13.local:mpi_rank_63][error_sighandler] Caught error: Segmentation
> fault (signal 11)
> ...
> "
>
> Surprisingly, it worked fine when I tried to use only 1 core per replica.
> I have no idea what caused the problem. Could you give me some advice?
>
> ps. the command I used is
> "srun .../gromacs-4.5.5-mpi-slurm/bin/mdrun_infiniband -s remd_.tpr -multi
> 48 -replex 1000 -deffnm remd_ -cpi remd_.cpt -append"
>
> Best
> Qin


Re: [gmx-users] TFE-water simulation

2013-11-08 Thread João Henriques
Hello again,

That depends on the peptide; there is no general answer. I am starting with
linear conformations, but that's because I'm working with intrinsically
disordered proteins. That's as far as I can go in telling you what I'm
doing; I'm not at liberty to discuss the details, as the work isn't public
yet, sorry.

Best regards,
João


On Fri, Nov 8, 2013 at 11:32 AM, Archana Sonawani-Jagtap <
ask.arch...@gmail.com> wrote:

> Should I start with helical peptides and see if they maintain their
> helicity, or can I start with a random coil?
>
> Do random-coil peptides take a long simulation time to form helices?
>
> Any help on this will be appreciated.
>
>
> On Tue, Nov 5, 2013 at 12:25 AM, Archana Sonawani-Jagtap <
> ask.arch...@gmail.com> wrote:
>
> > Thanks Joao Henriques for helping me with the steps.
> > On Nov 4, 2013 3:18 PM, "João Henriques" wrote:
> >
> >> Hello Archana,
> >>
> >> I'm also toying with a TFE-water system, therefore I am also a newbie.
> >> This
> >> is what I am doing, I hope it helps:
> >>
> >> 1) Since I'm using G54A7 I created a TFE.itp using GROMOS parameters (I
> >> don't use PRODRG; see why in DOI: 10.1021/ci100335w).
> >> 2) Do the math and check how many molecules of TFE you're going to need
> >> for
> >> a given v/v TFE-water ratio and a given simulation box volume.
> >> 3) Build box with the correct size.
> >> 4) Randomly insert correct number of TFE molecules.
> >> 5) Solvate.
> >> 6) Insert protein.
> >>
> >> Hopefully, the numbers of TFE and water molecules that get deleted when
> >> inserting the protein in the final step will be proportional, given that
> >> the TFE molecules are well distributed.
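The arithmetic in step 2 can be sketched as follows (the densities, molar masses, and box size below are illustrative assumptions; the actual solvation step will not reproduce these counts exactly):

```python
AVOGADRO = 6.02214e23

def molecules_for_ratio(box_nm3, tfe_volume_fraction,
                        rho_tfe=1.393, m_tfe=100.04,   # TFE: g/cm^3, g/mol
                        rho_wat=0.997, m_wat=18.015):  # water near 298 K
    """Estimate TFE and water molecule counts for a v/v ratio in a box."""
    box_cm3 = box_nm3 * 1e-21                  # 1 nm^3 = 1e-21 cm^3
    v_tfe = box_cm3 * tfe_volume_fraction      # volume assigned to TFE
    v_wat = box_cm3 - v_tfe                    # remainder goes to water
    n_tfe = round(v_tfe * rho_tfe / m_tfe * AVOGADRO)
    n_wat = round(v_wat * rho_wat / m_wat * AVOGADRO)
    return n_tfe, n_wat

# 1:1 v/v TFE-water in a 5 x 5 x 5 nm box
n_tfe, n_wat = molecules_for_ratio(125.0, 0.5)
print(n_tfe, n_wat)   # roughly 524 TFE and roughly 2083 waters
```

Any protein inserted afterwards displaces some of both species, which is why the final ratio drifts from the target.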
> >>
> >> I've tried many different ways of doing this and it's always impossible
> to
> >> maintain a perfect TFE-water ratio, no matter the order and manner of
> >> insertion of each system component. I've also never been able to insert
> >> the
> >> correct number of waters after the TFE. My calculations predict a higher
> >> number, but the solvation algorithm can't find enough space for them.
> >>
> >> In sum, either you place each molecule by hand and spend a lifetime
> >> building the system, or you just make a few compromises and deal with it.
> >> I ended up going with the latter, as I have a limited amount of time on
> >> my hands and I am aware of the approximations I am making.
> >>
> >> Best regards,
> >>
> >> João Henriques
> >> 
> >> PhD student
> >> Division of Theoretical Chemistry
> >> Lund University
> >> Lund, Sweden
> >> 
> >> joao.henriq...@teokem.lu.se
> >> http://www.teokem.lu.se/~joaoh/
> >>
> >>
> >> On Thu, Oct 24, 2013 at 7:15 PM, Justin Lemkul  wrote:
> >>
> >> >
> >> >
> >> > On 10/24/13 1:13 PM, Archana Sonawani-Jagtap wrote:
> >> >
> >> >> Dear Justin,
> >> >>
> >> >> I have not constructed the system myself; I downloaded it from the ATB
> >> >> website. To maintain the ratio of TFE and water molecules (1:1 v/v) in
> >> >> the system (I don't want to add extra water molecules) I tried many
> >> >> options in genbox, but it still adds 678 water molecules. Can you give
> >> >> me some hint?
> >> >>
> >> >>
> >> > Not without seeing your actual command(s).
> >> >
> >> >
> >> >> Is there a need to remove the periodicity of this pre-equilibrated
> >> >> system, as in the case of lipids?
> >> >>
> >> >>
> >> > No idea.  Are the molecules broken in the initial configuration?
> >> >
> >> > -Justin
> >> >
> >> > --
> >> > ==
> >> >
> >> >
> >> > Justin A. Lemkul, Ph.D.
> >> > Postdoctoral Fellow
> >> >
> >> > Department of Pharmaceutical Sciences
> >> > School of Pharmacy
> >> > Health Sciences Facility II, Room 601
> >> > University of Maryland, Baltimore
> >> > 20 Penn St.
> >> > Baltimore, MD 21201
> >> >
> >> > jalem...@outerbanks.umaryland.edu | (410) 706-7441
> >> >
> >> > ==
> >> >
> >
>
>
> --
> Archana Sonawani-Jagtap
> Senior Research Fellow,
> Bio

Re: [gmx-users] TFE-water simulation

2013-11-08 Thread Archana Sonawani-Jagtap
Should I start with helical peptides and see if they maintain their
helicity, or can I start with a random coil?

Do random-coil peptides take a long simulation time to form helices?

Any help on this will be appreciated.


On Tue, Nov 5, 2013 at 12:25 AM, Archana Sonawani-Jagtap <
ask.arch...@gmail.com> wrote:

> Thanks Joao Henriques for helping me with the steps.
> On Nov 4, 2013 3:18 PM, "João Henriques" wrote:
>
>> Hello Archana,
>>
>> I'm also toying with a TFE-water system, therefore I am also a newbie.
>> This
>> is what I am doing, I hope it helps:
>>
>> 1) Since I'm using G54A7 I created a TFE.itp using GROMOS parameters (I
>> don't use PRODRG; see why in DOI: 10.1021/ci100335w).
>> 2) Do the math and check how many molecules of TFE you're going to need
>> for
>> a given v/v TFE-water ratio and a given simulation box volume.
>> 3) Build box with the correct size.
>> 4) Randomly insert correct number of TFE molecules.
>> 5) Solvate.
>> 6) Insert protein.
>>
>> Hopefully, the numbers of TFE and water molecules that get deleted when
>> inserting the protein in the final step will be proportional, given that
>> the TFE molecules are well distributed.
>>
>> I've tried many different ways of doing this and it's always impossible to
>> maintain a perfect TFE-water ratio, no matter the order and manner of
>> insertion of each system component. I've also never been able to insert
>> the
>> correct number of waters after the TFE. My calculations predict a higher
>> number, but the solvation algorithm can't find enough space for them.
>>
>> In sum, either you place each molecule by hand and spend a lifetime
>> building the system, or you just make a few compromises and deal with it.
>> I ended up going with the latter, as I have a limited amount of time on
>> my hands and I am aware of the approximations I am making.
>>
>> Best regards,
>>
>> João Henriques
>> 
>> PhD student
>> Division of Theoretical Chemistry
>> Lund University
>> Lund, Sweden
>> 
>> joao.henriq...@teokem.lu.se
>> http://www.teokem.lu.se/~joaoh/
>>
>>
>> On Thu, Oct 24, 2013 at 7:15 PM, Justin Lemkul  wrote:
>>
>> >
>> >
>> > On 10/24/13 1:13 PM, Archana Sonawani-Jagtap wrote:
>> >
>> >> Dear Justin,
>> >>
>> >> I have not constructed the system myself; I downloaded it from the ATB
>> >> website. To maintain the ratio of TFE and water molecules (1:1 v/v) in
>> >> the system (I don't want to add extra water molecules) I tried many
>> >> options in genbox, but it still adds 678 water molecules. Can you give
>> >> me some hint?
>> >>
>> >>
>> > Not without seeing your actual command(s).
>> >
>> >
>> >> Is there a need to remove the periodicity of this pre-equilibrated
>> >> system, as in the case of lipids?
>> >>
>> >>
>> > No idea.  Are the molecules broken in the initial configuration?
>> >
>> > -Justin
>> >
>> > --
>> > ==
>> >
>> >
>> > Justin A. Lemkul, Ph.D.
>> > Postdoctoral Fellow
>> >
>> > Department of Pharmaceutical Sciences
>> > School of Pharmacy
>> > Health Sciences Facility II, Room 601
>> > University of Maryland, Baltimore
>> > 20 Penn St.
>> > Baltimore, MD 21201
>> >
>> > jalem...@outerbanks.umaryland.edu | (410) 706-7441
>> >
>> > ==
>> >
>


-- 
Archana Sonawani-Jagtap
Senior Research Fellow,
Biomedical Informatics Centre,
NIRRH (ICMR), Parel
Mumbai, India.
9960791339


Re: [gmx-users] after using ACPYPE , GROMACS OPLS itp file generated an atom type like opls_x with mass 0.000

2013-11-08 Thread Alan
Hi, this feature is really, really experimental and should indeed be
avoided. If OPLS is what you want, then give
http://www.aribeiro.net.br/mktop/ a try.
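For anyone hitting the same issue, a quick way to flag the affected atoms before using such a topology is to scan the .itp's [ atoms ] section for zero masses. A minimal sketch (the whitespace-separated column layout is an assumption based on the excerpt below):

```python
def zero_mass_atoms(itp_text):
    """Return (nr, type, atom_name) for atoms with mass 0 in [ atoms ]."""
    bad, in_atoms = [], False
    for line in itp_text.splitlines():
        stripped = line.split(';')[0].strip()   # drop trailing comments
        if stripped.startswith('['):
            # track whether we are inside the [ atoms ] directive
            in_atoms = stripped.replace(' ', '') == '[atoms]'
            continue
        if in_atoms and stripped:
            # expected columns: nr type resi res atom cgnr charge mass
            fields = stripped.split()
            if len(fields) >= 8 and float(fields[7]) == 0.0:
                bad.append((int(fields[0]), fields[1], fields[4]))
    return bad

sample = """[ atoms ]
 1 opls_145 1 LIG C  1 -0.117500 12.01100 ; qtot -0.118 CA
 7 opls_x   1 LIG C6 7 -0.099200  0.00000 ; qtot -0.777 x
"""
print(zero_mass_atoms(sample))   # -> [(7, 'opls_x', 'C6')]
```

Anything this reports needs a proper atom type and mass before grompp will produce a usable run.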

Alan


On 8 November 2013 04:42, aditya sarma  wrote:

> Hi,
> I was trying to generate a topology for a p-phenylene vinylene polymer for
> the OPLS force field using acpype. The .itp file I got has the atom type
> opls_x with mass 0.00. Is there any way to rectify this?
>
> After reading through how acpype works, I found out this was one of the
> possible errors, but there was no solution for it.
>
> This is a part of the itp file generated:
>
>  [ atoms ]
> ;  nr  type      resi  res  atom  cgnr  charge     mass      ; qtot    bond_type
>    1   opls_145  1     LIG  C      1    -0.117500  12.01100  ; qtot -0.118  CA
>    2   opls_145  1     LIG  C1     2    -0.055800  12.01100  ; qtot -0.173  CA
>    3   opls_145  1     LIG  C2     3    -0.117500  12.01100  ; qtot -0.291  CA
>    4   opls_145  1     LIG  C3     4    -0.131000  12.01100  ; qtot -0.422  CA
>    5   opls_145  1     LIG  C4     5    -0.125000  12.01100  ; qtot -0.547  CA
>    6   opls_145  1     LIG  C5     6    -0.131000  12.01100  ; qtot -0.678  CA
>    7   opls_x    1     LIG  C6     7    -0.099200   0.00000  ; qtot -0.777  x
>    8   opls_x    1     LIG  C7     8    -0.105200   0.00000  ; qtot -0.882  x
>    9   opls_145  1     LIG  C8     9    -0.048800  12.01100  ; qtot -0.931  CA
>   10   opls_145  1     LIG  C9    10    -0.119500  12.01100  ; qtot -1.051  CA
>   11   opls_145  1     LIG  C10   11    -0.118500  12.01100  ; qtot -1.169  CA
>   12   opls_145  1     LIG  C11   12    -0.051800  12.01100  ; qtot -1.221  CA
>   13   opls_145  1     LIG  C12   13    -0.118500  12.01100  ; qtot -1.339  CA
>   14   opls_145  1     LIG  C13   14    -0.119500  12.01100  ; qtot -1.459  CA
>   15   opls_x    1     LIG  C14   15    -0.101200   0.00000  ; qtot -1.560  x
>   16   opls_x    1     LIG  C15   16    -0.103200   0.00000  ; qtot -1.663  x
>   17   opls_145  1     LIG  C16   17    -0.049800  12.01100  ; qtot -1.713  CA
>   18   opls_145  1     LIG  C17   18    -0.119500  12.01100  ; qtot -1.833  CA
>   19   opls_145  1     LIG  C18   19    -0.119000  12.01100  ; qtot -1.952  CA
>   20   opls_145  1     LIG  C19   20    -0.050800  12.01100  ; qtot -2.002  CA
>   21   opls_145  1     LIG  C20   21    -0.119000  12.01100  ; qtot -2.121  CA
>   22   opls_145  1     LIG  C21   22    -0.119500  12.01100  ; qtot -2.241  CA
>   23   opls_x    1     LIG  C22   23    -0.102200   0.00000  ; qtot -2.343  x
>   24   opls_x    1     LIG  C23   24    -0.102200   0.00000  ; qtot -2.445  x
>   25   opls_145  1     LIG  C24   25    -0.050800  12.01100  ; qtot -2.496  CA
>   26   opls_145  1     LIG  C25   26    -0.119000  12.01100  ; qtot -2.615  CA
>   27   opls_145  1     LIG  C26   27    -0.119000  12.01100  ; qtot -2.734  CA
>   28   opls_145  1     LIG  C27   28    -0.050800  12.01100  ; qtot -2.785  CA
>   29   opls_145  1     LIG  C28   29    -0.119000  12.01100  ; qtot -2.904  CA
>   30   opls_145  1     LIG  C29   30    -0.119000  12.01100  ; qtot -3.023  CA
>   31   opls_x    1     LIG  C30   31    -0.102200   0.00000  ; qtot -3.125  x
>   32   opls_x    1     LIG  C31   32    -0.102200   0.00000  ; qtot -3.227  x
>   33   opls_145  1     LIG  C32   33    -0.050800  12.01100  ; qtot -3.278  CA
>   34   opls_145  1     LIG  C33   34    -0.119000  12.01100  ; qtot -3.397  CA
>   35   opls_145  1     LIG  C34   35    -0.119000  12.01100  ; qtot -3.516  CA
>   36   opls_145  1     LIG  C35   36    -0.050800  12.01100  ; qtot -3.567  CA
>   37   opls_145  1     LIG  C36   37    -0.119000  12.01100  ; qtot -3.686  CA
>   38   opls_145  1     LIG  C37   38    -0.119000  12.01100  ; qtot -3.805  CA
>   39   opls_x    1     LIG  C38   39    -0.102200   0.00000  ; qtot -3.907  x
>   40   opls_x    1     LIG  C39   40    -0.102200   0.00000  ; qtot -4.009  x
>   41   opls_145  1     LIG  C40   41    -0.050800  12.01100  ; qtot -4.060  CA
>   42   opls_145  1     LIG  C41   42    -0.119000  12.01100  ; qtot -4.179  CA
>   43   opls_145  1     LIG  C42   43    -0.119500  12.01100  ; qtot -4.299  CA
>   44   opls_145  1     LIG  C43   44    -0.049800  12.01100  ; qtot -4.348  CA
>   45   opls_145  1     LIG  C44   45    -0.119500  12.01100  ; qtot -4.468  CA
>   46   opls_145  1     LIG  C45   46    -0.119000  12.01100  ; qtot -4.587  CA
>   47   opls_x    1     LIG  C46   47    -0.103200   0.00000  ; qtot -4.690  x
>   48   opls_x    1     LIG  C47   48    -0.101200   0.00000  ; qtot -4.791  x

Re: [gmx-users] Question about make_ndx and g_angle

2013-11-08 Thread Riccardo Concu
Dear Changwoon,
why are you using ./make_ndx instead of make_ndx? Did you try to use the
command without ./?
Regards
On Thu, 07-11-2013 at 14:12 -0800, Chang Woon Jang wrote:
> Dear Users, 
> 
> I am using openSUSE 12.3 and am trying to use make_ndx and g_angle. When I
> try the following command, there is an error message:
> 
> > ./make_ndx -f data.pdb
> 
> ./make_ndx: error while loading shared libraries: libcudart.so.4:cannot open 
> shared object file: No such file or directory
> 
> Do I need the CUDA library in order to use "make_ndx" and "g_angle"?
> 
> Thanks. 
> 
> Best regards,
> Changwoon Jang
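For what it's worth, that error usually means the dynamic linker cannot find the CUDA runtime the binaries were linked against (a GPU-enabled build links libcudart even into the analysis tools). A typical way to check and work around it; the CUDA path below is only an example, use your actual install location:

```shell
# List the shared libraries the binary cannot resolve
ldd ./make_ndx | grep "not found"

# Point the loader at the CUDA runtime, then retry
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
./make_ndx -f data.pdb
```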

