Re: [gmx-users] Error: Atomtype CH2 not found

2019-12-15 Thread paul buscemi
Thank you for the clarification. There’s a good chance there is more to this 
scenario, but I recall making polymers (with Avogadro), using pdb2gmx to 
create the top (then the itp), and sometimes having the same error the OP 
mentioned. The errors were eliminated by adding the unfound type to 
atomname2type.n2t.
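
For reference, each line in that file maps an element to an atom type by its 
bonding pattern: element, type name, charge, mass, number of bonds, then one 
(element, bond length) pair per bond. A minimal sketch of an added entry, with 
illustrative numbers rather than validated parameters:

; elem  type  charge  mass    nbonds  bonded elements and distances (nm)
C       CH2   0.000   14.027  2       C 0.153   C 0.153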

I should be able to post the sequence of events if you think it useful to 
others.

Paul
> On Dec 15, 2019, at 11:39 AM, Justin Lemkul  wrote:
> 
> 
> 
> On 12/14/19 1:14 PM, Paul Buscemi wrote:
>> If the itp is correct, then modify atomnames2types to add the correct bonds 
>> and bond lengths.
> 
> The atomname2type.n2t file is only relevant with x2top, but the OP already 
> has a topology, so this is not the issue. If you mean the .atp file, then 
> this too is not relevant because pdb2gmx is the only program that reads this.
> 
> -Justin
> 
>> PB
>> 
>>> On Dec 13, 2019, at 7:52 PM, Justin Lemkul  wrote:
>>> 
>>> 
>>> 
>>>> On 12/13/19 2:18 AM, Muthusankar wrote:
>>>> Dear Gromacs users,
>>>> I am simulating a protein-ligand complex and performing the grompp command
>>>> before adding ions to the system. I got the error.
>>>> Fatal error: (file: ligand.itp)
>>>> Atomtype CH2 not found.
>>>> Command used:
>>>> gmx grompp -f ions.mdp -c protein_box.gro -p protein.top -o ions.tpr
>>>> 
>>>> Please guide me on how to rectify the problem.
>>>> 
>>> This means you're trying to use an atom type that your force field doesn't 
>>> recognize. Either you're mixing and matching force fields (never do this) 
>>> or your ligand topology relies on new atom types that should be introduced 
>>> into the force field, in which case the source of the ligand topology 
>>> (server, etc.) should provide that information.
>>> 
>>> -Justin
>>> 
>>> -- 
>>> ==
>>> 
>>> Justin A. Lemkul, Ph.D.
>>> Assistant Professor
>>> Office: 301 Fralin Hall
>>> Lab: 303 Engel Hall
>>> 
>>> Virginia Tech Department of Biochemistry
>>> 340 West Campus Dr.
>>> Blacksburg, VA 24061
>>> 
>>> jalem...@vt.edu | (540) 231-3129
>>> http://www.thelemkullab.com
>>> 
>>> ==
>>> 
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> ==
> 

Re: [gmx-users] Young's modulus

2019-12-15 Thread Paul Buscemi
Use surface tension in one direction and measure the increase in box size in 
that direction for the Young's modulus (YM). Restrain the bottom layer and 
apply surface tension (ST) to the top for the shear modulus (SM).
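
In .mdp terms, one way to apply a directional stress is anisotropic pressure 
coupling; a sketch assuming a Parrinello-Rahman barostat, where the reference 
stresses and compressibilities are placeholders to tune for your material:

pcoupl           = Parrinello-Rahman
pcoupltype       = anisotropic
; xx yy zz xy/xz/yz reference pressures (bar); negative zz = tension along z
ref_p            = 1.0  1.0  -100.0  0  0  0
compressibility  = 4.5e-5  4.5e-5  4.5e-5  0  0  0
tau_p            = 5.0

The modulus then follows from the slope of the applied stress against the 
measured strain in that direction.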

PB

> On Dec 15, 2019, at 3:25 AM, David van der Spoel  wrote:
> 
>> On 2019-12-15 at 09:09, Iman Katouzian wrote:
>> Good day,
>> How can I calculate Young and shear modulus using GROMACS package?
>> Thanks.
> Gromacs is typically used for liquid state simulations although it is 
> possible to simulate solid state as well. For the liquid state you can 
> compute the related compressibility, see e.g. Eqn 11 in
> J. Chem. Theor. Comput. 8 pp. 61-74 (2012)
> This can likely be extended to solids as well.
> 
> -- 
> David van der Spoel, Ph.D., Professor of Biology
> Head of Department, Cell & Molecular Biology, Uppsala University.
> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> http://www.icm.uu.se


Re: [gmx-users] Problem with GROMAC 2019.4

2019-12-14 Thread Paul Buscemi
Great!  What magic did you conjure up?

PB

> On Dec 13, 2019, at 12:56 PM, Avi Hundal  wrote:
> 
> Hi Paul,
> 
> Yes sir, I am up and running.  Thank you all!
> 
> Regards,
> 
> Avneel S. Hundal
> 
> Email: havn...@gmail.com
> 
> 
>> On Sun, Dec 8, 2019 at 3:48 AM Paul Buscemi  wrote:
>> 
>> Are you up and running?
>> 
>> PB
>> 
>>> On Dec 8, 2019, at 12:46 AM, Avi Hundal  wrote:
>>> 
>>> Hey Paul,
>>> 
>>> Thanks for looking out.  I found that out the hard way, I have gcc 8.0
>>> installed and configured it with '-config gcc'.
>>> 
>>> Regards,
>>> 
>>> Avneel S. Hundal
>>> 
>>> Email: havn...@gmail.com
>>> 
>>> 
>>>> On Sat, Dec 7, 2019 at 6:39 PM Paul Buscemi  wrote:
>>>> 
>>>> A shot in the dark: CUDA may not work with the latest version of gcc; there
>>>> is literature on this issue. Try the repository version of the CUDA toolkit.
>>>> 
>>>> PB
>>>> 
>>>>> On Dec 4, 2019, at 4:25 AM, Christian Blau  wrote:


Re: [gmx-users] Error: Atomtype CH2 not found

2019-12-14 Thread Paul Buscemi
If the itp is correct, then modify atomnames2types to add the correct bonds and 
bond lengths.
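
As Justin notes below, if the type is genuinely new it also has to be declared 
under [ atomtypes ] in the force field before grompp will accept it. A minimal 
sketch of such a declaration; the numbers are placeholders, not validated 
parameters:

[ atomtypes ]
; name  at.num  mass     charge  ptype  sigma (nm)  epsilon (kJ/mol)
CH2     6       14.0270  0.000   A      3.905e-01   4.937e-01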

PB

> On Dec 13, 2019, at 7:52 PM, Justin Lemkul  wrote:
> 
> 
> 
>> On 12/13/19 2:18 AM, Muthusankar wrote:
>> Dear Gromacs users,
>> I am simulating a protein-ligand complex and performing the grompp command
>> before adding ions to the system. I got the error.
>> Fatal error: (file: ligand.itp)
>> Atomtype CH2 not found.
>> Command used:
>> gmx grompp -f ions.mdp -c protein_box.gro -p protein.top -o ions.tpr
>> 
>> Please guide me on how to rectify the problem.
>> 
> 
> This means you're trying to use an atom type that your force field doesn't 
> recognize. Either you're mixing and matching force fields (never do this) or 
> your ligand topology relies on new atom types that should be introduced into 
> the force field, in which case the source of the ligand topology (server, 
> etc.) should provide that information.
> 
> -Justin
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> ==
> 


Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread Paul Buscemi
What does nvidia-smi tell you?
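
If the driver is healthy, the card shows up there along with its utilization; a 
quick check while mdrun is running (standard NVIDIA tool, output varies by 
driver):

$ nvidia-smi              # card present, driver/CUDA version, GPU utilization
$ watch -n 1 nvidia-smi   # refresh once per second during the run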

PB

> On Dec 12, 2019, at 7:30 AM, John Whittaker  
> wrote:
> 
> Hi,
> 
>> Hi Users.
>> 
>> I am simulating a peptide of 40 residues with small molecules using the OPLS-AA
>> ff in Gromacs 2018.20 installed in a CUDA environment. The workstation has
>> 16 cores and two 1080 Ti cards. On execution of the command gmx_mpi mdrun -v
>> -deffnm
>> md for 100 ns, it shows no usage of the GPU card. For the command gmx_mpi mdrun
>> -v -deffnm md -gputasks 01 -nb gpu, the job is terminated with the note
>> "NB interaction on the gpu were required but not supported for these
>> simulation settings. Change your settings or do not require using gpus."
> 
> You should provide the content of your .mdp file. According to the error
> message, some of your settings are not compatible with GPU acceleration.
> Without some information about your settings, no one can really help.
> 
> - John
> 
>> 
>> Could anyone explain a solution on this issue?
>> 
>> Thank you
>> 
>> --
>> Regards,
>> Rahul


Re: [gmx-users] Regarding system shrunk

2019-12-09 Thread Paul Buscemi


You did not mention the type of surface, but in real life an extruded polymer 
is under stress and you must restrain the ends. Also as in real life, a heated, 
uncrosslinked polymer will shrink. The system probably behaved appropriately.
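
A restraint of the sort meant here goes in the polymer's .itp as a 
[ position_restraints ] block on the terminal atoms; a sketch in which the atom 
indices and force constants are placeholders for your own chain ends:

[ position_restraints ]
; atom  functype  fcx   fcy   fcz   (kJ mol^-1 nm^-2)
    1      1      1000  1000  1000
  250      1      1000  1000  1000

grompp then reads the reference coordinates for the restraints via -r.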

PB

> On Dec 9, 2019, at 11:31 AM, Mijiddorj B  wrote:
> 
> Dear GMX users,
> 
> I am studying the interaction between a polymer and a surface. The polymer
> parameters were prepared by CGenFF, and the parameters of the surface were
> taken from INTERFACE FF. I performed short simulations; however, the system
> shrank. I used the standard mdp file of the CHARMM-GUI membrane builder,
> except for smaller time steps and no constraints.
> 
> If you have any experience, please advise me on how to solve this problem.
> 
> Best regards,
> 
> Mijiddorj


Re: [gmx-users] Problem with GROMAC 2019.4

2019-12-08 Thread Paul Buscemi
Are you up and running?

PB

> On Dec 8, 2019, at 12:46 AM, Avi Hundal  wrote:
> 
> Hey Paul,
> 
> Thanks for looking out.  I found that out the hard way, I have gcc 8.0
> installed and configured it with '-config gcc'.
> 
> Regards,
> 
> Avneel S. Hundal
> 
> Email: havn...@gmail.com
> 
> 
>> On Sat, Dec 7, 2019 at 6:39 PM Paul Buscemi  wrote:
>> 
>> A shot in the dark: CUDA may not work with the latest version of gcc; there is
>> literature on this issue. Try the repository version of the CUDA toolkit.
>> 
>> PB
>> 
>>> On Dec 4, 2019, at 4:25 AM, Christian Blau  wrote:
>>> 
>>> HI Avneel,
>>> 
>>> 
>>> In general, using the latest stable version is always the first thing to
>> recommend, because this is where your issue might have already been fixed.
>>> 
>>> 
>>> Do you also get the same "unsafe srcdir value" when running make? It'd
>> be interesting to know what system you are using.
>>> 
>>> 
>>> I assume you tried a bunch of things already in the past two weeks, but
>> on the chance that  you did not already you might try:
>>> 
>>>  - try building in a directory with no whitespaces or other special
>> characters in the directory name (usually should not be an issue, but
>> "unsafe srcdir value" hints at this)
>>> 
>>>  - using ninja as a build system (use cmake -GNinja , then type ninja
>> instead of make)
>>> 
>>>  - trying different fft libraries (fftw, mkl, fftpack)
>>> 
>>> 
>>> Also, it'd be great to hear back to know what solved the issue.
>>> 
>>> 
>>> Best,
>>> 
>>> Christian
>>> 
>>>> On 2019-12-04 08:42, Avi Hundal wrote:
>>>> Hi all,
>>>> 
>>>> I've been trying to get GROMAC 2019.4 to work on my computer with GPU
>>>> acceleration for over 2 weeks, without success and without trying to
>> bother
>>>> you all here.  Scouring the archives, someone else had the same exact
>> issue
>>>> I have (
>>>> 
>> https://www.mail-archive.com/gromacs.org_gmx-users@maillist.sys.kth.se/msg33675.html
>>>> ).
>>>> 
>>>> The problem occurs after entering "make".  I haven't been able to find a
>>>> solution, and the person who previously had this problem was suggested
>> to
>>>> upgrade to the newest stable version at that time.  Any suggestions?
>> Thank
>>>> you!
>>>> 
>>>> Regards,
>>>> 
>>>> Avneel S. Hundal
>>>> 
>>>> Email: havn...@gmail.com


Re: [gmx-users] Problem with GROMAC 2019.4

2019-12-07 Thread Paul Buscemi
A shot in the dark: CUDA may not work with the latest version of gcc; there is 
literature on this issue. Try the repository version of the CUDA toolkit.
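
On a Debian-based distribution that would look roughly like this (the package 
name and the pinned gcc version are assumptions; check which gcc your CUDA 
release supports):

$ sudo apt install nvidia-cuda-toolkit        # distribution-packaged CUDA
$ cmake .. -DGMX_GPU=on -DGMX_BUILD_OWN_FFTW=ON \
        -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7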

PB

> On Dec 4, 2019, at 4:25 AM, Christian Blau  wrote:
> 
> HI Avneel,
> 
> 
> In general, using the latest stable version is always the first thing to 
> recommend, because this is where your issue might have already been fixed.
> 
> 
> Do you also get the same "unsafe srcdir value" when running make? It'd be 
> interesting to know what system you are using.
> 
> 
> I assume you tried a bunch of things already in the past two weeks, but on 
> the chance that  you did not already you might try:
> 
>   - try building in a directory with no whitespaces or other special 
> characters in the directory name (usually should not be an issue, but "unsafe 
> srcdir value" hints at this)
> 
>   - using ninja as a build system (use cmake -GNinja , then type ninja 
> instead of make)
> 
>   - trying different fft libraries (fftw, mkl, fftpack)
> 
> 
> Also, it'd be great to hear back to know what solved the issue.
> 
> 
> Best,
> 
> Christian
> 
>> On 2019-12-04 08:42, Avi Hundal wrote:
>> Hi all,
>> 
>> I've been trying to get GROMAC 2019.4 to work on my computer with GPU
>> acceleration for over 2 weeks, without success and without trying to bother
>> you all here.  Scouring the archives, someone else had the same exact issue
>> I have (
>> https://www.mail-archive.com/gromacs.org_gmx-users@maillist.sys.kth.se/msg33675.html
>> ).
>> 
>> The problem occurs after entering "make".  I haven't been able to find a
>> solution, and the person who previously had this problem was suggested to
>> upgrade to the newest stable version at that time.  Any suggestions?  Thank
>> you!
>> 
>> Regards,
>> 
>> Avneel S. Hundal
>> 
>> Email: havn...@gmail.com


Re: [gmx-users] CA ions

2019-12-07 Thread Paul Buscemi
Can you not use pdb2gmx with a PDB of Ca? Then create the itp from the new top 
and add that as an include. You need to do the same with Cl for charge 
neutrality.
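
The -pname/-nname route mentioned below would look something like this (a 
sketch; the ion names and target concentration must match your force field and 
system):

$ gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr
$ gmx genion -s ions.tpr -o solvated_ions.gro -p topol.top \
             -pname CA -pq 2 -nname CL -nq -1 -conc 0.15 -neutral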

PB

> On Dec 6, 2019, at 7:34 PM, Tasneem Kausar  wrote:
> 
> You are going to the right way. There are more options given in gmx genion.
> Check -pname and -nname. These options will help you to select the name of
> positive and negative ions.
> 
> 
>> On Sat, 7 Dec 2019, 5:05 am Iman Katouzian,  wrote:
>> 
>> Good day,
>> 
>> I want to simulate my protein in GROMACS, and in this simulation I need to
>> add a certain concentration of CA ions to my system. However, I have no
>> idea how to do this: first I have to neutralize my system and then
>> add the needed CA ions, which act as binding agents (a necessary
>> factor in experiments) in my protein. I have heard that with -neutral and
>> -conc for adding a certain concentration I can do this.
>> I would appreciate it if somebody can help me with this issue.
>> 
>> Thanks.
>> --
>> 
>> Iman Katouzian
>>
>> Ph.D. candidate of Food Process Engineering
>>
>> Faculty of Food Science and Technology
>>
>> University of Agricultural Sciences and Natural Resources, Gorgan, Iran


Re: [gmx-users] c2075 is not detected by gmx

2019-11-27 Thread paul buscemi
Take a look at

https://docs.nvidia.com/cuda/cuda-memcheck/index.html

and make sure the GPU is functioning correctly with CUDA.
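
cuda-memcheck wraps an arbitrary binary, so you can run it against the toolkit's 
deviceQuery sample or against mdrun itself; a sketch (the sample path and run 
name are assumptions):

$ cuda-memcheck ./deviceQuery
$ cuda-memcheck gmx mdrun -nb gpu -deffnm nvt_5k -nsteps 1000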

> On Nov 24, 2019, at 4:32 PM, paul buscemi  wrote:
> 
> Did you install the CUDA toolkit and drivers?
> What is the output from "nvidia-smi"?
> 
>> On 24,Nov 2019, at 1:31 PM, Mahmood Naderan  wrote:
>> 
>> Hi
>> I have build 2018.3 in order to test that with c2075 GPU.
>> I used this command to build it
>> $ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single 
>> -DGMX_BUILD_OWN_FFTW=ON 
>> 
>> $ make
>> $ make install
>> 
>> I have to say that the device is detected according to deviceQuery. However, 
>> when I run 
>> 
>> 
>> $ gmx mdrun -nb gpu -v -deffnm nvt_5k
>> 
>> 
>> I get this error
>> 
>> Fatal error:
>> Cannot run short-ranged nonbonded interactions on a GPU because there is none
>> detected.
>> 
>> 
>> That is weird, because I also see this message
>> 
>> WARNING: An error occurred while sanity checking device #0; 
>> cudaErrorMemoryAllocation: out of memory
>> 
>> 
>> 
>> The device has 6GB of memory and I am sure that my input file doesn't need 
>> that, because I have run it on a GPU with 4GB of memory.
>> 
>> Any idea?
>> 
>> Regards,
>> Mahmood


Re: [gmx-users] c2075 is not detected by gmx

2019-11-24 Thread paul buscemi
Did you install the CUDA toolkit and drivers?
What is the output from "nvidia-smi"?

> On 24,Nov 2019, at 1:31 PM, Mahmood Naderan  wrote:
> 
> Hi
> I have build 2018.3 in order to test that with c2075 GPU.
> I used this command to build it
> $ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single 
> -DGMX_BUILD_OWN_FFTW=ON 
> 
> $ make
> $ make install
> 
> I have to say that the device is detected according to deviceQuery. However, 
> when I run 
> 
> 
> $ gmx mdrun -nb gpu -v -deffnm nvt_5k
> 
> 
> I get this error
> 
> Fatal error:
> Cannot run short-ranged nonbonded interactions on a GPU because there is none
> detected.
> 
> 
> That is weird, because I also see this message
> 
> WARNING: An error occurred while sanity checking device #0; 
> cudaErrorMemoryAllocation: out of memory
> 
> 
> 
> The device has 6GB of memory and I am sure that my input file doesn't need 
> that, because I have run it on a GPU with 4GB of memory.
> 
> Any idea?
> 
> Regards,
> Mahmood


Re: [gmx-users] How to build and simulate POPE/POPG bilayer with gromos54a7 force field

2019-11-04 Thread paul buscemi
You might try the ATB website:

http://atb.uq.edu.au/molecule.py?molid=368385#panel-md

You can create the lipid and submit your PDB, but most likely it already 
exists in their database. Use the modified GROMOS54A7 ff and the PDB and itp 
provided by ATB.
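
Once downloaded, the files are wired into the system topology roughly like this 
(the file names are placeholders for whatever ATB hands you):

; force-field include first, then the molecule topologies
#include "gromos54a7_atb.ff/forcefield.itp"
#include "POPE_atb.itp"
#include "POPG_atb.itp"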

Good luck !

Paul


> On Nov 4, 2019, at 8:05 AM, daniel depope  wrote:
> 
> I want to simulate POPE/POPG bilayer with gromos54a7 ff (expanded with
> lipids parts, as explained in Lemkul's kalp tutorial).
> 
> 1. I tried the CHARMM-GUI website - I got a bilayer in step 5, but can't finalize
> because there is no GROMOS ff option. So I looked for POPE and POPG
> gromos54a7 topologies, but with no success, so I am stuck. Any suggestions on how
> to proceed?
> 
> 2.  Another option was to build that bilayer with packmol, but I can't find
> a way to do so (I know how to build a homogeneous bilayer).
> 
> 3. The POPE molecule from the CHARMM-GUI website has 125 atoms. I have an old
> POPE molecule with only 52 atoms; the latter has only two hydrogen atoms. Can
> anyone explain why that is the case.
> 
> Obviously I missed some basics; I apologize for that, but I need
> clarification to move on.
> 
> Thanks


Re: [gmx-users] simulating glass materials using GROMACS

2019-10-12 Thread paul buscemi
Use this site for starters: https://erastova.xyz/
It will move you toward your goal.

Paul

> On Oct 12, 2019, at 2:03 PM, Alex Mathew  wrote:
> 
> Dear experts,
> I would like to simulate NASICON-type glass using GROMACS. The paper I
> referred to here used LAMMPS (https://pubs.acs.org/doi/abs/10.1021/jp5094349).
> How should I proceed for this kind of study with GROMACS?
> What kind of force field can I use in GROMACS? Can anyone provide a starting
> point toward the simulation of materials using GROMACS? (All the tutorials
> are devoted to biological molecules.)


Re: [gmx-users] Application of External Forces on Lipid Membrane

2019-09-06 Thread paul buscemi
Would the application of surface tension work for you?

> On 6,Sep 2019, at 5:45 PM, Shivam Suthendran  wrote:
> 
> Hi there,
> 
>  I just completed the tutorial for KALP in a DPPC membrane. I'm wondering
> how I would go about applying forces to the lipid membrane. Any and
> all input would be greatly appreciated. Thank you
> 
> -- 
> Shivam Suthendran
> 
> "Diamonds are forever. E-mail come close"


Re: [gmx-users] best performance on GPU

2019-08-04 Thread paul buscemi
Whoops: not "moo" (the iPhone took over) but "mpi".

> On Aug 2, 2019, at 5:09 PM, Paul Buscemi  wrote:
> 
> Why run moo on a single node ?
> 
> PB
> 
>> On Aug 1, 2019, at 5:53 PM, Mark Abraham  wrote:
>> 
>> Hi,
>> 
>> We can't tell whether or what the problem is without more information.
>> Please upload your .log file to a file sharing service and post a link.
>> 
>> Mark
>> 
>>> On Fri, 2 Aug 2019 at 01:05, Maryam  wrote:
>>> 
>>> Dear all
>>> I want to run a simulation of approximately 12000 atoms system in gromacs
>>> 2016.6 on GPU with the following machine structure:
>>> Precision: single Memory model: 64 bit MPI library: thread_mpi OpenMP
>>> support: enabled (GMX_OPENMP_MAX_THREADS = 32) GPU support: CUDA SIMD
>>> instructions: AVX2_256 FFT library:
>>> fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512 RDTSCP usage: enabled TNG
>>> support: enabled Hwloc support: disabled Tracing support: disabled Built
>>> on: Fri Jun 21 09:58:11 EDT 2019 Built by: julian@BioServer [CMAKE] Build
>>> OS/arch: Linux 4.15.0-52-generic x86_64 Build CPU vendor: AMD Build CPU
>>> brand: AMD Ryzen 7 1800X Eight-Core Processor Build CPU family: 23 Model: 1
>>> Stepping: 1
>>> Number of GPUs detected: 1 #0: NVIDIA GeForce RTX 2080 Ti, compute cap.:
>>> 7.5, ECC: no, stat: compatible
>>> I used different commands to get the best performance and I don't know which
>>> point I am missing. The quickest time possible is achieved by this command: gmx
>>> mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v,
>>> which gives 10 ns/day! and it takes 2 months to end.
>>> Though I used several commands to tune it, like: gmx mdrun -ntomp 6 -pin on
>>> -resethway -nstlist 20 -s md.tpr -deffnm md -cpi md.cpt -tunepme -cpt 15
>>> -append -gpu_id 0 -nb auto. On the GROMACS website it is mentioned that
>>> with these properties I should be able to run it at 295 ns/day!
>>> Could you help me find out what point I am missing, such that I cannot reach the
>>> best performance level?
>>> Thank you

Re: [gmx-users] simulation on 2 gpus

2019-08-03 Thread paul buscemi
Stefano,

A recent run with 14 atoms, including 1 isopropanol molecules, on top 
of an end-restrained PDMS surface of 74000 atoms in a 20 x 20 x 30 nm box, ran at 
67 ns/day NVT with the mdrun conditions I posted. It took 120 ns for 100 
molecules of an adsorbate to go from solution to the surface. I don't think 
this will set the world ablaze with any benchmarks, but it is acceptable to get 
some work done.

Linux Mint MATE 18, AMD Threadripper 2990WX 32-core at 4.2 GHz, 32 GB DDR4, 2x RTX 
2080 Ti, gmx 2019 in the simplest gmx configuration for GPUs, CUDA version 10, 
NVIDIA 410.7p loaded from the repository.

Paul

> On Aug 3, 2019, at 12:58 PM, paul buscemi  wrote:
> 
> Stefano,
> 
> Here is a typical run
> 
> for minimization: gmx mdrun -deffnm grofile -nb gpu
> 
> and for other runs for a 32 core
> 
> gmx mdrun -deffnm grofile.nvt -nb gpu -pme gpu -ntomp 8 -ntmpi 8 -npme 1 
> -gputasks   -pin on
> 
> Depending on the molecular system/model, -ntomp 4 -ntmpi 16 may be faster
>  - of course adjusting -gputasks
> 
> Rarely do I find that not using -ntomp and -ntmpi is faster, but it is never bad
> 
> Let me know how it goes.
> 
> Paul
> 
>> On Aug 3, 2019, at 4:41 AM, Stefano Guglielmo  
>> wrote:
>> 
>> Hi Paul,
>> thanks for the reply. Would you mind posting the command you used or
>> telling how you balanced the work between CPU and GPU?
>> 
>> What about pinning? Does anyone know how to deal with a cpu topology like
>> the one reported in my previous post and if it is relevant for performance?
>> Thanks
>> Stefano
>> 
>> Il giorno sabato 3 agosto 2019, Paul Buscemi  ha scritto:
>> 
>>> I run the same system and setup but no NVLink. Maestro runs both GPUs at
>>> 100 percent; Gromacs typically 50-60 percent, and can do 600 ns/d on 2
>>> atoms
>>> 
>>> PB
>>> 
>>>> On Jul 25, 2019, at 9:30 PM, Kevin Boyd  wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I've done a lot of research/experimentation on this, so I can maybe get
>>> you
>>>> started - if anyone has any questions about the essay to follow, feel
>>> free
>>>> to email me personally, and I'll link it to the email thread if it ends
>>> up
>>>> being pertinent.
>>>> 
>>>> First, there's some more internet resources to checkout. See Mark's talk
>>> at
>>>> -
>>>> https://bioexcel.eu/webinar-performance-tuning-and-
>>> optimization-of-gromacs/
>>>> Gromacs development moves fast, but a lot of it is still relevant.
>>>> 
>>>> I'll expand a bit here, with the caveat that Gromacs GPU development is
>>>> moving very fast and so the correct commands for optimal performance are
>>>> both system-dependent and a moving target between versions. This is a
>>> good
>>>> thing - GPUs have revolutionized the field, and with each iteration we
>>> make
>>>> better use of them. The downside is that it's unclear exactly what sort
>>> of
>>>> CPU-GPU balance you should look to purchase to take advantage of future
>>>> developments, though the trend is certainly that more and more
>>> computation
>>>> is being offloaded to the GPUs.
>>>> 
>>>> The most important consideration is that to get maximum total throughput
>>>> performance, you should be running not one but multiple simulations
>>>> simultaneously. You can do this through the -multidir option, but I don't
>>>> recommend that in this case, as it requires compiling with MPI and limits
>>>> some of your options. My run scripts usually use "gmx mdrun ... &" to
>>>> initiate subprocesses, with combinations of -ntomp, -ntmpi, -pin
>>>> -pinoffset, and -gputasks. I can give specific examples if you're
>>>> interested.
>>>> 
>>>> Another important point is that you can run more simulations than the
>>>> number of GPUs you have. Depending on CPU-GPU balance and quality, you
>>>> won't double your throughput by e.g. putting 4 simulations on 2 GPUs, but
>>>> you might increase it up to 1.5x. This would involve targeting the same
>>> GPU
>>>> with -gputasks.
>>>> 
>>>> Within a simulation, you should set up a benchmarking script to figure
>>> out
>>>> the best combination of thread-mpi ranks and open-mp threads - this can
>>>> have pretty drastic effects on performance.

Re: [gmx-users] simulation on 2 gpus

2019-08-03 Thread paul buscemi
Stefano,

Here is a typical run

for minimization: gmx mdrun -deffnm grofile -nb gpu

and for other runs for a 32 core

gmx mdrun -deffnm grofile.nvt -nb gpu -pme gpu -ntomp 8 -ntmpi 8 -npme 1 
-gputasks   -pin on
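
With 8 thread-MPI ranks (7 PP + 1 PME), the -gputasks map is an 8-digit string. 
A sketch of a pair of runs sharing a 32-core machine and two GPUs (task 
strings, thread counts, and pin offsets are assumptions to adapt):

$ gmx mdrun -deffnm run1.nvt -nb gpu -pme gpu -ntmpi 8 -ntomp 4 \
            -npme 1 -gputasks 00000000 -pin on -pinoffset 0  &
$ gmx mdrun -deffnm run2.nvt -nb gpu -pme gpu -ntmpi 8 -ntomp 4 \
            -npme 1 -gputasks 11111111 -pin on -pinoffset 32 &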

Depending on the molecular system/model, -ntomp 4 -ntmpi 16 may be faster 
- of course adjusting -gputasks.

Rarely do I find that not using -ntomp and -ntmpi is faster, but it is never bad.

Let me know how it goes.

Paul

> On Aug 3, 2019, at 4:41 AM, Stefano Guglielmo  
> wrote:
> 
> Hi Paul,
> thanks for the reply. Would you mind posting the command you used or
> telling how you balanced the work between CPU and GPU?
> 
> What about pinning? Does anyone know how to deal with a cpu topology like
> the one reported in my previous post and if it is relevant for performance?
> Thanks
> Stefano
> 
> Il giorno sabato 3 agosto 2019, Paul Buscemi  ha scritto:
> 
>> I run the same system and setup but no NVLink. Maestro runs both GPUs at
>> 100 percent; Gromacs typically 50-60 percent, and can do 600 ns/d on 2
>> atoms
>> 
>> PB
>> 
>>> On Jul 25, 2019, at 9:30 PM, Kevin Boyd  wrote:
>>> 
>>> Hi,
>>> 
>>> I've done a lot of research/experimentation on this, so I can maybe get
>> you
>>> started - if anyone has any questions about the essay to follow, feel
>> free
>>> to email me personally, and I'll link it to the email thread if it ends
>> up
>>> being pertinent.
>>> 
>>> First, there's some more internet resources to checkout. See Mark's talk
>> at
>>> -
>>> https://bioexcel.eu/webinar-performance-tuning-and-
>> optimization-of-gromacs/
>>> Gromacs development moves fast, but a lot of it is still relevant.
>>> 
>>> I'll expand a bit here, with the caveat that Gromacs GPU development is
>>> moving very fast and so the correct commands for optimal performance are
>>> both system-dependent and a moving target between versions. This is a
>> good
>>> thing - GPUs have revolutionized the field, and with each iteration we
>> make
>>> better use of them. The downside is that it's unclear exactly what sort
>> of
>>> CPU-GPU balance you should look to purchase to take advantage of future
>>> developments, though the trend is certainly that more and more
>> computation
>>> is being offloaded to the GPUs.
>>> 
>>> The most important consideration is that to get maximum total throughput
>>> performance, you should be running not one but multiple simulations
>>> simultaneously. You can do this through the -multidir option, but I don't
>>> recommend that in this case, as it requires compiling with MPI and limits
>>> some of your options. My run scripts usually use "gmx mdrun ... &" to
>>> initiate subprocesses, with combinations of -ntomp, -ntmpi, -pin
>>> -pinoffset, and -gputasks. I can give specific examples if you're
>>> interested.
>>> 
>>> Another important point is that you can run more simulations than the
>>> number of GPUs you have. Depending on CPU-GPU balance and quality, you
>>> won't double your throughput by e.g. putting 4 simulations on 2 GPUs, but
>>> you might increase it up to 1.5x. This would involve targeting the same
>> GPU
>>> with -gputasks.
>>> 
>>> Within a simulation, you should set up a benchmarking script to figure
>> out
>>> the best combination of thread-mpi ranks and open-mp threads - this can
>>> have pretty drastic effects on performance. For example, if you want to
>> use
>>> your entire machine for one simulation (not recommended for maximal
>> 
>> 
> 
> 
> -- 
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178


Re: [gmx-users] simulation on 2 gpus

2019-08-02 Thread Paul Buscemi
I run the same system and setup but no NVLink. Maestro runs both GPUs at 100 
percent; Gromacs typically 50-60 percent, and can do 600 ns/d on 2 atoms.

PB

> On Jul 25, 2019, at 9:30 PM, Kevin Boyd  wrote:
> 
> Hi,
> 
> I've done a lot of research/experimentation on this, so I can maybe get you
> started - if anyone has any questions about the essay to follow, feel free
> to email me personally, and I'll link it to the email thread if it ends up
> being pertinent.
> 
> First, there's some more internet resources to checkout. See Mark's talk at
> -
> https://bioexcel.eu/webinar-performance-tuning-and-optimization-of-gromacs/
> Gromacs development moves fast, but a lot of it is still relevant.
> 
> I'll expand a bit here, with the caveat that Gromacs GPU development is
> moving very fast and so the correct commands for optimal performance are
> both system-dependent and a moving target between versions. This is a good
> thing - GPUs have revolutionized the field, and with each iteration we make
> better use of them. The downside is that it's unclear exactly what sort of
> CPU-GPU balance you should look to purchase to take advantage of future
> developments, though the trend is certainly that more and more computation
> is being offloaded to the GPUs.
> 
> The most important consideration is that to get maximum total throughput
> performance, you should be running not one but multiple simulations
> simultaneously. You can do this through the -multidir option, but I don't
> recommend that in this case, as it requires compiling with MPI and limits
> some of your options. My run scripts usually use "gmx mdrun ... &" to
> initiate subprocesses, with combinations of -ntomp, -ntmpi, -pin
> -pinoffset, and -gputasks. I can give specific examples if you're
> interested.
> 
> Another important point is that you can run more simulations than the
> number of GPUs you have. Depending on CPU-GPU balance and quality, you
> won't double your throughput by e.g. putting 4 simulations on 2 GPUs, but
> you might increase it up to 1.5x. This would involve targeting the same GPU
> with -gputasks.
> 
> Within a simulation, you should set up a benchmarking script to figure out
> the best combination of thread-mpi ranks and open-mp threads - this can
> have pretty drastic effects on performance. For example, if you want to use
> your entire machine for one simulation (not recommended for maximal



Re: [gmx-users] best performance on GPU

2019-08-02 Thread Paul Buscemi
Why run moo on a single node ?

PB

> On Aug 1, 2019, at 5:53 PM, Mark Abraham  wrote:
> 
> Hi,
> 
> We can't tell whether or what the problem is without more information.
> Please upload your .log file to a file sharing service and post a link.
> 
> Mark
> 
>> On Fri, 2 Aug 2019 at 01:05, Maryam  wrote:
>> 
>> Dear all
>> I want to run a simulation of approximately 12000 atoms system in gromacs
>> 2016.6 on GPU with the following machine structure:
>> Precision: single Memory model: 64 bit MPI library: thread_mpi OpenMP
>> support: enabled (GMX_OPENMP_MAX_THREADS = 32) GPU support: CUDA SIMD
>> instructions: AVX2_256 FFT library:
>> fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512 RDTSCP usage: enabled TNG
>> support: enabled Hwloc support: disabled Tracing support: disabled Built
>> on: Fri Jun 21 09:58:11 EDT 2019 Built by: julian@BioServer [CMAKE] Build
>> OS/arch: Linux 4.15.0-52-generic x86_64 Build CPU vendor: AMD Build CPU
>> brand: AMD Ryzen 7 1800X Eight-Core Processor Build CPU family: 23 Model: 1
>> Stepping: 1
>> Number of GPUs detected: 1 #0: NVIDIA GeForce RTX 2080 Ti, compute cap.:
>> 7.5, ECC: no, stat: compatible
>> I used different commands to get the best performance and I don't know which
>> point I am missing. The quickest time possible is achieved by this command: gmx
>> mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v,
>> which gives 10 ns/day! and it takes 2 months to end.
>> Though I used several commands to tune it, like: gmx mdrun -ntomp 6 -pin on
>> -resethway -nstlist 20 -s md.tpr -deffnm md -cpi md.cpt -tunepme -cpt 15
>> -append -gpu_id 0 -nb auto. On the GROMACS website it is mentioned that
>> with these properties I should be able to run it at 295 ns/day!
>> Could you help me find out what point I am missing, such that I cannot reach the
>> best performance level?
>> Thank you


Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Paul Buscemi
Run with -maxwarn 1. If it runs, then there is a deeper problem; if it does not 
go, it's probably a typo. Bet it's the latter.
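
For the record, that flag belongs on grompp, and mdrun should then be pointed 
at the resulting .tpr explicitly; a sketch with placeholder file names:

$ gmx grompp -f nvt.mdp -c em.gro -p topol.top -o nvt.tpr -maxwarn 1
$ gmx mdrun -v -deffnm nvt   # finds nvt.tpr; with no -s/-deffnm it looks for topol.tpr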

PB

> On Aug 1, 2019, at 2:52 PM, Justin Lemkul  wrote:
> 
> 
> 
>> On 8/1/19 3:50 PM, Mohammed I Sorour wrote:
>> Dear Gromacs users,
>> 
>> I'm running MD simulations on a couple of DNA systems that vary only in
>> sequence. Most of the runs worked just fine, but surprisingly there is one
>> system for which I got an error in the NVT equilibration step.
>> I'm following the tutorial
>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/06_equil.html
>> 
>> Program: gmx mdrun, version 2016.3
>> Source file: src/gromacs/options/options.cpp (line 258)
>> Function:void gmx::Options::finish()
>> 
>> Error in user input:
>> Invalid input values
>>   In option s
>> Required option was not provided, and the default file 'topol' does not
>> exist or is not accessible.
>> The following extensions were tried to complete the file name:
>>   .tpr
>> 
>> I'm pretty sure that I have the .tpr files in the local directory. I have
>> read the previous threads of the GROMACS mailing list, and I know that
>> it could be a problem with the topology file. The topology file looks
>> good to me so far.
> 
> There's a typo in your command or the input file you think is there is not. 
> You didn't provide your mdrun command (please always do this) but I suspect 
> the former. If mdrun does not find the file you specify, it looks for the 
> default file name, which is topol.tpr. That's also not there, so you get a 
> fatal error.
> 
>> Here is the only thing I can suspect, but I don't know if this is the
>> cause, and I'm still wondering why: when I generated my system
>> topology using pdb2gmx
>> 
>> 
>> "Now there are 3969 atoms and 124 residues



Re: [gmx-users] Self-interaction across periodic boundaries

2019-07-09 Thread Paul Buscemi
Possibly turn PBC off and use NVT.
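
In .mdp terms that is roughly the following (a sketch; note that PME requires 
full periodicity, so electrostatics must fall back to a plain cut-off, and 
without a box there is no pressure to couple):

pbc          = no        ; isolated, non-periodic system
coulombtype  = Cut-off   ; PME cannot be used without PBC
pcoupl       = no        ; NVT rather than NPT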

PB

> On Jul 8, 2019, at 1:31 PM, Salman Zarrini  wrote:
> 
> Thanks Mark.
> The problem is that I would like to keep the overall concentration
> constant, so using a larger box, e.g. with 2 times larger lateral box
> dimensions,
> requires me to increase the number of molecules 8 times, so I am even more
> likely to end up with percolating structures in a larger simulation box.
> 
> Best regards,
> Salman Zarrini
> 
> On Mon, Jul 8, 2019 at 6:24 AM Mark Abraham 
> wrote:
> 
>> Hi,
>> 
>> If you're trying to model something like it was at infinite dilution using
>> a periodic box, then the size of the box needs to be at least as large as
>> the size of the structure and its effective interaction radius. It seems
>> like your simulation is suggesting at least one of those is larger than you
>> first thought it was :-)
>> 
>> Mark
>> 
>> On Mon, 8 Jul 2019 at 11:58, Salman Zarrini 
>> wrote:
>> 
>>> Dear all,
>>> Using MD simulations I expect to observe aggregation among some molecules
>>> solvated in water to have ultimately a droplet out of the molecules. The
>>> aggregates form to some extent in the course of simulation time, however,
>>> after a while the system become kinetically trapped in artificial
>>> percolating aggregates in which the molecules are self-interacting across
>>> the periodic boundaries.
>>> I wonder if there is any possibility to prevent aggregates
>> self-interaction
>>> across periodic boundaries?
>>> 
>>> Thank you,
>>> Salman
>>> --
>>> Best regards,
>>> 
>>> Salman Zarrini


Re: [gmx-users] clustering of ions during NPT simulation

2019-06-28 Thread Paul Buscemi
Are you sure you want 1 M and not 0.1 M?

PB

> On Jun 27, 2019, at 2:44 PM, Netaly Khazanov  wrote:
> 
> Hi All,
> I am performing a simulation of a transmembrane protein in a membrane at a 1 M
> concentration of NaCl.
> During the simulation, I noticed that the ions began to cluster and they are
> not evenly spread after 10 ns. Is this a problem, or can I proceed further?
> Thanks in advance,
> Netaly
> 
> -- 
> Netaly


Re: [gmx-users] appropriate force fielf

2019-06-25 Thread Paul Buscemi
Do it the easy way: find some literature that approximates your simulation 
and, after verifying their results with more literature, replicate their work.

PB

> On Jun 25, 2019, at 9:38 AM, starlight  wrote:
> 
> Hi, I want to perform some simulations to study the interaction of 2 very
> small peptides with each other in the water. I want to put these peptides
> separately in water to give a structure and then do a simulation of them
> with each other in water to study the peptide-peptide interaction. I need
> to know the position of the hydrogen bonds that form between these peptides
> in water.
> So I want to know the right force field and water model for
> these simulations. I found a few articles that apply some force fields to such
> simulations, but they don't say which ff is more appropriate than the
> others. Would you please help me with this and recommend some
> articles? Thank you


[gmx-users] dragging gro files into VMD

2019-06-19 Thread paul buscemi
Dear Users,

This is a semi-GROMACS question. I can use the command line "vmd protein.gro" to open a gro file, but if I use Linux's "open with vmd" the file will flash on (within VMD), then off, and VMD closes. I can drag the files into PyMOL with no problems.
Has anyone run across this issue?

thanks
Paul



Re: [gmx-users] Membrane protein simulation isotropic vs semiisotropic

2019-06-15 Thread paul buscemi
The pressure on a (real-life) membrane is not isotropic; the edges are under tension, so using pcoupltype = surface-tension, with water layers, is appropriate.

If you use pcoupltype = isotropic, you should end up with a micelle, because the hydrophobic effects are significant.
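
A minimal sketch of the surface-tension block in an .mdp (the numbers are illustrative assumptions, not recommendations):

pcoupl          = Parrinello-Rahman
pcoupltype      = surface-tension
ref_p           = 200.0  1.0     ; reference surface tension (bar nm), then z pressure (bar)
compressibility = 4.5e-5 4.5e-5  ; x-y, z
tau_p           = 5.0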


p

> On Jun 14, 2019, at 11:18 PM, Prasanth G, Research Scholar 
>  wrote:
> 
> Dear Bratin,
> 
> When I am using a semiisotropic condition the pbc box is
> deforming/compressing pushing the lipid bilayer apart. I am attaching the
> screenshots of the system at the beginning(normal.png) of production run as
> well as at the end of 30ns simulation (elongated.png) for your reference.
> 
> This was my production mdp (md.mdp) file:
> 
> title   = pro-DPP-LIG  Production MD
> ; Run parameters
> integrator  = md; leap-frog integrator
> nsteps  = 1500; 2 fs * 1500 = 3 ps
> dt  = 0.002 ; 2 fs
> ; Output control
> nstxout = 1000  ; save coordinates every 2 ps
> nstvout = 1000  ; save velocities every 2 ps
> nstxtcout   = 1000  ; xtc compressed trajectory output every 2 ps
> nstenergy   = 1000  ; save energies every 2 ps
> nstlog  = 1000  ; update log file every 2 ps
> ; Bond parameters
> continuation= yes   ; Restarting after NPT
> constraint_algorithm= lincs ; holonomic constraints
> constraints = all-bonds ; all bonds (even heavy atom-H bonds)
> constrained
> lincs_iter  = 1 ; accuracy of LINCS
> lincs_order = 4 ; also related to accuracy
> ; Neighborsearching
> ns_type = grid  ; search neighboring grid cells
> nstlist = 5 ; 10 fs
> rlist   = 1.2   ; short-range neighborlist cutoff (in nm)
> rcoulomb= 1.2   ; short-range electrostatic cutoff (in nm)
> rvdw= 1.2   ; short-range van der Waals cutoff (in nm)
> ; Electrostatics
> coulombtype = PME   ; Particle Mesh Ewald for long-range electrostatics
> pme_order   = 4 ; cubic interpolation
> fourierspacing  = 0.16  ; grid spacing for FFT
> ; Temperature coupling is on
> tcoupl  = Nose-Hoover   ; More accurate thermostat
> tc-grps = Protein_LIG_DPP   Water_and_ions  ;
> tau_t   = 0.5   0.5 ; time constant, in ps
> ref_t   = 323   323 ; reference temperature, one
> for each group, in K
> ; Pressure coupling is on
> pcoupl  = Parrinello-Rahman ; Pressure coupling on in NPT
> pcoupltype  = semiisotropic ; uniform scaling of x-y box vectors,
> independent z
> tau_p   = 2.0   ; time constant, in ps
> ref_p   = 1.0   1.0 ; reference pressure, x-y, z (in bar)
> compressibility = 4.5e-5    4.5e-5  ; isothermal compressibility, bar^-1
> ; Periodic boundary conditions
> pbc = xyz   ; 3-D PBC
> ; Dispersion correction
> DispCorr= EnerPres  ; account for cut-off vdW scheme
> ; Velocity generation
> gen_vel = no; Velocity generation is off
> ; COM motion removal
> ; These options remove motion of the protein/bilayer relative to the
> solvent/ions
> nstcomm = 1
> comm-mode   = Linear
> comm-grps   = Protein_LIG_DPP  Water_and_ions
> ---
> *LIG is ligand and
> DPP is DPPC.
> 
> Thank you.
> 
> On Fri, Jun 14, 2019 at 12:09 PM Prasanth G, Research Scholar <
> prasanthgha...@sssihl.edu.in> wrote:
> 
>> Dear all,
>> 
>> Can someone please tell me if it is okay to use isotropic pcoupltype for a
>> membrane protein simulation? Are there any disadvantages?
>> 
>> Also, why is semiisotropic preferred over isotropic, in membrane protein
>> simulations..
>> 
>> Thank you.
>> --
>> Regards,
>> Prasanth.
>> 
> 
> 
> -- 
> Regards,
> Prasanth.

Re: [gmx-users] Xeon W family vs scalable

2019-06-09 Thread paul buscemi
Hi,
Unless you are studying large systems (~10^6 atoms), do as Carsten suggests: get a good Intel or AMD 8-core and the best GPU you can afford. Don't worry too much about the speed of the CPU either; 3.2 GHz is fine.

p

> On Jun 9, 2019, at 6:43 AM, Kutzner, Carsten  wrote:
> 
> Hi,
> 
> don’t spend all your money on a CPU - for high GROMACS performance
> the GPU is as important. I would recommend to add an RTX 2070/80 GPU
> to the workstation, and get a cheaper CPU. This will most likely
> give you a significantly higher GROMACS performance. 
> See https://arxiv.org/abs/1903.05918
> 
> Best,
>  Carsten
> 
> 
>> On 9. Jun 2019, at 05:32, 강동우  wrote:
>> 
>> Dear all gromacs users,
>> 
>> I'm now about to configure workstation for gromacs.
>> 
>> I searched on the internet, and I could find some information from
>> servethehome.
>> 
>> But in here (
>> https://www.realworldtech.com/forum/?threadid=181985&curpostid=182058)
>> 
>> Somebody said that Xeon W-2145 will be faster than anything in the chart
>> (gold 6136, 6138, etc)
>> 
>> The Xeon W family is relatively cheaper than the Scalable family, but I cannot be sure
>> whether this is true or not, because I cannot find any benchmark data for the
>> W-family in GROMACS.
>> 
>> Does anybody know about the performance of xeon w-family in gromacs? My
>> system will not be large (<80K atoms with C1, C2, C3, and water molecules).
>> 
>> Dong Woo Kang

[gmx-users] proper use of restraints

2019-05-29 Thread paul buscemi

Dear Users,

I've modeled a simple polymer (nylon), 600 atoms, aligned in the x direction. I create a restraint file for one molecule using the top file indices and fx, fy, fz = 1 0 0, then place 10 to 100 copies in a box. On equilibration they exhibit the expected hydrogen bonding using the gromos54a7 ff. Under NPT, pcoupl = surface-tension, the molecules remain elongated and some coalesce into groups but do not fully group, with ref_p = 1 1 and compressibility = 4e-5 0. Just why the y direction does not shrink is a puzzle, but not the main concern. The x dimension of the box remains constant too, but I assumed that is because the ends of the polymers are fixed; yet there is room for packing in the y dimension.
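
For reference, a minimal sketch of such a restraint file (atom indices refer to the [moleculetype]; the force constant of 1000 kJ mol^-1 nm^-2 is illustrative, not the value used above):

[ position_restraints ]
;  ai   funct   fcx    fcy   fcz
    1     1     1000     0     0
    2     1     1000     0     0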

Under pcoupl = isotropic, the ends re-enter the opposite (x) sides as the box collapses in the x, y, z directions (the original box was xyz = 300 300 40 A). I was under the impression that under a relatively strong x restraint, the x direction would not collapse, because the x direction is fixed at the ends? Or is the way to look at it that the ends ARE fixed, but the world around them is shrinking?

Any comments (well, most any good ones) would be appreciated.


Reference is made to the restraint file both in the molecule itp and in the base top file as an include statement - which is probably redundant, if not outright wrong.

Paul

Re: [gmx-users] Use of Restraint itps

2019-05-22 Thread paul buscemi
Justin,  Certainly appreciate the help on this basic issue.

Getting closer…. I understand your response.

Now, to clarify - how does the mdp (or grompp) know which itp file among the #includes is to be used as the restraint file? Is that identified with the define = statement in the mdp? So if my restraint is MOL.posres.itp, would I state "define = -DMOL.posres.itp"?
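
For reference, the usual mechanism is an #ifdef guard in the topology plus a matching define in the .mdp - the define names a preprocessor symbol (the name is arbitrary), not a file:

; in the topology
#ifdef POSRES_MOL
#include "MOL.posres.itp"
#endif

; in the .mdp
define = -DPOSRES_MOL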

Paul

> On May 22, 2019, at 9:16 PM, Justin Lemkul  wrote:
> 
> On Wed, May 22, 2019 at 9:53 PM paul buscemi  wrote:
> 
>> Justin,
>> 
>> Thank you very much for the response.
>> 
>> Obviously  grompp will apply the restraint to the argument of -r
>> molecule.gro  and the  restraint.itp location is  in the top file  #include
>> statement.  ( my (ITP directory ).but how does grompp make the
>> association between the -r argument and the itp ?It would be more clear
>> to me if the -r argument was the restraint.itp file itself.  Where the (
>> missing ) link ?
>> 
> 
> The .itp specifies which atoms are restrained and how strongly. The
> coordinates passed to -r are the origin of the biasing potential, I.e. “if
> the atoms are here, the restraint force is zero.”
> 
> The coordinates and topology thus serve distinct but complementary
> functions.
> 
> -Justin
> 
> 
>> Paul
>> 
>>> On May 22, 2019, at 12:14 PM, Justin Lemkul  wrote:
>>> 
>>> 
>>> 
>>> On 5/22/19 10:41 AM, p buscemi wrote:
>>>> Dear Users,
>>>> In using restrain files, I place the restraint itp in a separated
>> directory in which there may be other restraint files.
>>>> I notice that within the restraint itp there is no specific reference
>> to the molecule used to create the itp. I've run into an instance in which
>> other than the intended itp was targeted by the molecule specified in the
>> grompp statement.
>>>> 
>>>> Is this as expected. ? Is there a mechanism to make the restraint itp
>> specific for the called-out molecule ?
>>>> Or - not completely out of the question - am I missing something ?
>>> 
>>> grompp is doing what you tell it. You specify which file to #include and
>> as long as grompp finds atom indices that are in range (e.g. start from 1
>> and do not exceed the number of atoms in the relevant [moleculetype]), then
>> it will happily do exactly what you're telling it to do.
>>> 
>>> If you're going to accumulate a lot of files in one place, you need to
>> be judicious in your file naming and what hacks you make to your .top file.
>>> 
>>> -Justin
>>> 
>>> --
>>> ==
>>> 
>>> Justin A. Lemkul, Ph.D.
>>> Assistant Professor
>>> Office: 301 Fralin Hall
>>> Lab: 303 Engel Hall
>>> 
>>> Virginia Tech Department of Biochemistry
>>> 340 West Campus Dr.
>>> Blacksburg, VA 24061
>>> 
>>> jalem...@vt.edu | (540) 231-3129
>>> http://www.thelemkullab.com
>>> 
>>> ==
>>> 
> -- 
> 
> ==
> 
> Justin A. Lemkul, Ph.D.
> 
> Assistant Professor
> 
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> 
> ==

Re: [gmx-users] Use of Restraint itps

2019-05-22 Thread paul buscemi
Justin,

Thank you very much for the response.  

Obviously grompp will apply the restraint to the argument of -r molecule.gro, and the restraint.itp location is in the top file #include statement (my ITP directory). But how does grompp make the association between the -r argument and the itp? It would be clearer to me if the -r argument were the restraint.itp file itself. Where is the (missing) link?

Paul

> On May 22, 2019, at 12:14 PM, Justin Lemkul  wrote:
> 
> 
> 
> On 5/22/19 10:41 AM, p buscemi wrote:
>> Dear Users,
>> In using restrain files, I place the restraint itp in a separated directory 
>> in which there may be other restraint files.
>> I notice that within the restraint itp there is no specific reference to the 
>> molecule used to create the itp. I've run into an instance in which other 
>> than the intended itp was targeted by the molecule specified in the grompp 
>> statement.
>> 
>> Is this as expected. ? Is there a mechanism to make the restraint itp 
>> specific for the called-out molecule ?
>> Or - not completely out of the question - am I missing something ?
> 
> grompp is doing what you tell it. You specify which file to #include and as 
> long as grompp finds atom indices that are in range (e.g. start from 1 and do 
> not exceed the number of atoms in the relevant [moleculetype]), then it will 
> happily do exactly what you're telling it to do.
> 
> If you're going to accumulate a lot of files in one place, you need to be 
> judicious in your file naming and what hacks you make to your .top file.
> 
> -Justin
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> ==
> 


Re: [gmx-users] number of coordinates in coordinate file does not match topology

2019-05-16 Thread paul buscemi
GROMACS is telling you what to do: make a new top file.

I do not think you can start a pdb from other than position 1, and the top must match exactly. It is probably more straightforward to delete the chain and save the pdb, then use pdb2gmx to recreate the top.
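
A minimal sketch of that route (file names are illustrative, not from the original post):

gmx pdb2gmx -f protein_chainA.pdb -o processed.gro -p topol.top
gmx solvate -cp processed.gro -cs spc216.gro -p topol.top -o solvated.gro
gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr

pdb2gmx rewrites the topology to match the edited structure, so the atom counts agree again.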
 

good luck

> On 16,May 2019, at 1:13 PM, mary ko  wrote:
> 
> Hello all
> I want to run a simulation of a protein from PDB data bank with a ligand. It 
> has two chains and I need only chain A. when I delete chain B in CHIMERA and 
> try to run the simulation, it stops at the gmx_mpi grompp -f ions.mdp -c 
> solve.pdb -p topol.top -o ions.tpr step with the error of number of 
> coordinates in the solve.pdb (143982) does not match the topol.top (143983). 
> I checked the .pdb file and it starts from residue 13. Do I get the error
> because the residues are not numbered from 1? I use the same method for the
> sorted files and they run without errors. Thank you


Re: [gmx-users] Energy from a subgroup of molecules

2019-04-12 Thread paul buscemi
Thank you Justin.  

Using energy groups is not really that bad.

By using gmx select 'atomname Cx and resname ADSORBATE and within 0.5 of resname SURFACE' -on near.ndx

I can find the atom(s) of the adsorbate that are proximal to the surface, and can track the LJ potential frame by frame and use VMD for further analysis.
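
For reference, the full rerun route in one sketch (the group and file names are assumptions; the ADSORBED group must exist in near.ndx):

; in rerun.mdp
energygrps = ADSORBED SURFACE

gmx grompp -f rerun.mdp -c system.gro -p topol.top -n near.ndx -o adsorb.ener_gp.tpr
gmx mdrun -s adsorb.ener_gp.tpr -rerun adsorb.npt.trr -deffnm rerun
gmx energy -f rerun.edr   ; then select e.g. LJ-SR:ADSORBED-SURFACE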

Paul

> On Apr 11, 2019, at 5:57 PM, Justin Lemkul  wrote:
> 
> 
> 
> On 4/11/19 5:39 PM, paul buscemi wrote:
>> Thank you for the response, Mark.
>> 
>> I do use the rerun tactic, and this is not too bad for a small number of 
>> molecules
>> 
>>  but is there a way to include the index information within the mdrun (rerun 
>> ) … something like
>> 
>> gmx mdrun  -s adsorb.ener_gp.tpr  -rerun adsorb,npt.trr   - n use_only.ndx  ?
>> 
>> or use the  indices within the grompp command ?
>> 
> 
> The only solution is what Mark proposed - create a .tpr file with the 
> required energygrps and use mdrun -rerun. mdrun does not accept an index file.
> 
> -Justin
> 
>> Paul
>> 
>>> On Apr 11, 2019, at 1:48 AM, Mark Abraham  wrote:
>>> 
>>> Hi,
>>> 
>>> You can't do that with gmx energy, as you need mdrun to understand the new
>>> grouping. But making a new .tpr with the energy groups so defined permits
>>> you to use gmx mdrun -rerun for such a single point energy evaluation.
>>> 
>>> Mark
>>> 
>>> On Wed., 10 Apr. 2019, 22:24 p buscemi,  wrote:
>>> 
>>>> Dear Users,
>>>> I've performed an adsorption experiment in which a fraction of molecules
>>>> in solution adsorb to a surface. I can extract the index of those adsorbed,
>>>> and I can obtain the total interaction ( LJ ) of the energy group with the
>>>> surface.
>>>> I can estimate the average interaction of the adsorbed molecules by
>>>> dividing the total energy by the number of molecules within a certain
>>>> distance ( the index number )
>>>> How might I use gmx energy to recalculate the interaction using the
>>>> original surface but only the adsorbed molecules specified in the index
>>>> file... something like
>>>> "gmx energy -f starting.gro -n index.ndx"
>>>> 
>>>> A single point calculation would be quite satisfactory.
>>>> thanks
>>>> Paul
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> ==
> 

Re: [gmx-users] Energy from a subgroup of molecules

2019-04-11 Thread paul buscemi
Thank you for the response, Mark.

I do use the rerun tactic, and this is not too bad for a small number of 
molecules

but is there a way to include the index information within the mdrun (rerun)… something like

gmx mdrun -s adsorb.ener_gp.tpr -rerun adsorb.npt.trr -n use_only.ndx ?

or use the  indices within the grompp command ?


Paul

> On Apr 11, 2019, at 1:48 AM, Mark Abraham  wrote:
> 
> Hi,
> 
> You can't do that with gmx energy, as you need mdrun to understand the new
> grouping. But making a new .tpr with the energy groups so defined permits
> you to use gmx mdrun -rerun for such a single point energy evaluation.
> 
> Mark
> 
> On Wed., 10 Apr. 2019, 22:24 p buscemi,  wrote:
> 
>> 
>> Dear Users,
>> I've performed an adsorption experiment in which a fraction of molecules
>> in solution adsorb to a surface. I can extract the index of those adsorbed,
>> and I can obtain the total interaction ( LJ ) of the energy group with the
>> surface.
>> I can estimate the average interaction of the adsorbed molecules by
>> dividing the total energy by the number of molecules within a certain
>> distance ( the index number )
>> How might I use gmx energy to recalculate the interaction using the
>> original surface but only the adsorbed molecules specified in the index
>> file... something like
>> "gmx energy -f starting.gro -n index.ndx"
>> 
>> A single point calculation would be quite satisfactory.
>> thanks
>> Paul

Re: [gmx-users] PDB file that can be read in Gromacs

2019-03-15 Thread paul buscemi
As Justin has pointed out, this process is well documented, and UNL is a generic residue name that the residue databases will not know. No matter what, you will have to do some legwork. Because I run into this almost daily, this may at least get you started. First try using x2top with a selected force field; if your molecule is not too strange, there is a good chance the force field will recognize it. If not, then modify or generate the n2t file (found in the ff folder); again, if the molecule is 'normal', only a few atom types may need to be made. Then convert the top file to an itp if you wish. If you can use the gromos54a7 ff, then try using ATB to generate an itp (which may not be a full QM parameterization).
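
A minimal sketch of the x2top route (the force-field choice and file names are illustrative):

gmx x2top -f ligand.pdb -o ligand.top -ff gromos54a7 -name LIG

If x2top reports atoms it cannot type, those are exactly the entries to add to the force field's n2t file.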

good luck
Paul 

> On Mar 14, 2019, at 2:27 PM, Justin Lemkul  wrote:
> 
> 
> 
> On 3/14/19 3:21 PM, Phuong Chau wrote:
>> Hello everyone,
>> 
>> I want to generate gromacs topology of a substance (a single chemical)
>> which has a pdb file generated by RDKIT from SMILES representation of that
>> substance (MolToPDBFile). However, when I input the pdb file generated by
>> RDKit, it showed the error of "Residue 'UNL' not found in residue topology
>> database".
>> 
>> The general idea is:
>> Input: Name of a substance (single chemical)
>> Output: pdb file of the substance (does not have to be generated by RDKit)
>> and the topology file of its susbtance that is generated by Gromacs.
>> 
>> Could anyone tell me any possible solution to solve this problem?
> 
> pdb2gmx isn't magic :)
> 
> http://manual.gromacs.org/current/user-guide/run-time-errors.html#residue-xxx-not-found-in-residue-topology-database
> 
> -Justin
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> ==
> 

Re: [gmx-users] info about gpus

2019-03-10 Thread paul buscemi
There is significant information on the web regarding this subject, but in general speed scales nicely with the number of cores. Massive memory is not required.

> On 31,Jan 2019, at 11:14 AM, Stefano Guglielmo  
> wrote:
> 
> Dear all,
> I am tryin to set a new workstation and I would like to know if there is a
> significant improvement in performance with two gpus (gtx 1080 ti or rtx
> 2080) rather than just one, and eventually with which cpu/ram requisite.
> 
> Thanks in advance for any advice and suggestions
> Stefano
> 
> -- 
> Stefano GUGLIELMO PhD
> Assistant Professor of Medicinal Chemistry
> Department of Drug Science and Technology
> Via P. Giuria 9
> 10125 Turin, ITALY
> ph. +39 (0)11 6707178


Re: [gmx-users] Itp for a longer molecule out of a shorter one

2019-03-06 Thread paul buscemi
Alex, is your PPO poly(propylene oxide) or poly(phenylene oxide)?

> On Mar 6, 2019, at 6:36 PM, Alex  wrote:
> 
> Thanks Paul,
> 
> On Wed, Mar 6, 2019 at 5:00 PM  wrote:
> 
>> Alex,
>> 
>> Having the itp for the shorter molecule you have most of what you need. Use
>> x2top to create the top file for the longer molecule. Adjust, if necessary,
>> the atomname2type.n2t file  in the ff  file to create any necessary atom
>> types being sure to select the proper ff.  Charges, bond lengths can be
>> taken from the existing pdb and itp when needed.  Use Avogadro for a quick
>> reference to model parameters.  I've made various models of Pebax , nylon
>> to
>> 100k's MW using this method.
>> 
> I am using gromos54a7 and there is no "atomname2type.n2t" in the gromos54a7
> directory which causes crashing the gmx x2top.
> 
>> 
>> Also ATB can get the itp for polymers up to 600 atoms if you use
>> gromos54a7
> 
> ff.
>> 
> Indeed I got the gromos54a7 FF for the shorter molecule (< 50 atoms) from
> ATB, however for larger system (> 50 atoms) the ATB just gives a
> semi-empirical parameterized FF which basically is just a TOP file template
> for large molecules.
> Best regards,
> Alex
> 
>> 
>> Hope this helps
>> 
>> Paul Buscemi, Ph.D.
>> UMN BICB
>> 
>> -Original Message-
>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
>>  On Behalf Of Alex
>> Sent: Wednesday, March 06, 2019 2:54 PM
>> To: gmx-us...@gromacs.org
>> Subject: [gmx-users] Itp for a longer molecule out of a shorter one
>> 
>> Dear all,
>> I have the itp file for a molecule (OH-[PPE]1-[PPO]2-[PPE]1-H   it is a
>> short surfactant), out of that itp, I am trying to create an itp file for a
>> longer molecule in the form of OH-[PPE]2-[PPO]16-[PPE]2-H where the PPE and
>> PPO parts are being repeated 2 and 16 times each. For each extra PPO, 10
>> atoms, 10 bonds 24 pairs, 19 angles and 4 dihedral entries would be added
>> to
>> the itp file. Doing that for a longer molecule is so tedious, so, I wonder
>> if anybody has already a script or tools for doing that?
>> I would be really appreciated.
>> Regards,
>> Alex


Re: [gmx-users] Itp for a longer molecule out of a shorter one

2019-03-06 Thread paul buscemi
Alex,

Just create the n2t file; x2top will read it from the force-field directory. The documentation explains this - it's a very simple format.
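
Each n2t line gives: element, assigned type, charge, mass, number of bonds, then an (element, bond length in nm) pair per bond. An illustrative sketch (values are assumptions, not from a shipped force field):

C    CH2    0.000   14.027   2    C 0.153   C 0.153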

How long is >50 for you? ATB can handle up to at least 600 atoms - it may have moved to 1000 by now. So if you can construct the polymer, ATB should provide the itp for you. From there it's plug and play. I've had good luck with ~100 molecules. Just be sure to add hydrogens and minimize the structure in your molecular modeler.


The x2top route is a bit more cumbersome, but it works.



> On Mar 6, 2019, at 6:36 PM, Alex  wrote:
> 
> Thanks Paul,
> 
> On Wed, Mar 6, 2019 at 5:00 PM  wrote:
> 
>> Alex,
>> 
>> Having the itp for the shorter molecule you have most of what you need. Use
>> x2top to create the top file for the longer molecule. Adjust, if necessary,
>> the atomname2type.n2t file  in the ff  file to create any necessary atom
>> types being sure to select the proper ff.  Charges, bond lengths can be
>> taken from the existing pdb and itp when needed.  Use Avogadro for a quick
>> reference to model parameters.  I've made various models of Pebax , nylon
>> to
>> 100k's MW using this method.
>> 
> I am using gromos54a7 and there is no "atomname2type.n2t" in the gromos54a7
> directory which causes crashing the gmx x2top.
> 
>> 
>> Also ATB can get the itp for polymers up to 600 atoms if you use
>> gromos54a7
> 
> ff.
>> 
> Indeed I got the gromos54a7 FF for the shorter molecule (< 50 atoms) from
> ATB, however for larger system (> 50 atoms) the ATB just gives a
> semi-empirical parameterized FF which basically is just a TOP file template
> for large molecules.
> Best regards,
> Alex
> 
>> 
>> Hope this helps
>> 
>> Paul Buscemi, Ph.D.
>> UMN BICB
>> 
>> -Original Message-
>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
>>  On Behalf Of Alex
>> Sent: Wednesday, March 06, 2019 2:54 PM
>> To: gmx-us...@gromacs.org
>> Subject: [gmx-users] Itp for a longer molecule out of a shorter one
>> 
>> Dear all,
>> I have the itp file for a molecule (OH-[PPE]1-[PPO]2-[PPE]1-H   it is a
>> short surfactant), out of that itp, I am trying to create an itp file for a
>> longer molecule in the form of OH-[PPE]2-[PPO]16-[PPE]2-H where the PPE and
>> PPO parts are being repeated 2 and 16 times each. For each extra PPO, 10
>> atoms, 10 bonds 24 pairs, 19 angles and 4 dihedral entries would be added
>> to
>> the itp file. Doing that for a longer molecule is so tedious, so, I wonder
>> if anybody has already a script or tools for doing that?
>> I would be really appreciated.
>> Regards,
>> Alex

Re: [gmx-users] x2top finds 10+ bonds on atoms

2019-02-07 Thread paul buscemi
Found the problem.

In using the -ff select option, x2top was being directed to an incorrect gromos54a7.ff directory - not the one in the local directory - with an incorrect n2t. x2top was apparently doing its best to make something.
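
One way to pin down which force-field directory gets used (a sketch; the path is an assumption):

export GMXLIB=/home/me/forcefields   # searched in addition to the installed share/gromacs/top
gmx x2top -f polymer.pdb -o polymer.top -ff gromos54a7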

> On Feb 7, 2019, at 7:07 PM, paul buscemi  wrote:
> 
> Dear gmx users,
> 
> I’ve been using x2top for some time with success..  I recently taken a 
> polymer whose itp was created with x2top and using the same 55a7 ff and 
> re-ran it as a demo-tutorial. 
> 
> In leaning how to use x2to and atomname2type I expected  the error “ Cannot 
> find forcefield for atom x”  This time I have many such errors on this 
> previously rune molecule as “Can not find force field  for atom C  with 
> bonds or atom N555 with 37 bonds.  The pdb  and gro files open just fine with 
> VMD and Avogadro.
> 
> Any hints what might cause such weird outcome ?  There is no log file to 
> include, and the pdb text looks normal - has the name “HETATM”  in the first 
> column
> 
> 
> Gromac 18.3  Linxux mint  Mat 20.1
> 
> 
> thanks
> Paul
> 

[gmx-users] x2top finds 10+ bonds on atoms

2019-02-07 Thread paul buscemi
Dear gmx users,

I've been using x2top for some time with success. I recently took a polymer whose itp was created with x2top and, using the same 54a7 ff, re-ran it as a demo-tutorial.

In learning how to use x2top and atomname2type I expected the error "Cannot find forcefield for atom x". This time I have many such errors on this previously run molecule, such as "Can not find force field for atom C with  bonds" or "atom N555 with 37 bonds". The pdb and gro files open just fine with VMD and Avogadro.

Any hints as to what might cause such a weird outcome? There is no log file to include, and the pdb text looks normal - it has "HETATM" in the first column.


GROMACS 2018.3, Linux Mint, MATE 20.1


thanks
Paul


Re: [gmx-users] info about gpus

2019-02-01 Thread paul buscemi
I've run these conditions recently. Scaling from a single 1080 Ti to a 2080 Ti is proportional to the core number. The increase I've seen with 1080 + 1080 is 1.3-1.5 times as fast, and with 1080 Ti + 2080 Ti perhaps 1.3-1.6 times as fast, depending on the model. In a million-atom membrane model the two 1080 Tis ran at 12 ns/day in an equilibrated NPT, and the 1080 Ti + 2080 Ti at ~15 ns/day. In either case, more than enough time to go get some coffee.

Personally I feel that unless you are building really large systems (10+ GPUs), and as long as you are working with models under about 200k atoms, the two 1080 Tis are just fine. Don't expect a doubling in any case. Unless you are really strapped for time, two 1080s may work; going from 150 ns/day to 200 is a good time saver for smaller systems.

If you Google GROMACS benchmarks you will find several articles on the issue.
PB
> On Feb 1, 2019, at 5:58 AM, Szilárd Páll  wrote:
> 
> Hi,
> 
> It greatly depends what your use-case is, i.e. simulation system and
> type of study (but if you want to scale I assume you want few longer
> trajectories).
> 
> Cheers,
> --
> Szilárd
> 
> On Thu, Jan 31, 2019 at 6:17 PM Stefano Guglielmo
>  wrote:
>> 
>> Dear all,
>> I am tryin to set a new workstation and I would like to know if there is a
>> significant improvement in performance with two gpus (gtx 1080 ti or rtx
>> 2080) rather than just one, and eventually with which cpu/ram requisite.
>> 
>> Thanks in advance for any advice and suggestions
>> Stefano
>> 
>> --
>> Stefano GUGLIELMO PhD
>> Assistant Professor of Medicinal Chemistry
>> Department of Drug Science and Technology
>> Via P. Giuria 9
>> 10125 Turin, ITALY
>> ph. +39 (0)11 6707178

Re: [gmx-users] Density of a droplet in spherical coordinate

2019-01-24 Thread paul buscemi
You MIGHT be able to use the VMD plugin membplugin to measure the density (https://sourceforge.net/p/membplugin/wiki/Home/), treating the sphere as a membrane. Or, if you know the total density and the radius, you should be able to construct an integral to fit the data - similar to the following:
https://math.boisestate.edu/~jaimos/classes/m175-45-summer2014/notes/notes1-4.pdf

Or - since you say each molecule type - make a block or sphere of each one and use gmx energy.

Paul

> On Jan 24, 2019, at 11:25 AM, Alex  wrote:
> 
> Dear gmx user,
> I have a droplet of some short molecules (it is not perfectly spherical
> though), I was wondering how to calculate the density of each molecule type
> respect to center of the droplet. In other words the question is how the
> density of each molecule type varies in the spherical coordinate along
> radius of droplet (r), theta and phi. I am not sure about gmx density as
> the output is just a function of r apparently.
> Any help or idea is highly appreciated.
> 
> Thanks
> Alex

Re: [gmx-users] different nvidia-smi/gmx GPU_IDs

2019-01-18 Thread paul buscemi
Szilard,

Is the environment variable set at build time?

thanks
Paul

> On Jan 18, 2019, at 12:36 PM, Szilárd Páll  wrote:
> 
> Hi,
> 
> The CUDA runtime tries (and AFAIK has always tried) to be smart about
> device order which is what GROMACS will see in its detection. The
> nvidia-smi monitoring tools however uses a different mechanism for
> enumeration that will always respect the PCI identifier of the devices (~
> the order of cards/slots in the box).
> 
> This can of course cause some headache in mixed setups, but you can set the
> CUDA_DEVICE_ORDER=PCI_BUS_ID environment variable to tell the runtime to
> avoid reordering the GPUs and expose them ordered by bus ID.
> 
> Cheers,
> --
> Szilárd
> 
> 
> On Sun, Jan 13, 2019 at 2:27 PM Tamas Hegedus  wrote:
> 
>> Hi,
>> 
>> I have a node with 4 nvidia GPUs.
>> From nvidia-smi output:
>>  0  Quadro P6000
>>  1  GeForce RTX 208
>>  2  GeForce GTX 108
>>  3  GeForce RTX 208
>> 
>> However, the order of GPUs is differently detected by gmx 2018.3
>> Number of GPUs detected: 4
>> #0: NVIDIA GeForce RTX 2080 Ti
>> #1: NVIDIA GeForce RTX 2080 Ti
>> #2: NVIDIA Quadro P6000
>> #3: NVIDIA GeForce GTX 1080 Ti
>> 
>> Why is this? This makes difficult to plan/check the running of two gmx
>> jobs on the same node.
>> 
>> Thanks for your suggestion.
>> 
>> Tamas
>> 
>> --
>> Tamas Hegedus, PhD
>> Senior Research Fellow
>> MTA-SE Molecular Biophysics Research Group
>> Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
>> Semmelweis University  | fax:   (36) 1-266 6656
>> Tuzolto utca 37-47 | mailto:ta...@hegelab.org
>> Budapest, 1094, Hungary| http://www.hegelab.org
>> 

Re: [gmx-users] gmx 2019 running problems

2019-01-14 Thread paul buscemi
One  other  suggestion:

From the PPA repository, install the Nvidia 410 driver. Your CUDA 9 install may work with the 410 driver, but more likely you will need to reinstall CUDA.
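
A sketch of that route on Ubuntu-based systems (package names were current for that driver series; treat them as assumptions):

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-410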

If so, install the CUDA 10 toolkit, but DO NOT use the toolkit to install the driver when asked; it will revert to the Nvidia v384 driver.

Hope it works out for your monster 

Paul

> On Jan 14, 2019, at 2:06 PM, Tamas Hegedus  wrote:
> 
> Hi,
> 
> I tried to install and use gmx 2019 on a single node computer with 4 GPUs.
> 
> I think that the build was ok, but the running is...
> There is only workload on 4 cores (-nt 16) and
> there is no workload on the GPUs at all.
> 
> gmx 2018 was deployed on the same computer with the same tools and libraries.
> 
> CPU 16cores + 16threads
> GPU 1080Ti
> 
> cmake -j 16 -DCMAKE_C_COMPILER=gcc-6 -DCMAKE_CXX_COMPILER=g++-6 
> -DCMAKE_INSTALL_PREFIX=$HOME/opt/gromacs-2019-gpu -DGMX_GPU=ON 
> -DCMAKE_PREFIX_PATH=$HOME/opt/OpenBLAS-0.2.20 
> -DFFTWF_LIBRARY=$HOME/opt/fftw-3.3.7/lib/libfftw3f.so 
> -DFFTWF_INCLUDE_DIR=$HOME/opt/fftw-3.3.7/include ../ | tee out.cmake
> 
> -- Looking for NVIDIA GPUs present in the system
> -- Number of NVIDIA GPUs detected: 4
> -- Found CUDA: /usr (found suitable version "9.1", minimum required is "7.0")
> 
> make -j16
> make -j16 install # note: a lot of building happened also in this step
> 
> **
> gmx mdrun -nt 16 -ntmpi 4 -gputasks 0123 -nb gpu -bonded gpu -pme gpu -npme 1 
> -pin on -v -deffnm md_2 -s md_2_500ns.tpr -cpi md_2.1.cpt -noappend
> 
> +-+
> | NVIDIA-SMI 390.48 Driver Version: 390.48  |
> |---+--+--+
> | GPU  NamePersistence-M| Bus-IdDisp.A | Volatile Uncorr. ECC 
> |
> | Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
> |===+==+==|
> |   0  GeForce GTX 108...  Off  | :02:00.0 Off |  N/A |
> |  0%   28CP819W / 250W |179MiB / 11178MiB |  0% Default |
> +---+--+--+
> |   1  GeForce GTX 108...  Off  | :03:00.0 Off |  N/A |
> |  0%   28CP8 8W / 250W |179MiB / 11178MiB |  0% Default |
> +---+--+--+
> |   2  GeForce GTX 108...  Off  | :83:00.0 Off |  N/A |
> |  0%   28CP8 9W / 250W |179MiB / 11178MiB |  0% Default |
> +---+--+--+
> |   3  GeForce GTX 108...  Off  | :84:00.0 Off |  N/A |
> |  0%   27CP8 9W / 250W |237MiB / 11178MiB |  0% Default |
> +---+--+--+
> 
> +-+
> | Processes:   GPU Memory 
> |
> |  GPU   PID   Type   Process name Usage  
> |
> |=|
> |0 20243  C   gmx 161MiB |
> |1 20243  C   gmx 161MiB |
> |2 20243  C   gmx 161MiB |
> |3 20243  C   gmx 219MiB |
> +-+
> 
> Thanks for your suggestions,
> Tamas
> 
> -- 
> Tamas Hegedus, PhD
> Senior Research Fellow
> MTA-SE Molecular Biophysics Research Group
> Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
> Semmelweis University  | fax:   (36) 1-266 6656
> Tuzolto utca 37-47 | mailto:ta...@hegelab.org
> Budapest, 1094, Hungary| http://www.hegelab.org


Re: [gmx-users] gmx 2019 running problems

2019-01-14 Thread paul buscemi
Tamas

I've the same build as you (almost… 2 GPUs). I found good results using one change: from -nt 16 to -ntomp 4, which should map the GPUs and tasks in the same manner but may be handled differently by mdrun. These two versions run with different efficiency on my rig.
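
A sketch of the variant I mean (same GPU mapping as Tamas's command, with the OpenMP thread count set explicitly):

gmx mdrun -ntmpi 4 -ntomp 4 -gputasks 0123 -nb gpu -bonded gpu -pme gpu -npme 1 -pin on -v -deffnm md_2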

Paul

> On Jan 14, 2019, at 2:06 PM, Tamas Hegedus  wrote:
> 
> Hi,
> 
> I tried to install and use gmx 2019 on a single node computer with 4 GPUs.
> 
> I think that the build was ok, but the running is...
> There is only workload on 4 cores (-nt 16) and
> there is no workload on the GPUs at all.
> 
> gmx 2018 was deployed on the same computer with the same tools and libraries.
> 
> CPU 16cores + 16threads
> GPU 1080Ti
> 
> cmake -j 16 -DCMAKE_C_COMPILER=gcc-6 -DCMAKE_CXX_COMPILER=g++-6 
> -DCMAKE_INSTALL_PREFIX=$HOME/opt/gromacs-2019-gpu -DGMX_GPU=ON 
> -DCMAKE_PREFIX_PATH=$HOME/opt/OpenBLAS-0.2.20 
> -DFFTWF_LIBRARY=$HOME/opt/fftw-3.3.7/lib/libfftw3f.so 
> -DFFTWF_INCLUDE_DIR=$HOME/opt/fftw-3.3.7/include ../ | tee out.cmake
> 
> -- Looking for NVIDIA GPUs present in the system
> -- Number of NVIDIA GPUs detected: 4
> -- Found CUDA: /usr (found suitable version "9.1", minimum required is "7.0")
> 
> make -j16
> make -j16 install # note: a lot of building happened also in this step
> 
> **
> gmx mdrun -nt 16 -ntmpi 4 -gputasks 0123 -nb gpu -bonded gpu -pme gpu -npme 1 
> -pin on -v -deffnm md_2 -s md_2_500ns.tpr -cpi md_2.1.cpt -noappend
> 
> +-+
> | NVIDIA-SMI 390.48 Driver Version: 390.48  |
> |---+--+--+
> | GPU  NamePersistence-M| Bus-IdDisp.A | Volatile Uncorr. ECC 
> |
> | Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
> |===+==+==|
> |   0  GeForce GTX 108...  Off  | :02:00.0 Off |  N/A |
> |  0%   28CP819W / 250W |179MiB / 11178MiB |  0% Default |
> +---+--+--+
> |   1  GeForce GTX 108...  Off  | :03:00.0 Off |  N/A |
> |  0%   28CP8 8W / 250W |179MiB / 11178MiB |  0% Default |
> +---+--+--+
> |   2  GeForce GTX 108...  Off  | :83:00.0 Off |  N/A |
> |  0%   28CP8 9W / 250W |179MiB / 11178MiB |  0% Default |
> +---+--+--+
> |   3  GeForce GTX 108...  Off  | :84:00.0 Off |  N/A |
> |  0%   27CP8 9W / 250W |237MiB / 11178MiB |  0% Default |
> +---+--+--+
> 
> +-+
> | Processes:   GPU Memory 
> |
> |  GPU   PID   Type   Process name Usage  
> |
> |=|
> |0 20243  C   gmx 161MiB |
> |1 20243  C   gmx 161MiB |
> |2 20243  C   gmx 161MiB |
> |3 20243  C   gmx 219MiB |
> +-+
> 
> Thanks for your suggestions,
> Tamas
> 
> -- 
> Tamas Hegedus, PhD
> Senior Research Fellow
> MTA-SE Molecular Biophysics Research Group
> Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
> Semmelweis University  | fax:   (36) 1-266 6656
> Tuzolto utca 37-47 | mailto:ta...@hegelab.org
> Budapest, 1094, Hungary| http://www.hegelab.org


Re: [gmx-users] Results of villin headpiece with AMD 8 core

2019-01-12 Thread paul buscemi
Mirco,

on the modification - nicely done.
On the system speed: running Maestro-Desmond ( one core ), the 1080ti is pegged 
and usually at 90% power - the folks at Schrodinger know what they are doing. 
So the base speed is apparently sufficient; it's some other factor, e.g. the 
workload distribution, that is not optimized.

I’ll work with your files tomorrow and let you know how it turns out - thanks.

Have a great weekend

Paul

> On Jan 12, 2019, at 3:11 PM, Wahab Mirco  
> wrote:
> 
> Hi Paul,
> 
> thanks for your reply.
> 
> On 11.01.2019 23:20, paul buscemi wrote:
>> Getting the ion and SOL concentration correct in the top is trickier ( for 
>> me ) than it should have been. If you happen to reuse both solvate and 
>> genion during the build, keeping track of the top is like using a digital 
>> Rubik's cube..! The charge on the villin was +1 because after I downloaded 
>> it from the pdb I removed all other water and ions - it just made pdb2gmx 
>> easier to work with.
>> 
> 
> I simply hand-edited the .gro, making up two ions and putting them
> somewhere near the corners, and added a short energy minimization.
> Then, I added one line to the .top for the ions.
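> 
> A sketch of what that one line looks like in context (counts illustrative; 
> the ion must also have a [ moleculetype ] in the included force-field files):
> 
>   [ molecules ]
>   Protein     1
>   SOL      3000
>   CL          2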
> 
>> The 1080 scaled nicely with the 1080 ti - these are really nice pieces of 
>> hardware. And you are correct: given the choice of more processors vs 
>> faster processors, choose the latter. I have the AMD OC'd to 4.0 GHz and it 
>> runs the same model almost as fast as a 32-core AMD at 3.7 GHz.
> 
> Your system is possibly too slow to saturate the 1080Ti at this small
> system size. In a much larger system, the lead of the 1080 Ti over the
> 1080 may possibly reach the theoretical expectation.
> 
>> I've run 300k DPPC models ( ~300 DPPC molecules ) and they run at ~15 ns/day 
>> in NPT. And yes, if you can send the pdb, top, and itps it would be 
>> interesting to compare the two AMDs.
>> 
> 
> I did upload the stuff here + a readme-file. This system is much too
> large for a single box + GPU (for productive runs), but maybe in 5 years
> or so we can watch capillary waves through connected IMD/VMD in real-
> time ;)
> 
> => http://suwos.gibtsfei.net/d.dppc.4096.zip
> 
> Regards
> 
> Mirco
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Results of villin headpiece with AMD 8 core

2019-01-11 Thread paul buscemi
Dear M,

Yup, the timestep is enormous, but the Gromacs demo used 0.005 ps. At that 
point the 2700 and the Ti started flinging water molecules off into the ozone. 
For the very short runs at 0.005, ~1400 ns/day was being hit. Usually the ts I 
use is 0.002.

Getting the ion and SOL concentration correct in the top is trickier ( for me ) 
than it should have been. If you happen to reuse both solvate and genion 
during the build, keeping track of the top is like using a digital Rubik's 
cube..! The charge on the villin was +1 because after I downloaded it from the 
pdb I removed all other water and ions - it just made pdb2gmx easier to work 
with.

Interesting that while adjusting ntmpi and ntomp made some difference, the 
biggest influence came from the rvdw and rcoulomb cutoffs and rvdw-switch; they 
are apparently coupled in some manner. I went as far as rvdw of 1.6, with much 
worse results. But not much is said about these parameters, as compared to 
setting mpi and threads, in regard to efficiency. But this is just my 
inexperience showing.
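
For anyone following along, the mdp entries in question look like this (values 
illustrative, not a recommendation):

   cutoff-scheme = Verlet
   vdwtype       = cut-off
   vdw-modifier  = force-switch
   rvdw-switch   = 0.9    ; where the LJ force switch begins (nm)
   rvdw          = 1.0    ; LJ cutoff (nm)
   rcoulomb      = 1.0    ; real-space Coulomb cutoff (with PME)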

The 1080 scaled nicely with the 1080 ti - these are really nice pieces of 
hardware. And you are correct: given the choice of more processors vs faster 
processors, choose the latter. I have the AMD OC'd to 4.0 GHz and it runs the 
same model almost as fast as a 32-core AMD at 3.7 GHz.

I've run 300k DPPC models ( ~300 DPPC molecules ) and they run at ~15 ns/day in 
NPT. And yes, if you can send the pdb, top, and itps it would be interesting 
to compare the two AMDs.

Best
Paul






> On Jan 11, 2019, at 3:27 PM, Wahab Mirco  
> wrote:
> 
> On 11.01.2019 19:55, pbuscemi wrote:
>> For those of you considering a workstation build and wonder about AMD 
>> processors I have the following results using the included npt and log intro 
>> for the villin headpiece in ~ 8000 atoms spc/e. The npt was run from a 
>> similar nvt ( 10 steps ) . The best results were achieved with the 
>> simplest command line - letting Gromacs choose threads.
>> The system became unstable at dt =0.005 ns step. Note the close 
>> correspondence between rcoulomb, rvdw and cutoffswitch. Results compare 
>> favorably with the E5-2690+GTX Titan demo
>> http://on-demand.gputechconf.com/gtc/2013/webinar/gromacs-kepler-gpus-gtc-express-webinar.pdf
>>  
>> (https://link.getmailspring.com/link/1547231722.local-ad2d5ea3-b061-v1.5.2-31660...@getmailspring.com/0?redirect=http%3A%2F%2Fon-demand.gputechconf.com%2Fgtc%2F2013%2Fwebinar%2Fgromacs-kepler-gpus-gtc-express-webinar.pdf&recipient=Z214LXVzZXJzQGdyb21hY3Mub3Jn)
>> Core t (s) Wall t (s) (%)
>> Time: 112.643 14.080 800.0
>> (ns/day) (hour/ns)
>> Performance: 1288.622 0.019
> 
> Hi Paul,
> 
> I couldn't resist testing this on my R2700X box, which has a
> GTX 1080, because this allows me to see the difference
> from the GTX 1080 Ti. According to your md.log, the boxes
> and the OS are very similar; I only had some problems at first
> getting the old villin benchmark to run with your mdp-file
> (I added two chloride ions and changed the solvent to SPC/E
> in the original configuration). BTW, you used a rather
> large timestep in your test.
> 
> So, this would be your 1080 Ti results:
> 
>   Number of GPUs detected: 1
>   #0: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC: no, stat: comp
>   [ ... ]
>   starting mdrun [ ... ]
>   5 steps,210.0 ps.
>   Writing final coordinates.
>  Core t (s)   Wall t (s)(%)
>  Time:  112.643   14.080  800.0
>(ns/day)(hour/ns)
>   Performance: 1288.6220.019
> 
> 
> 
> And this is the run on an almost identical system with
> a GTX 1080 (Palit Super-JetStream):
> 
>   Number of GPUs detected: 1
>   #0: NVIDIA GeForce GTX 1080, compute cap.: 6.1, ECC:  no, stat: comp
>   [ ... ]
>   starting mdrun 'VILLIN in water'
>   5 steps,210.0 ps.
>   Writing final coordinates.
>  Core t (s)   Wall t (s)(%)
>  Time:  147.840   18.480  800.0
>(ns/day)(hour/ns)
>   Performance:  981.8410.024
> 
> 
> On the (small) villin benchmark, the 1080 Ti would be
> about 31% faster than the GTX 1080. The raw SP float power
> of the 1080 Ti is about 40% higher than the 1080 (11,340
> GFLOPS vs 8,228 GFLOPS), which means a faster processor
> could possibly help here.
> 
> 
> BTW: I have a large membrane benchmark (DPPC/water, 1.2M atoms,
> 35x35x13 nm³ box) which runs at about 3 ns/d on the GTX 1080 with
> Parrinello-Rahman semiisotropic coupling; if you'd like to torture
> your box I can provide it ;)
> 
> Regards
> 
> M.
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to 

Re: [gmx-users] AMD 32 core TR

2019-01-09 Thread paul buscemi
Tomas,

Again, thanks for the response.

On re-reading the data Exxact Corp sent ( always helps to review ), they did 
use a 2080ti and two Xeons.

Your point on maxing out the GPU is interesting.  On the 8 core ( 16T ) the GPU 
is maxed out, as you inferred, but not on the 32 core ( 64T ), with which the 
GPU runs at 60-85% depending on the model - but still at greater efficiency 
than the 8 core ( ~60 vs 50 ns/day ). I’ve never been able to max out the GTX 
1080 TI on the 32 core system.

The primary reason for the 32 core is to support two GPUs, and there the 
throughput increases from ( e.g. ) 70 ns/day to 90-100 ns/day with ~150k 
atoms. I never expected a tremendous increase from increasing the core count.

I feel a little uncomfortable citing such generalities. So for now I will 
state that I am satisfied with the outcome, and that users who build single 
workstations can expect to match professionally assembled systems.


> On Jan 6, 2019, at 5:07 AM, Tamas Hegedus  wrote:
> 
> It also comes to my mind that the AMD 32-core does not have double the 
> performance of the AMD 16-core. Because it is manufactured as 4 dies (4x8 CPU 
> units), not every die has the same bandwidth to the RAM. But this is said to 
> affect only memory-hungry applications, so at this moment I think it does not 
> affect gmx runs much. I hope to try a TR 2990WX (AMD 32-core) in 1-2 months.
> 
> On 2019. 01. 04. 4:18, paul buscemi wrote:
>> Tamas,  thanks for the response.
>> In previous posts I mention using a single gtx 1080ti, sorry for not making 
>> it clear in the last post.
>>On the 8 core AMD and an Intel 6 core I am running Cuda 10 with Gromacs 
>> 18.3 with no issues.  I believe the larger factor in the slowness of the 32 
>> core was in having the runtime Cuda 7 with the cuda 10 drivers.  On the 8 
>> core, runtime cuda 9.1 and cuda 10 drivers work well together  - all with 
>> Gromacs 18.3.  Now with  Gromacs v19 , Cuda 10 and the 410 nvidia drivers, 
>> the 8 core and 32 core systems seem quite content.
>> I have been tracing results from the log, and you are correct in what it can 
>> tell you.  It was the log file that actually brought my attention to the 
>> Cuda 7 runtime issue. Also the PP PME distributions were noted with the 
>> ntomp/ntmpi arrangements. I have been experimenting with those as suggested 
>> in the Gromacs acceleration hints.
>> By 10% I meant that the 32 core unit ( in my hands ) ran 10% faster  in 
>> ns/day than the 8 core AMD using the same model system and the same 
>> 1080ti  GPU.  Gromacs points out that  150k to 300k atom systems are on the 
>> rather small side and so not to expect tremendous differences from the CPU.  
>> The reason for using the 32 core is the eventual addition of a second GPU 
>> and the subsequent distribution of threads.
>> With a little OC and tweaking of the fourier spacing and vdw cutoffs in the 
>> npt I edged the 137k atom ADH model to 57 ns/day, but this falls short of 
>> the Exxact corp benchmarks of 80-90 ns/d —  assuming they are using a 
>> 1080ti.  Schrodinger’s Maestro- with the 8 core AMD and 1080ti -  runs a 
>> 300k membrane model at about 15 ns/d  but a  60k atom model at 150 ns/day 
>> implying 30 ns/day for 300k atoms. In general, if I can indeed maintain 
>> 20-25 ns/day for 300k atoms I’d be satisfied.  The original posts were made 
>> because I was frustrated seeing 6 to 8 ns/d with the 32core machine and the 
>> 8 core was producing 20 ns/day.   As I mentioned the wounds were self 
>> inflicted  with the installation of Cuda runtime 7 and at one point 
>> compilation with g++-5.   As far as I am concerned it’s imperative that the 
>> latest drivers and Gromacs versions be used or at least the same  genre of 
>> drivers and versions be assembled.
>> Again, I’d like to point out that in using four different machines, 4 
>> different Intel and AMD  CPU’s, 5 different MBs,  5 different GPU’s, now 4 
>> progressive versions of Gromacs, and model systems of 200-300 k particles, 
>> I’ve not run across a single problem associated with the software or 
>> hardware per se but rather was caused by the my models or my compilation 
>> methods.
>> Hope this addresses your questions and helps any other users contemplating 
>> using a Ryzen TR.
>> Paul
>>> On Jan 3, 2019, at 2:09 PM, Tamas Hegedus  wrote:
>>> 
>>> Please provide more information.
>>> 
>>> If you use gmx 2018 then I think that gmx limits the gcc version to 6 and 
>>> not cuda 10.
>>> 
>>> You did not specify what type of and how many GPUs you use.
>>> 
>>> In addition, the choice of gmx for distributing computation co

Re: [gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI

2019-01-08 Thread paul buscemi


> On Jan 8, 2019, at 6:29 PM, paul buscemi  wrote:
> 
> I just built from a similar situation but also went to Linux Mint 19 Tara 
> (Ubuntu-based), cuda runtime 10 ( used the Nvidia web site .run version, not 
> the deb - do not install the driver from the toolkit -- add the 410 driver 
> from the PPA ). The system is quite happy. Forgot to add: use gcc-6, and 
> Gromacs v19. Under 3 hrs for the entire installation. It’s really fairly 
> painless.
> 
> I believe I ran across some information that suggests that the mixture of 
> Runtime 8, Cuda driver 9 and Ubuntu 16 is not a good mix. I’ll try to look 
> for it later if you need further information.
> 
> Paul
> 
>> On Jan 8, 2019, at 2:14 PM, David van der Spoel  wrote:
>> 
>> Den 2019-01-08 kl. 20:33, skrev Adarsh V. K.:
>>> Dear all,
>>> recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new
>>> GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
>>> Problem: GPU not detected during MD run. Details are as follows:
>> 
>> Try upgrading to gromacs 2019.
>> 
>>> 1) Running on 1 node with total 8 cores, 8 logical cores, 0 compatible GPUs
>>> Hardware detected:
>>> But deviceQuery as follows
>>> 2) ./deviceQuery
>>> ./deviceQuery Starting...
>>> CUDA Device Query (Runtime API) version (CUDART static linking)
>>> Detected 1 CUDA Capable device(s)
>>> Device 0: "GeForce GTX 1080 Ti"
>>>  CUDA Driver Version / Runtime Version  9.0 / 8.0
>>>  CUDA Capability Major/Minor version number:6.1
>>>  Total amount of global memory: 11169 MBytes (11711807488
>>> bytes)
>>>  (28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
>>>  GPU Max Clock rate:1658 MHz (1.66 GHz)
>>>  Memory Clock rate: 5505 Mhz
>>>  Memory Bus Width:  352-bit
>>>  L2 Cache Size: 2883584 bytes
>>>  Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072,
>>> 65536), 3D=(16384, 16384, 16384)
>>>  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
>>>  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048
>>> layers
>>>  Total amount of constant memory:   65536 bytes
>>>  Total amount of shared memory per block:   49152 bytes
>>>  Total number of registers available per block: 65536
>>>  Warp size: 32
>>>  Maximum number of threads per multiprocessor:  2048
>>>  Maximum number of threads per block:   1024
>>>  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
>>>  Max dimension size of a grid size(x,y,z): (2147483647, 65535, 65535)
>>>  Maximum memory pitch:  2147483647 bytes
>>>  Texture alignment: 512 bytes
>>>  Concurrent copy and kernel execution:  Yes with 2 copy engine(s)
>>>  Run time limit on kernels: Yes
>>>  Integrated GPU sharing Host Memory:No
>>>  Support host page-locked memory mapping:   Yes
>>>  Alignment requirement for Surfaces:Yes
>>>  Device has ECC support:Disabled
>>>  Device supports Unified Addressing (UVA):  Yes
>>>  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
>>>  Compute Mode:
>>> < Default (multiple host threads can use ::cudaSetDevice() with device
>>> simultaneously) >
>>> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime
>>> Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
>>> Result = PASS
>> 
>> 
>> -- 
>> David van der Spoel, Ph.D., Professor of Biology
>> Head of Department, Cell & Molecular Biology, Uppsala University.
>> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
>> http://www.icm.uu.se
>> -- 
>> Gromacs Users mailing list
>> 
>> * Please search the archive at 
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
>> mail to gmx-users-requ...@gromacs.org.
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI

2019-01-08 Thread paul buscemi
I just built from a similar situation but also went to Linux Mint 19 Tara 
(Ubuntu-based), cuda runtime 10 ( used the Nvidia web site .run version, not 
the deb - do not install the driver from the toolkit ), added the 410 driver 
from the PPA, and the system is quite happy.

I believe I ran across some information that suggests that the mixture of 
Runtime 8, Cuda driver 9 and Ubuntu 16 is not a good mix. I’ll try to look for 
it later if you need further information.

Paul

> On Jan 8, 2019, at 2:14 PM, David van der Spoel  wrote:
> 
> Den 2019-01-08 kl. 20:33, skrev Adarsh V. K.:
>> Dear all,
>> recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new
>> GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
>> Problem: GPU not detected during MD run. Details are as follows:
> 
> Try upgrading to gromacs 2019.
> 
>> 1) Running on 1 node with total 8 cores, 8 logical cores, 0 compatible GPUs
>> Hardware detected:
>> But deviceQuery as follows
>> 2) ./deviceQuery
>> ./deviceQuery Starting...
>>  CUDA Device Query (Runtime API) version (CUDART static linking)
>> Detected 1 CUDA Capable device(s)
>> Device 0: "GeForce GTX 1080 Ti"
>>   CUDA Driver Version / Runtime Version  9.0 / 8.0
>>   CUDA Capability Major/Minor version number:6.1
>>   Total amount of global memory: 11169 MBytes (11711807488
>> bytes)
>>   (28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
>>   GPU Max Clock rate:1658 MHz (1.66 GHz)
>>   Memory Clock rate: 5505 Mhz
>>   Memory Bus Width:  352-bit
>>   L2 Cache Size: 2883584 bytes
>>   Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072,
>> 65536), 3D=(16384, 16384, 16384)
>>   Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
>>   Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048
>> layers
>>   Total amount of constant memory:   65536 bytes
>>   Total amount of shared memory per block:   49152 bytes
>>   Total number of registers available per block: 65536
>>   Warp size: 32
>>   Maximum number of threads per multiprocessor:  2048
>>   Maximum number of threads per block:   1024
>>   Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
>>   Max dimension size of a grid size(x,y,z): (2147483647, 65535, 65535)
>>   Maximum memory pitch:  2147483647 bytes
>>   Texture alignment: 512 bytes
>>   Concurrent copy and kernel execution:  Yes with 2 copy engine(s)
>>   Run time limit on kernels: Yes
>>   Integrated GPU sharing Host Memory:No
>>   Support host page-locked memory mapping:   Yes
>>   Alignment requirement for Surfaces:Yes
>>   Device has ECC support:Disabled
>>   Device supports Unified Addressing (UVA):  Yes
>>   Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
>>   Compute Mode:
>>  < Default (multiple host threads can use ::cudaSetDevice() with device
>> simultaneously) >
>> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime
>> Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
>> Result = PASS
> 
> 
> -- 
> David van der Spoel, Ph.D., Professor of Biology
> Head of Department, Cell & Molecular Biology, Uppsala University.
> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> http://www.icm.uu.se
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gmx hbond

2019-01-06 Thread paul buscemi
Use VMD/extensions/hydrogen bonds
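
If you want to stay within Gromacs, one approximate route (file and residue 
names illustrative; this assumes the surface sits at z = 0 and the amino acid 
is resname ARG): build a slab index group with gmx select and feed it to gmx 
hbond, e.g.

   gmx select -f md.xtc -s md.tpr -on slab1.ndx \
       -select 'resname ARG; resname SOL and z < 0.2'
   gmx hbond -f md.xtc -s md.tpr -n slab1.ndx -num hb_slab1.xvg

then repeat with 'z >= 0.2 and z < 0.4' for the next slab. gmx hbond itself 
has no slab option, and slab membership changes frame to frame, so treat the 
result as an approximation.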

> On Jan 6, 2019, at 11:01 AM, rose rahmani  wrote:
> 
> hi,
> 
> I want to know the number of hydrogen bonds of amino acid with water in
> different distances above surface. for example in first 0.2nm, in second
> 0.2 nm(0.2-0.4 nm) above surface. how can i do it by gmx hbond? I couldn't
> find any proper option for that.
> 
> best
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] AMD 32 core TR

2019-01-03 Thread paul buscemi
Tamas,  thanks for the response.

In previous posts I mentioned using a single gtx 1080ti; sorry for not making it 
clear in the last post.
 
 On the 8 core AMD and an Intel 6 core I am running Cuda 10 with Gromacs 18.3 
with no issues.  I believe the larger factor in the slowness of the 32 core was 
in having the runtime Cuda 7 with the cuda 10 drivers.  On the 8 core, runtime 
cuda 9.1 and cuda 10 drivers work well together  - all with Gromacs 18.3.  Now 
with  Gromacs v19 , Cuda 10 and the 410 nvidia drivers, the 8 core and 32 core 
systems seem quite content.

I have been tracing results from the log, and you are correct in what it can 
tell you.  It was the log file that actually brought my attention to the Cuda 7 
runtime issue. Also the PP PME distributions were noted with the ntomp/ntmpi 
arrangements. I have been experimenting with those as suggested in the Gromacs 
acceleration hints.

By 10% I meant that the 32 core unit ( in my hands ) ran 10% faster  in ns/day 
than the 8 core AMD using the same model system and the same 1080ti GPU.  
Gromacs points out that  150k to 300k atom systems are on the rather small side 
and so not to expect tremendous differences from the CPU.  The reason for using 
the 32 core is the eventual addition of a second GPU and the subsequent 
distribution of threads.

With a little OC and tweaking of the fourier spacing and vdw cutoffs in the npt 
I edged the 137k atom ADH model to 57 ns/day, but this falls short of the 
Exxact corp benchmarks of 80-90 ns/d -- assuming they are using a 1080ti.  
Schrodinger’s Maestro - with the 8 core AMD and 1080ti - runs a 300k membrane 
model at about 15 ns/d but a 60k atom model at 150 ns/day, implying 30 ns/day 
for 300k atoms. In general, if I can indeed maintain 20-25 ns/day for 300k 
atoms I’d be satisfied.  The original posts were made because I was frustrated 
seeing 6 to 8 ns/d with the 32core machine and the 8 core was producing 20 
ns/day.   As I mentioned the wounds were self inflicted  with the installation 
of Cuda runtime 7 and at one point compilation with g++-5.   As far as I am 
concerned it’s imperative that the latest drivers and Gromacs versions be used 
or at least the same  genre of drivers and versions be assembled.

Again, I’d like to point out that in using four different machines, 4 different 
Intel and AMD  CPU’s, 5 different MBs,  5 different GPU’s, now 4 progressive 
versions of Gromacs, and model systems of 200-300 k particles, I’ve not run 
across a single problem associated with the software or hardware per se; the 
problems were rather caused by my models or my compilation methods. 

Hope this addresses your questions and helps any other users contemplating 
using a Ryzen TR.

Paul



> On Jan 3, 2019, at 2:09 PM, Tamas Hegedus  wrote:
> 
> Please provide more information.
> 
> If you use gmx 2018 then I think that gmx limits the gcc version to 6 and not 
> cuda 10.
> 
> You did not specify what type of and how many GPUs you use.
> 
> In addition, the choice of gmx for distributing computation could be also 
> informative - you find this info in the log file.
> 
> It is also not clear what you mean by 10% improvement: 8 ns/day to 26 ns/day 
> are the only numbers, but that corresponds to 3x faster simulations, not 1.1x.
> 
> In addition, I think if you have 49.5 ns/day for 137K atoms, then 26 ns/day 
> seems to be ok for 300K.
> 
> Bests, Tamas
> 
> 
> On 1/3/19 6:11 PM, pbusc...@q.com wrote:
>> Dear users,
>> 
>>  
>> I had trouble getting suitable performance from an AMD 32 core TR. By
>> updating all the cuda drivers and runtime to v10, using gcc/g++-6 (from v5
>> -- I did try gcc-7, but Cuda 10 did not appreciate the attempt), and in
>> particular removing the CUDA v7 runtime, I was able to improve a 300k atom
>> nvt run from 8 ns/day to 26 ns/day. I replicated as far as possible the
>> Gromacs ADH benchmark with 137000 atoms-spc/e. I could achieve an md of
>> 49.5 ns/day. I do not have a firm grasp on whether this is respectable or
>> not ( comments? ), but it appears at least ok. The input command was simply
>> mdrun ADH.md -nb gpu -pme gpu ( and not using -ntomp or ntmpi, which in my
>> hands degraded performance ). To run the ADH I replaced the two ZN ions
>> in the ADH file from the PDB ( 2ieh.pdb ) with CA ions, since ZN was not
>> found in the OPLS database when using pdb2gmx.
>> 
>>  
>> The points being: (1) Gromacs appears reasonably happy with the 8 core and
>> 32 core Ryzen, although ( again in my hands ) for these smallish systems
>> there is only about a 10% improvement between the two; and (2), as often
>> suggested in the Gromacs literature, use the latest drivers possible.
>> 
>>  
>>  
> -- 
> Tamas Hegedus, PhD
> Senior Research Fellow
> MTA-SE Molecular Biophysics Research Group
> Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
> Semmelweis University  | fax:   (36) 1-266 6656
> Tuzolto utca 37-47 | mailto:ta...@hegelab.org
> Budapest, 109

Re: [gmx-users] Surface Energy calculation of polymeric materials

2018-12-22 Thread paul buscemi
Consider using a test probe.  For instance, a micelle of DPPC or POPC may 
spread differently onto the various surfaces. Then contact area and/or 
interaction energy may supply the necessary information. You might get lucky 
with a contact angle.
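
If you go the interaction-energy route, a minimal sketch (Probe and Surface are 
illustrative energy-group names defined via an index file): put

   energygrps = Probe Surface

in the production mdp, then after the run

   gmx energy -f md.edr -o probe_surface.xvg

and pick Coul-SR:Probe-Surface and LJ-SR:Probe-Surface from the menu. With GPU 
runs the energy groups are not computed on the fly, so a gmx mdrun -rerun pass 
over the trajectory is the usual workaround.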

Paul
busce...@umn.edu

> On Dec 22, 2018, at 1:29 AM, David van der Spoel  wrote:
> 
> Den 2018-12-21 kl. 11:07, skrev Maria Luisa:
>> Dear users,
>> I did simulations with Gromacs on different polymeric materials in contact
>> with salt solutions of NaCl.
>> In particular I performed crystallization tests and now I'd like to find
>> an energy parameter that could justify different behavior of systems
>> studied in nucleation time and also in crystallization growth.
>> What kind of calculation or command do you suggest to me? In particular
>> I'd like to identify an energy factor of the polymeric surfaces that
>> explains the changes seen in the simulations.
> The surface tension of the solution may be somewhat useful, although this is 
> an equilibrium property that may be hard to relate to the activation energies 
> that you are after. Note that nucleation time is also concentration dependent.
> 
> Finally, vacuum/liquid surface tensions of salt solutions are difficult to 
> get correct in simulations, and they may need application of polarizable 
> models.
>> Maria Luisa
>> Maria Luisa Perrotta
>> Ph.D Student, CNR-ITM
>> via P.Bucci, 87036 Rende (Cs)
>> Italy
>> email: ml.perro...@itm.cnr.it
> 
> 
> -- 
> David van der Spoel, Ph.D., Professor of Biology
> Head of Department, Cell & Molecular Biology, Uppsala University.
> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> http://www.icm.uu.se
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] error on opening gmx_mpi

2018-12-19 Thread paul buscemi
Shi,
Justin straightened me out regarding the command structure; I used "mpirun -np 
8 gmx_mpi mdrun -deffnm Run_file.nvt”.

But for the time being I’ve given up on two GPUs with the 32 core system. I am 
now just trying to make the single GPU work well.

Paul

> On Dec 19, 2018, at 5:51 AM, Shi Li  wrote:
> 
>> 
>> 
>> --
>> 
>> Message: 3
>> Date: Tue, 18 Dec 2018 21:51:41 -0600
>> From: paul buscemi 
>> To: "gmx-us...@gromacs.org" 
>> Subject: Re: [gmx-users] error on opening gmx_mpi
>> Message-ID: 
>> Content-Type: text/plain;charset=utf-8
>> 
>> Shi,  Thanks fo the note
>> 
>> Yes, somehow there is a version of gromacs 5 that is being summoned. I've 
>> got to clean up my act a bit.  
>> 
>> A suggestion was made to try the mpi version because of the CPU I am 
>> using. gmx v18.3 was installed, but I removed its build and built the 
>> 19.1 beta mpi version in a separate directory. Apparently there are some 
>> remnants being called. But v5 has never been installed on this 
>> particular computer, so I have no idea where gromacs-5.1.2 is coming from. 
>> It may be easier to purge everything and start again.  
>> 
>> Paul
> 
> Another way to solve this is to install the new version of GROMACS in a 
> prefix directory instead of using the default. Then make an individual file 
> that sources the new GMXRC in the prefixed directory and loads all the 
> modules you used to install the program, so that you won’t have the problem 
> of confusing different versions on your computer/cluster. 
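> 
> A minimal sketch of that (prefix illustrative):
> 
>   cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/sw/gromacs-2019 [other options]
>   make -j 8 && make install
>   source $HOME/sw/gromacs-2019/bin/GMXRC
> 
> after which gmx_mpi in that shell unambiguously resolves to the new build.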
> 
> Shi 
>> 
>>> On Dec 18, 2018, at 8:48 PM, Shi Li  wrote:
>>> 
>>>> 
>>>> Message: 3
>>>> Date: Tue, 18 Dec 2018 15:12:00 -0600
>>>> From: p buscemi 
>>>> To: "=?utf-8?Q?gmx-users=40gromacs.org?=" 
>>>> Subject: [gmx-users] error on opening gmx_mpi
>>>> Message-ID:
>>>><1545164001.local-b6243977-9380-v1.5.3-420ce...@getmailspring.com>
>>>> Content-Type: text/plain; charset="utf-8"
>>>> 
>>>> I installed the 2019 beta gmx_mpi with:
>>>> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on 
>>>> -DCMAKE_CXX_COMPILER=/usr/bin/g++-7 -DCMAKE_C_COMPILER=/usr/bin/gcc-7 
>>>> -DGMX_MPI=ON -DGMX_USE_OPENCL=ON
>>>> 
>>>> The install completed with no errors.
>>>> I need to take this step by step: in running minim. For minimization I used
>>>> mpirun -np 8 mdrun_mpi -deffnm RUNname.em
>>>> with the output:
>>>> :-) GROMACS - mdrun_mpi, VERSION 5.1.2 (-:
>>>> etc etc
>>>> GROMACS: mdrun_mpi, VERSION 5.1.2
>>>> Executable: /usr/bin/mdrun_mpi.openmpi
>>>> Data prefix: /usr
>>> 
>>> It looked like you didn't run the newly installed GROMACS. What is the 
>>> output when you input gmx_mpi? It should be version 2018 instead of 5.1.2. 
>>> Have you put the gromacs in your PATH or sourced the GMXRC?
>>> 
>>> Shi
>>> 
>>> 
>>>> Command line:
>>>> mdrun_mpi -deffnm PVP20k1.em
>>>> 
>>>> Back Off! I just backed up PVP20k1.em.log to ./#PVP20k1.em.log.2#
>>>> Running on 1 node with total 64 cores, 64 logical cores
>>>> Hardware detected on host rgb2 (the node of MPI rank 0):
>>>> CPU info:
>>>> Vendor: AuthenticAMD
>>>> Brand: AMD Ryzen Threadripper 2990WX 32-Core Processor
>>>> SIMD instructions most likely to fit this hardware: AVX_128_FMA
>>>> SIMD instructions selected at GROMACS compile time: SSE2
>>>> 
>>>> Compiled SIMD instructions: SSE2, GROMACS could use AVX_128_FMA on this 
>>>> machine, which is better
>>>> Reading file PVP20k1.em.tpr, VERSION 2018.4 (single precision)
>>>> ---
>>>> Program mdrun_mpi, VERSION 5.1.2
>>>> Source code file: 
>>>> /build/gromacs-z6bPBg/gromacs-5.1.2/src/gromacs/fileio/tpxio.c, line: 3345
>>>> 
>>>> Fatal error:
>>>> reading tpx file (PVP20k1.em.tpr) version 112 with version 103 program
>>>> For more information and tips for troubleshooting, please check the GROMACS
>>>> website at http://www.gromacs.org/Documentation/Errors
>>>> ---
>>>> 
>>>> Halting parallel program mdrun_mpi on rank 0 out of 8
>>

Re: [gmx-users] Install Gromacs from Debian/Ubuntu repository vs build from source

2018-12-19 Thread paul buscemi
In addition to Justin’s comments, the repository version is not built for 
GPU/CUDA use, and as such is good only for very small systems - so it loses one 
of the great advantages over other MD programs. It is not bad as an 
introduction to Gromacs, so do not be afraid of installing it, working with it, 
and later removing it to then install the source version. If you follow the 
install instructions for the source version, installation is quite smooth.
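
For reference, a minimal GPU-enabled source build is only a few lines (version 
and paths illustrative; the install guide linked below has the details):

   tar xfz gromacs-2019.tar.gz && cd gromacs-2019
   mkdir build && cd build
   cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
   make -j 8 && sudo make install
   source /usr/local/gromacs/bin/GMXRC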

Paul

> On Dec 19, 2018, at 5:58 PM, Zhang Shenqiu  wrote:
> 
> Dear Everyone,
> 
> I am a beginner to Gromacs, and found Gromacs can be installed with apt-get 
> install gromacs on Debian/Ubuntu. But I hesitate to use it because this 
> option is not listed or mentioned in the installation guide. 
> http://manual.gromacs.org/documentation/2018/install-guide/index.html
> 
> I wonder if I am at a disadvantage of using apt-get install, especially 
> regarding the CUDA functions. I have searched "The gromacs.org_gmx-users 
> Archives" from December 2018 to September 2016, and couldn't find any 
> discussion on it.
> 
> Many thanks,
> 
> Shenqiu
> 
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] error on opening gmx_mpi

2018-12-18 Thread paul buscemi
Shi, thanks for the note.

Yes, somehow there is a version of gromacs 5 that is being summoned. I’ve got 
to clean up my act a bit.  

A suggestion was made to try the mpi version because of the CPU I am using. 
gmx v18.3 was installed, but I removed its build and built the 19.1 beta mpi 
version in a separate directory. Apparently there are some remnants being 
called. But v5 has never been installed on this particular computer, so I have 
no idea where gromacs-5.1.2 is coming from. It may be easier to purge 
everything and start again.  
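
A quick way to see which binary a shell actually picks up (install path 
illustrative):

   which mdrun_mpi gmx_mpi
   gmx_mpi --version
   source /path/to/gromacs-2019/bin/GMXRC   # put the intended build first in PATH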

Paul

> On Dec 18, 2018, at 8:48 PM, Shi Li  wrote:
> 
>> 
>> Message: 3
>> Date: Tue, 18 Dec 2018 15:12:00 -0600
>> From: p buscemi 
>> To: "=?utf-8?Q?gmx-users=40gromacs.org?=" 
>> Subject: [gmx-users] error on opening gmx_mpi
>> Message-ID:
>>  <1545164001.local-b6243977-9380-v1.5.3-420ce...@getmailspring.com>
>> Content-Type: text/plain; charset="utf-8"
>> 
>> I installed the 2019 beta gmx_mpi with:
>> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on 
>> -DCMAKE_CXX_COMPILER=/usr/bin/g++-7 -DCMAKE_C_COMPILER=/usr/bin/gcc-7 
>> -DGMX_MPI=ON -DGMX_USE_OPENCL=ON
>> 
>> The install completed with no errors.
>> I need to take this step by step: in running minim. For minimization I used
>> mpirun -np 8 mdrun_mpi -deffnm RUNname.em
>> with the output:
>> :-) GROMACS - mdrun_mpi, VERSION 5.1.2 (-:
>> etc etc
>> GROMACS: mdrun_mpi, VERSION 5.1.2
>> Executable: /usr/bin/mdrun_mpi.openmpi
>> Data prefix: /usr
> 
> It looked like you didn’t run the newly installed GROMACS. What is the output 
> when you input gmx_mpi? It should be version 2018 instead of 5.1.2. 
> Have you put the gromacs in your PATH or sourced the GMXRC?
> 
> Shi
> 
> 
>> Command line:
>> mdrun_mpi -deffnm PVP20k1.em
>> 
>> Back Off! I just backed up PVP20k1.em.log to ./#PVP20k1.em.log.2#
>> Running on 1 node with total 64 cores, 64 logical cores
>> Hardware detected on host rgb2 (the node of MPI rank 0):
>> CPU info:
>> Vendor: AuthenticAMD
>> Brand: AMD Ryzen Threadripper 2990WX 32-Core Processor
>> SIMD instructions most likely to fit this hardware: AVX_128_FMA
>> SIMD instructions selected at GROMACS compile time: SSE2
>> 
>> Compiled SIMD instructions: SSE2, GROMACS could use AVX_128_FMA on this 
>> machine, which is better
>> Reading file PVP20k1.em.tpr, VERSION 2018.4 (single precision)
>> ---
>> Program mdrun_mpi, VERSION 5.1.2
>> Source code file: 
>> /build/gromacs-z6bPBg/gromacs-5.1.2/src/gromacs/fileio/tpxio.c, line: 3345
>> 
>> Fatal error:
>> reading tpx file (PVP20k1.em.tpr) version 112 with version 103 program
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> 
>> Halting parallel program mdrun_mpi on rank 0 out of 8
>> --
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>> 
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> 
>> I see the fatal error, but minim.mdp was used while in gmx_mpi - this is not 
>> covered in common errors.
>> and I see the note on AVX_128_FMA, but that can wait. Is it the version of 
>> the MPI files ( 103 ) that is at fault?
>> 
>> I need to create the proper tpr to continue
>> 
>> 
>> --
>> 
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] error on opening gmx_mpi

2018-12-18 Thread paul buscemi
Justin,  thank  you very much  for the rapid response.

I read that the same way, but I’m a bit confused. Is "mdrun_mpi version 5.1.2 
in /usr/bin” the result of an install of v5 of gromacs ( I have not used that 
version ) or the result of an install of a v5 of mpi? I.e., I am not sure where 
that came from. Do I simply need to reinstall the beta version?

I do plan on reporting all of this when it’s up and running !

Regards
Paul

> On Dec 18, 2018, at 3:15 PM, Justin Lemkul  wrote:
> 
> On Tue, Dec 18, 2018 at 4:12 PM p buscemi  wrote:
> 

>> I installed the 2019 beta gmx_mpi with:
>> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
>> -DCMAKE_CXX_COMPILER=/usr/bin/g++-7 -DCMAKE_C_COMPILER=/usr/bin/gcc-7
>> -DGMX_MPI=ON -DGMX_USE_OPENCL=ON
>> 
>> The install completed with no errors.
>> I need to take this step by step: in running minim. For minimization I used
>> mpirun -np 8 mdrun_mpi -deffnm RUNname.em
>> with the output:
>> :-) GROMACS - mdrun_mpi, VERSION 5.1.2 (-:
>> etc etc
>> GROMACS: mdrun_mpi, VERSION 5.1.2
>> Executable: /usr/bin/mdrun_mpi.openmpi
>> Data prefix: /usr
>> Command line:
>> mdrun_mpi -deffnm PVP20k1.em
>> 
>> Back Off! I just backed up PVP20k1.em.log to ./#PVP20k1.em.log.2#
>> Running on 1 node with total 64 cores, 64 logical cores
>> Hardware detected on host rgb2 (the node of MPI rank 0):
>> CPU info:
>> Vendor: AuthenticAMD
>> Brand: AMD Ryzen Threadripper 2990WX 32-Core Processor
>> SIMD instructions most likely to fit this hardware: AVX_128_FMA
>> SIMD instructions selected at GROMACS compile time: SSE2
>> 
>> Compiled SIMD instructions: SSE2, GROMACS could use AVX_128_FMA on this
>> machine, which is better
>> Reading file PVP20k1.em.tpr, VERSION 2018.4 (single precision)
>> ---
>> Program mdrun_mpi, VERSION 5.1.2
>> Source code file:
>> /build/gromacs-z6bPBg/gromacs-5.1.2/src/gromacs/fileio/tpxio.c, line: 3345
>> 
>> Fatal error:
>> reading tpx file (PVP20k1.em.tpr) version 112 with version 103 program
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> ---
>> 
>> Halting parallel program mdrun_mpi on rank 0 out of 8
>> --
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>> 
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> 
>> I see the fatal error, but minim.mdp was used while in gmx_mpi - this is
>> not covered in common errors.
>> and I see the note on AVX_128_FMA, but that can wait. Is it the version of
>> the MPI files ( 103 ) that is at fault?
>> 
>> 
> There's nothing wrong with the .tpr file, but you're not using the mdrun
> binary you want to be. You've installed the 2019 beta but then you're using
> mdrun_mpi version 5.1.2 in /usr/bin. You should be calling the same GROMACS
> version for everything.
> 
> -Justin
> 
> -- 
> 
> ==
> 
> Justin A. Lemkul, Ph.D.
> 
> Assistant Professor
> 
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
> 
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> 
> ==
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Area compressibility modulus GMX

2018-12-12 Thread paul buscemi
John

Ain’t nothin’ silly about analyzing membranes. It’s an art form.

Have you taken a look at MEMBPLUGIN 
( https://sourceforge.net/p/membplugin/wiki/Home/ )? It provides trajectory 
values for, among other things, thickness and area/lipid.

I’ve not read the paper, but the “average of the squared fluctuation” certainly 
has the appearance of a variance. Your procedure seems reasonable; just check 
against the literature that the DPPC values reach those found at equilibrium. 
In my simulations of DPPC, a 250x250 sq Ang bilayer took ~100 ns, far longer 
than I initially expected.
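
In practice that would be (file name illustrative; double-check the prefactor 
against the papers you cite):

   gmx analyze -f area_per_lipid.xvg

take the average <a> and standard deviation sigma it prints, then

   K_A = kB * T * <a> / ( N * sigma^2 )

with N the number of lipids per leaflet; 1 J/m^2 = 1 N/m = 1000 dyn/cm for the 
final conversion.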

Paul
UMN BICB

> On Dec 11, 2018, at 11:23 AM, John Whittaker 
>  wrote:
> 
> Hi all,
> 
> I have a weird, probably very basic question to ask and I hope it is
> appropriate for the mailing list.
> 
> I am trying to reproduce the pure DPPC bilayer data found in J. Chem.
> Theory Comput., 2016, 12 (1), pp 405–413 (10.1021/acs.jctc.5b00935) using
> the recommended protocol given in the paper.
> 
> I have calculated the area per lipid for my system and have an average
> value and am now attempting to calculate the area compressibility modulus,
> K, using the formula given in the paper in the subsection "Analysis"
> (which itself is taken from https://doi.org/10.1063/1.479313).
> 
> I am a bit confused by the wording when the authors describe the value in
> the denominator, . The paper calls this value "the average of
> the squared fluctuation of the area/lipid". I'm probably being silly, but
> am I right to assume that this is the variance of the area/lipid?
> 
> As in, to get this value I can:
> 
> 1) Use gmx analyze to find the standard deviation of the area/lipid over
> the course of my trajectory
> 
> 2) Square the standard deviation to find the variance of the area/lipid
> 
> Then, it's a straightforward process of plugging in and making sure
> everything comes out in dyn/cm.
> 
> Could anyone tell me if my process is correct? Thanks a lot and my
> apologies if this is too specific of a question for the mailing list!
> 
> John
> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] using dual CPU's

2018-12-12 Thread paul buscemi
Carsten, thanks for the response.

My mistake - it was the GTX 980 from fig. 3 ... I was recalling from memory. 
I assume that similar results would be achieved with the 1060’s.

No, I did not reset; my results were a compilation of 4-5 runs, each under 
slightly different conditions, on two computers, all with the same outcome - 
that is, ugh! Mark had asked for the log outputs, indicating some useful 
conclusions could be drawn from them.

Paul

> On Dec 12, 2018, at 9:02 AM, Kutzner, Carsten  wrote:
> 
> Hi Paul,
> 
>> On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
>> 
>> Dear users  ( one more try ) 
>> 
>> I am trying to use 2 GPU cards to improve modeling speed.  The computer 
>> described in the log files is used  to iron out models and am using to learn 
>> how to use two GPU cards before purchasing two new RTX 2080 ti's.  The CPU 
>> is a 8 core 16 thread AMD and the GPU's are two GTX 1060; there are 5 
>> atoms in the model
>> 
>> Using ntmpi and ntomp settings of 1:16, auto ( 4:4 ) and 2:8 ( and any 
>> other combination factoring to 16 ), the ratings for ns/day are approx. 
>> 12-16, and for any other setting ~6-8, i.e. adding a card cuts efficiency by 
>> half. The average load imbalance is less than 3.4% for the multicard setup.
>> 
>> I am not at this point trying to maximize efficiency, but only to show some 
>> improvement going from one to two cards.   According to a 2015 paper from 
>> the Gromacs group  “ Best bang for your buck: GPU nodes for GROMACS 
>> biomolecular simulations “  I should expect maybe (at best )  50% 
>> improvement for 90k atoms ( with  2x  GTX 970 )
> We did not benchmark GTX 970 in that publication.
> 
> But from Table 6 you can see that we also had quite a few cases with our 80k 
> benchmark
> where going from 1 to 2 GPUs, simulation speed did not increase much: E.g. 
> for the
> E5-2670v2 going from one to 2 GTX 980 GPUs led to an increase of 10 percent.
> 
> Did you use counter resetting for the benchmarks?
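> 
> (For reference, something like
> 
>   gmx mdrun -deffnm bench -resethway
> 
> or -resetstep N resets the performance counters mid-run, so the reported 
> ns/day excludes the startup load balancing and PME tuning.)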
> 
> Carsten
> 
> 
>> What bothers me in my initial attempts is that my simulations became slower 
>> by adding the second GPU - it is frustrating to say the least. It's like 
>> swimming backwards.
>> 
>> I know I am missing - as a minimum - the correct setup for mdrun, and 
>> suggestions would be welcome
>> 
>> The output from the last section of the log files is included below.
>> 
>> === ntpmi  1  ntomp:16 ==
>> 
>>  <==  ###  ==>
>>  <  A V E R A G E S  >
>>  <==  ###  ==>
>> 
>>  Statistics over 29301 steps using 294 frames
>> 
>>  Energies (kJ/mol)
>> Angle   G96AngleProper Dih.  Improper Dih.  LJ-14
>>   9.17533e+052.27874e+046.64128e+042.31214e+028.34971e+04
>>Coulomb-14LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
>>  -2.84567e+07   -1.43385e+05   -2.04658e+031.33320e+071.59914e+05
>> Position Rest.  PotentialKinetic En.   Total EnergyTemperature
>>   7.79893e+01   -1.40196e+071.88467e+05   -1.38312e+073.00376e+02
>> Pres. DC (bar) Pressure (bar)   Constr. rmsd
>>  -2.88685e+003.75436e+010.0e+00
>> 
>>  Total Virial (kJ/mol)
>>   5.27555e+04   -4.87626e+021.86144e+02
>>  -4.87648e+024.04479e+04   -1.91959e+02
>>   1.86177e+02   -1.91957e+025.45671e+04
>> 
>>  Pressure (bar)
>>   2.22202e+011.27887e+00   -4.71738e-01
>>   1.27893e+006.48135e+015.12638e-01
>>  -4.71830e-015.12632e-012.55971e+01
>> 
>>T-PDMS T-VMOS
>>   2.99822e+023.32834e+02
>> 
>> 
>>  M E G A - F L O P S   A C C O U N T I N G
>> 
>> NB=Group-cutoff nonbonded kernelsNxN=N-by-N cluster Verlet kernels
>> RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
>> W3=SPC/TIP3p  W4=TIP4p (single or pairs)
>> V&F=Potential and force  V=Potential only  F=Force only
>> 
>> Computing:   M-Number M-Flops  % Flops
>> -
>> Pair Search distance check2349.753264   21147.779 0.0
>> NxN Ewald Elec. + LJ [F]   1771584.591744   116924583.05596.6
>> NxN Ewald Elec. + LJ [V&F]   17953.091840 1920980.827 1.6
>> 1,4 nonbonded interactions5278.575150  475071.763 0.4
>> Shift-X 22.173480 133.041 0.0
>> Angles4178.908620  702056.648 0.6
>> Propers879.909030  201499.168 0.2
>> Impropers5.2741801097.029 0.0
>> Pos. Restr. 42.1934402109.672 0.0
>> Virial  22.186710 399.361 0.0
>> Update2209.881420   68506.324 0.1
>> Stop-CM 22.248900

Re: [gmx-users] using dual CPU's

2018-12-11 Thread paul buscemi
Szilard,

Thank you very much for the information, and I apologize for how the text 
appeared - internet demons at work.

The computer described in the log files is a basic test rig which we use to 
iron out models. The workhorse is a many-core AMD with now one, and hopefully 
soon to be two, 2080ti’s. It will have to handle several 100k particles, and at 
the moment I do not think the simulation could be divided. These are 
essentially multi-component ligand adsorptions from solution onto a substrate, 
including evaporation of the solvent.

I saw from a 2015 paper from your group, “Best bang for your buck: GPU nodes 
for GROMACS biomolecular simulations”, that I should expect maybe a 50% 
improvement for 90k atoms ( with 2x GTX 970 ). What bothered me in my initial 
attempts was that my simulations became slower by adding the second GPU - it 
was frustrating to say the least.

I’ll give your suggestions a good workout and report on the results when I 
hack it out.

Best
Paul

> On Dec 11, 2018, at 12:14 PM, Szilárd Páll  wrote:
> 
> Without having read all details (partly due to the hard to read log
> files), what I can certainly recommend is: unless you really need to,
> avoid running single simulations with only a few 10s of thousands of
> atoms across multiple GPUs. You'll be _much_ better off using your
> limited resources by running a few independent runs concurrently. If
> you really need to get maximum single-run throughput, please check
> previous discussions on the list on my recommendations.
> 
> Briefly, what you can try for 2 GPUs is (do compare against the
> single-GPU runs to see if it's worth it):
> mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gputasks TASKSTRING
> where typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
> TASKSTRING should have N digits with either N-1 zeros and the last 1
> or N-2 zeros and the last two 1, i.e. for N = 4 either 0001 or 0011.
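> 
> A concrete instance of the first variant (N = 4: three PP ranks on GPU 0, 
> the PME rank on GPU 1) would be something like:
> 
>   gmx mdrun -deffnm run -ntmpi 4 -npme 1 -nb gpu -pme gpu -gputasks 0001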
> 
> I suggest to share files using a cloud storage service like google
> drive, dropbox, etc. or a dedicated text sharing service like
> paste.ee, pastebin.com, or termbin.com -- especially the latter is
> very handy for those who don't want to leave the command line just to
> upload a/several files for sharing (i.e. try: echo "foobar" | nc
> termbin.com 9999)
> 
> --
> Szilárd
> On Tue, Dec 11, 2018 at 2:44 AM paul buscemi  wrote:
>> 
>> 
>> 
>>> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
>>> 
>>> 
>>> Mark, attached are the tail ends of three  log files for
>>> the same system but run on an AMD 8  Core/16 Thread 2700x, 16G ram
>>> In summary:
>>> the rates for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) are 12.0, 
>>> 8.8, and 6.0 ns/day.
>>> Clearly, I do not have a handle on using 2 GPU's
>>> 
>>> Thank you again, and I'll keep probing the web for more understanding.
>>> I’ve probably sent too much of the log, let me know if this is the case.
>> A better way to share files - where is that, friend?
>>> 
>>> Paul
>> --
>> Gromacs Users mailing list
>> 
>> * Please search the archive at 
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
>> mail to gmx-users-requ...@gromacs.org.
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] using dual CPU's

2018-12-10 Thread paul buscemi


> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
> 
> 
> Mark, attached are the tail ends of three  log files for
>  the same system but run on an AMD 8  Core/16 Thread 2700x, 16G ram
> In summary:
> the rates for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) are 12.0, 
> 8.8, and 6.0 ns/day.
> Clearly, I do not have a handle on using 2 GPU's
> 
> Thank you again, and I'll keep probing the web for more understanding.
> I’ve probably sent too much of the log, let me know if this is the case.
A better way to share files - where is that, friend?
> 
> Paul

Re: [gmx-users] using dual CPU's

2018-12-10 Thread paul buscemi
> ... the end of
> the log file can be seen.
>
> Mark
> On Tue., 11 Dec. 2018, 01:25 p buscemi,  wrote:
> > Thank you, Mark, for the prompt response. I realize the limitations of the
> > system ( it's over 8 years old ), but I did not expect the speed to decrease by 50%
> > with 12 available threads ! No combination of ntomp, ntmpi could raise
> > ns/day above 4 with two GPUs, vs 6 with one GPU.
> >
> > This is actually a learning/practice run for a new build - an AMD 4.2 GHz
> > 32 core TR, 64G ram. In this case I am trying to decide upon either a RTX
> > 2080 ti or two GTX 1080 TI. I'd prefer the two 1080's for the 7000 cores vs
> > the 4500 cores of the 2080. The model systems will have ~ million particles
> > and need the speed. But this is a major expense so I need to get it right.
> > I'll do as you suggest and report the results for both systems and I
> > really appreciate the assist.
> > Paul
> > UMN, BICB
> >
> > On Dec 9 2018, at 4:32 pm, paul buscemi  wrote:
> > >
> > > Dear Users,
> > > I have good luck using a single GPU with the basic setup. However in
> >
> > going from one gtx 1060 to a system with two - 50,000 atoms - the rate
> > decreases from 10 ns/day to 5 or worse. The system models a ligand, solvent
> > ( water ) and a lipid membrane
> > > the cpu is a 6 core intel i7 970( 12 threads ) , 750W PS, 16G Ram.
> > > with the basic command "mdrun" I get:
> > > Back Off! I just backed up sys.nvt.log to ./#.sys.nvt.log.10#
> > > Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> > > Changing nstlist from 10 to 100, rlist from 1 to 1
> > >
> > > Using 2 MPI threads
> > > Using 6 OpenMP threads per tMPI thread
> > >
> > > On host I7 2 GPUs auto-selected for this run.
> > > Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> > > PP:0,PP:1
> > >
> > > Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.10#
> > > Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.10#
> > > NOTE: DLB will not turn on during the first phase of PME tuning
> > > starting mdrun 'SR-TA'
> > > 10 steps, 100.0 ps.
> > > and ending with ^C
> > >
> > > Received the INT signal, stopping within 200 steps
> > > Dynamic load balancing report:
> > > DLB was locked at the end of the run due to unfinished PP-PME balancing.
> > > Average load imbalance: 0.7%.
> > > The balanceable part of the MD step is 46%, load imbalance is computed
> >
> > from this.
> > > Part of the total run time spent waiting due to load imbalance: 0.3%.
> > >
> > >
> > > Core t (s) Wall t (s) (%)
> > > Time: 543.475 45.290 1200.0
> > > (ns/day) (hour/ns)
> > > Performance: 1.719 13.963 before DLB is turned on
> > >
> > > Very poor performance. I have been following - or trying to follow -
> > "Performance Tuning and Optimization of GROMACS" by M. Abraham and R. Apostolov
> > (2016), but have not yet cracked the code.
> > > 
> > > gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 3 -gpu_id 01 -pin on
> > >
> > >
> > > Back Off! I just backed up SR.sys.nvt.log to ./#SR.sys.nvt.log.13#
> > > Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> > > Changing nstlist from 10 to 100, rlist from 1 to 1
> > >
> > > Using 2 MPI threads
> > > Using 3 OpenMP threads per tMPI thread
> > >
> > > On host I7 2 GPUs auto-selected for this run.
> > > Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> > > PP:0,PP:1
> > >
> > > Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.13#
> > > Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.13#
> > > NOTE: DLB will not turn on during the first phase of PME tuning
> > > starting mdrun 'SR-TA'
> > > 10 steps, 100.0 ps.
> > >
> > > NOTE: DLB can now turn on, when beneficial
> > > ^C
> > >
> > > Received the INT signal, stopping within 200 steps
> > > Dynamic load balancing report:
> > > DLB was off during the run due to low measured imbalance.
> > > Average load imbalance: 0.7%.
> > > The balanceable part of the MD step is 46%, load imbalance is computed
> >
> > from this.
> > > Part of the total run time spent waiting due to load imbalance: 0.3%.
> > >
> > >
> > > Core t (s) Wall t (s) (%)
> > >

[gmx-users] using dual CPU's

2018-12-09 Thread paul buscemi
Dear Users,

I have good luck using a single GPU with the basic setup. However in going 
from one gtx 1060 to a system with two - 50,000 atoms - the rate decreases from 
10 ns/day to 5 or worse. The system models a ligand, solvent ( water ) and a 
lipid membrane.
the cpu is a 6 core intel i7 970( 12 threads ) , 750W PS, 16G Ram.
with the basic command "mdrun" I get:
Back Off! I just backed up sys.nvt.log to ./#.sys.nvt.log.10#
Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1

Using 2 MPI threads
Using 6 OpenMP threads per tMPI thread

On host I7 2 GPUs auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:1

Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.10#
Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.10#
NOTE: DLB will not turn on during the first phase of PME tuning
starting mdrun 'SR-TA'
10 steps, 100.0 ps.
and ending with ^C

Received the INT signal, stopping within 200 steps

Dynamic load balancing report:
DLB was locked at the end of the run due to unfinished PP-PME balancing.
Average load imbalance: 0.7%.
The balanceable part of the MD step is 46%, load imbalance is computed from 
this.
Part of the total run time spent waiting due to load imbalance: 0.3%.

Core t (s) Wall t (s) (%)
Time: 543.475 45.290 1200.0
(ns/day) (hour/ns)
Performance: 1.719 13.963 before DLB is turned on

Very poor performance. I have been following - or trying to follow - 
"Performance Tuning and Optimization of GROMACS" by M. Abraham and R. Apostolov 
(2016), but have not yet cracked the code.

gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 3 -gpu_id 01 -pin on

Back Off! I just backed up SR.sys.nvt.log to ./#SR.sys.nvt.log.13#
Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1

Using 2 MPI threads
Using 3 OpenMP threads per tMPI thread

On host I7 2 GPUs auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:1

Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.13#
Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.13#
NOTE: DLB will not turn on during the first phase of PME tuning
starting mdrun 'SR-TA'
10 steps, 100.0 ps.

NOTE: DLB can now turn on, when beneficial
^C

Received the INT signal, stopping within 200 steps

Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 0.7%.
The balanceable part of the MD step is 46%, load imbalance is computed from 
this.
Part of the total run time spent waiting due to load imbalance: 0.3%.

Core t (s) Wall t (s) (%)
Time: 953.837 158.973 600.0
(ns/day) (hour/ns)
Performance: 2.935 8.176


the beginning of the log file is
GROMACS version: 2018.3
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: SSE4.1
FFT library: fftw-3.3.8-sse2
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
Built on: 2018-10-19 21:26:38
Built by: pb@Q4 [CMAKE]
Build OS/arch: Linux 4.15.0-20-generic x86_64
Build CPU vendor: Intel
Build CPU brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
Build CPU family: 6 Model: 44 Stepping: 2
Build CPU features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr 
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 
sse4.2 ssse3
C compiler: /usr/bin/gcc-6 GNU 6.4.0
C compiler flags: -msse4.1 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast
C++ compiler: /usr/bin/g++-6 GNU 6.4.0
C++ compiler flags: -msse4.1 -std=c++11 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast
CUDA compiler: /usr/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright 
(c) 2005-2017 NVIDIA Corporation;Built on Fri_Nov__3_21:07:56_CDT_2017;Cuda 
compilation tools, release 9.1, V9.1.85
CUDA compiler 
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;
 ;-msse4.1;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver: 9.10
CUDA runtime: 9.10

Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs
Hardware detected:
CPU info:
Vendor: Intel
Brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
Family: 6 Model: 44 Stepping: 2
Features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr nonstop_tsc pcid 
pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
Hardware topology: Only logical processor count
GPU info:
Number of GPUs detected: 2
#0: NVIDIA GeForce GTX 1060 6GB, compute cap.: 6.1, ECC: no,

Re: [gmx-users] Building Gromacs

2018-11-12 Thread paul buscemi
This may be entirely misleading, but I have Gromacs 2018.3 loaded onto an SSD.  
This seems to be quite portable between machines I have access to ( during some 
builds ) ranging from an i5 4-core, i7 8-core, i7 12-core and an AMD 16-core/32-thread 
- all with expected scaling in speed and all cores used during MD. So 
I am not sure why you must rebuild.

Paul

> On Nov 12, 2018, at 3:45 PM, Jasper Jordan  wrote:
> 
> I was sure that a month ago when I started down the path of building and
> making Gromacs available for our user community, that I read a statement
> that whenever there is a hardware change, you need to rebuild Gromacs. I
> took this quite literally. So I assume that if hardware isn't identical
> (make, model, etc) a rebuild is required.
> 
> Well, now I can't find the statement, and we're attempting to deploy to AWS
> where we might have 1 core, or 8, and we may have no GPUs, or 1, 8, or 16
> per instance. So people are asking why I need to rebuild, and I can't give
> them an answer because I can't find the statement any more.
> 
> Can somebody tell me if it is required? Was it required? Or is it just a
> misremembery that I need to just discard?



[gmx-users] And Ryzen 8 core/16 thread use

2018-10-25 Thread paul buscemi


Dear Users,

Does anyone have concerns/comments ( other than heat output of the processor )  
in using the relatively new AMD Ryzen 7 2700 with Gromacs 18.3 or v19  and 
gromacs' ability to make use of the 16 threads along with a gtx1080ti ?

thanks
Paul


Re: [gmx-users] joining proteins at terminals

2018-10-10 Thread paul buscemi
thanks for the note. Yes I meant a single chain name.  
The minimization for those few bonds should not be too arduous.  Hope it works 
out.
Paul

> On Oct 10, 2018, at 1:22 PM, Kit Sang Chu  wrote:
> 
> Hi Paul,
> 
> Giving the same name you mean same "chain name"? I tried but the problem
> persists. For my specific case I cannot add extra amino acids. I just have
> to minimize the energy to pull them closer.
> 
> Regards,
> Simon Kit Sang Chu
> Ph.D. student
> Biophysics Graduate Group
> University of California Davis
> 
> 
> On Wed, Oct 10, 2018 at 3:48 AM paul buscemi  wrote:
> 
>> Try giving residues in the macromolecular structure the same name … or
>> possibly get rid of the 10A gap by using additional amino acids like
>> gly-gly-gly ?
>> 
>> 
>>> On Oct 9, 2018, at 5:28 PM, Kit Sang Chu  wrote:
>>> 
>>> Hi everyone,
>>> 
>>> I have a macromolecular structure which contains multiple copies of
>>> proteins. Initially, there are separate monomers and now I have to join
>>> some of them through N/C-terminals.
>>> 
>>> However, editconf fails to recognize the merging part, possibly because
>>> they are separated by ~ 10A. All monomers supposed to be merged are given
>>> the same chain name in pdb. All hydrogen and terminal oxygens are
>> stripped
>>> out.
>>> 
>>> Are there any criteria specifically for GROMACS to recognize for merging?
>>> Is there any specification / flag to force merging terminals?
>>> 
>>> Thanks,
>>> Simon
>>> --
>>> Gromacs Users mailing list
>>> 
>>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>>> 
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>> 
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>> 
>> --
>> Gromacs Users mailing list
>> 
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] joining proteins at terminals

2018-10-10 Thread paul buscemi
Try giving residues in the macromolecular structure the same name … or possibly 
get rid of the 10A gap by using additional amino acids like gly-gly-gly ?


> On Oct 9, 2018, at 5:28 PM, Kit Sang Chu  wrote:
> 
> Hi everyone,
> 
> I have a macromolecular structure which contains multiple copies of
> proteins. Initially, there are separate monomers and now I have to join
> some of them through N/C-terminals.
> 
> However, editconf fails to recognize the merging part, possibly because
> they are separated by ~ 10A. All monomers supposed to be merged are given
> the same chain name in pdb. All hydrogen and terminal oxygens are stripped
> out.
> 
> Are there any criteria specifically for GROMACS to recognize for merging?
> Is there any specification / flag to force merging terminals?
> 
> Thanks,
> Simon


Re: [gmx-users] topology problem with x2top

2018-10-04 Thread paul buscemi


> On Oct 4, 2018, at 8:30 AM, Paul Buscemi  wrote:
> 
> 
> 
> Maria
> 
> Regarding PVDF  (  in 54a7 ff) here is  some information that may be useful:

> ===  Input command to generate a 15 mer ===
> gb@RGB ~/Desktop/PVDF $ gmx x2top -f pdb/PVDF15.pdb -o pvdf.top -ff select
>:-) GROMACS - gmx x2top, 2018 (-:
> 
>   
> Command line:
>   gmx x2top -f pdb/PVDF15.pdb -o pvdf.top -ff select
> 
> 
> Select the Force Field:
> From current directory:
>  1: GROMOS96 54a7 force field (Eur. Biophys. J. (2011), 40, 843-856, DOI: 
> 10.1007/s00249-011-0700-9)
> From '/usr/local/gromacs/share/gromacs/top':
>  2: AMBER03 protein, nucleic AMBER94 (Duan et al., J. Comp. Chem. 24, 
> 1999-2012, 2003)
>  3: AMBER94 force field (Cornell et al., JACS 117, 5179-5197, 1995)
>  4……
…...
> 1
> 
> There are 4 different atom types in your sample
> Generating angles and dihedrals from bonds...
> Before cleaning: 279 pairs
> Before cleaning: 279 dihedrals
> There are 31 proper dihedrals, 0 impropers, 192 angles,
> 279 pairs, 97 bonds and 98 atoms
> Total charge is -0.0759998, total mass is 990.64
> 
> 
> 
> == part of the itp from ATB used to obtain charges 
> and types from a 3-mer 
> 
> [ moleculetype ]
> ; Name   nrexcl
> 37YP 3
> [ atoms ]
> ;  nr  type  resnr  resid  atom  cgnr   charge      mass    total_charge
>     1    HC      1   37YP   H26     1    0.155     1.0080
>     2     C      1   37YP   C23     1   -0.493    12.0110
>     3    HC      1   37YP   H24     1    0.155     1.0080
>     4    HC      1   37YP   H25     1    0.155     1.0080
>     5     C      1   37YP   C20     1    0.530    12.0110
>     6     F      1   37YP   F21     1   -0.237    18.9984
>     7     F      1   37YP   F22     1   -0.237    18.9984
>     8     C      1   37YP   C17     1   -0.410    12.0110
>     9    HC      1   37YP   H18     1    0.157     1.0080
>    10    HC      1   37YP   H19     1    0.157     1.0080   ; -0.068
>    11     C      1   37YP   C14     2    0.627    12.0110
>    12     F      1   37YP   F15     2   -0.236    18.9984
>    13     F      1   37YP   F16     2   -0.236    18.9984
>    14     C      1   37YP   C11     2   -0.494    12.0110
>    15    HC      1   37YP   H12     2    0.159     1.0080
>    16    HC      1   37YP   H13     2    0.159     1.0080
>    17     C      1   37YP    C8     2    0.475    12.0110
>    18     F      1   37YP    F9     2   -0.227    18.9984
>    19     F      1   37YP   F10     2   -0.227    18.9984   ;  0.000
>    20     C      1   37YP    C5     3    0.017    12.0110
>    21    HC      1   37YP    H6     3    0.044     1.0080
>    22    HC      1   37YP    H7     3    0.044     1.0080
>    23     C      1   37YP    C2     3   -0.280    12.0110
>    24    HC      1   37YP    H1     3    0.081     1.0080
>    25    HC      1   37YP    H3     3    0.081     1.0080
>    26    HC      1   37YP    H4     3    0.081     1.0080   ;  0.06
> 
> 
>  below - the n2t used for several polymers - added to the Gromos 54a7 ff 
> directory = 
> ===  the last two lines are for PVDF  ===
> ===  no guarantee at all that the charges are correct/optimal/best 
> ===  or even good  ===
>  
> 
> 
> ; elem  type   charge    mass     #bonds  bonded element / ref. length pairs
>   H     HC      .019     1.008    1       C 0.108
>   C     C      -.053    12.035    4       C 0.154   H 0.108   H 0.108   H 0.108   ; methyl
>   C     CH2   -0.0506   12.011    4       H 0.108   H 0.108   C 0.152   C 0.152   ; ethyl
> 
>   C     C       .302    12.011    4       C 0.152   N 0.144   H 0.108   H 0.108   ; amide
>   N     NT     -.387    14.0067   3       C 0.139   C 0.153   H 0.0985            ; C-NH-C
>   C     C       .021    12.011    3       C 0.153   N 0.139   O 0.1207
>   C     C       .021    12.001    3       O 0.127   O 0.122   N 0.138
> 
>   N     NT     -.027    14.0067   3       C 0.139   HS14 0.0985   HS14 0.0985     ; C-NH2
>   C     C       .021    12.011    4       C 0.153   C 0.153   N 0.139   H 0.113
>   C     C      -.021    12.011    4       C 0.153   O 0.139   H 0.109   H 0.108   ; ether
> 
>   O     O      -.371    15.999    1       C 0.1207                                ; O=C acid
>   O     OE     -.287    15.999    2       C 0.1204   C 0.1204                     ; ether
>   H     HS14    .0528    1.008    1       N 0.0985                                ; N-H
> 
>   F     F      -.214    18.9984   1       C 0.1386
>   C     C      0.435    12.011    4       F 0.138

Re: [gmx-users] topology problem with x2top

2018-10-03 Thread paul buscemi
Maria,
The procedure you are using is very much like the one I use. While I use the 54a7 ff 
and you most likely are not, I know the procedure works with the oplsaa ff as well.

 In using the 54a7 ff, I can use ATB to build a small version of the polymer - say 
50 atoms or less. ATB has problems with polymers over about 300 atoms, 
otherwise you could use its itp directly. From the generated itp you can 
obtain the charges and atom types. I use Avogadro to generate the initial 
small model and can obtain bond lengths and another estimate of the charges.  
You have to be careful to include all of the bond types in the n2t, and the bond 
lengths have to be accurate - grompp will complain if these are not close to what 
it expects - within 10%.
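
As an illustration of that layout only - the type, charge, and reference bond
lengths below are placeholders, not vetted parameters - an n2t entry for a
CF2-type backbone carbon with four bonds would read:

  ; elem  type  charge  mass    #bonds  bonded element / ref. length pairs
  C       C     0.435   12.011  4       F 0.138  F 0.138  C 0.153  C 0.153

If a detected interatomic distance falls outside roughly 10% of the reference
length, x2top will not assign that bond and the type matching fails.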

 You probably know this as well, but in my experience the charges on the full 
model will hardly ever total exactly 0, but they should be less than 
0.0001e on the small model, so that the total charge on the full polymer system is 
less than ~0.1e with multiple polymers - e.g. a surface. I’ve found that 
PE, PVC, nylon, and Me2SiO polymers behave well in vacuum and in solvents - 
despite the fact that -maxwarn 1 was needed, and I have had to come to a peaceful 
settlement over the small charges I usually end up with ( 0.1e for ~5 
atoms) 

If x2top works with the small model, it will work with the full polymer, and it’s 
much easier to flush out the problems with the small model.

I’ll see if I can generate the PVDF tomorrow and post the itp and n2t - it will 
be for 54a7 but maybe it will still provide some insight.

Not sure of how to approach the graphene, but I think Avogadro may have such a 
model.  Unfortunately Avo does not provide top files.

Paul Buscemi, Ph.D.
Bioinformatics and Computational Chemistry
University of Minnesota.

> On 3,Oct 2018, at 3:32 AM, Maria Luisa  wrote:
> 
> Dear users,
> I need help. I want to build the topology of a polymeric chain of PVDF of
> 602 atoms, and of a sheet of Graphene Oxide of 202 atoms. I use the
> version of gromacs 5.1.4. The problem is that it does not recognize some
> atoms in the polymeric chain, and not at all the graphene oxide atoms. I
> already worked with the x2top command, I had already created another
> executable .n2t file, to complete the missing atoms in the atomname file,
> and in the past I built topology for graphene oxide and graphene systems,
> but now, all this command with my previous challenge doesn't t work. I'm
> doing a thousand tests but I do not know how to do it.
> 
> I hope to get help.
> Thank you
> 
> Maria Luisa Perrotta
> Ph.D Student, CNR-ITM
> via P.Bucci, 87036 Rende (Cs)
> Italy
> email: ml.perro...@itm.cnr.it
> 
> 


[gmx-users] Polymer topology and MD Gromacs

2018-10-01 Thread paul buscemi
Alex,
Thanks for your feedback.  

So far the critters seem to be compliant - they align when they should with H 
bonding, crumple when they should - and grompp’s major complaint is when I 
forget to adjust the number of residues. But I am unseasoned in MD, and 
following the discussions on gmx-users you come to realize that there is more 
to MD than just good looks.  Another set of eyes would do no harm.   I’ll do 
some more vetting and if the polymers react appropriately in polar and 
non-polar solvents, I will set it loose.
 Best 
Paul

> On Oct 1, 2018, at 1:25 PM, Alex  wrote:
> 
> I don't really work much with small organics or polymers (angle descriptions 
> should be quite important there), so my review would be at the level of what 
> the good old grumpy robot grompp already does. If you are sure of the 
> parameters and your tests are coming out okay, given some criteria, I say why 
> not share it..
> 
> Alex
> 
> 
> On 10/1/2018 11:46 AM, pbuscemi wrote:
>> Alex, Justin,
>> 
>> I've managed to make and run polymers using Avogadro ,modifying the n2t, 
>> then creating the top using  x2top under  54a7 ff.  The method may be useful 
>> for others but before presenting it to the user group,  it  should be 
>> reviewed so that  glaring mistakes/concepts are revised.   If you think it 
>> worthwhile, would either of you be agreeable to reviewing the process?
>> 
>> Thanks
>> Paul
>> 
>> -Original Message-
>> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>> [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Alex
>> Sent: Sunday, September 30, 2018 12:44 AM
>> To: gmx-us...@gromacs.org
>> Subject: Re: [gmx-users] force field not found
>> 
>> Yeah, if it is missing bonded parameters, you can always try to find
>> something similar, at least with OPLS-AA -- don't really know about the
>> other ff.
>> 
>> Alex
>> 
>> 
>> On 9/29/2018 8:58 PM, paul buscemi wrote:
>>> Alex,
>>> 
>>> I wanted to practice some more with x2top using a simple CH3-(CH2)14-CH3 
>>> pdb model. oplsaa works fine, but not the 54a7 FF, which generates the error 
>>> “cannot find forcefield for C”. The two CH3’s do not cause the error, 
>>> but the fourteen CH2’s do.
>>> 
>>> In the ffbonded.itp bond angle types I see CH2-S-CH3, C-CH2-C and 
>>> CH2-S-S, but not CH2-CH2-CH2. Can I add a new angle type, e.g. ga_55, by 
>>> analogy, or hunt for the correct parameters ? (I’m trying this now) 
>>>    I am assuming that neither the n2t nor the rtp has to be modified, since 
>>> x2top does not rely on the rtp. This is a fairly basic but essential task, 
>>> and I would surely like to master it.
>>> 
>>> Thanks,
>>> Paul
>>> 
>>> 
>>> 
>>> 
>>>> On Sep 27, 2018, at 5:47 PM, Alex  wrote:
>>>> 
>>>> Never dealt with TiO2, but the path to parameterizing forcefields for
>>>> solid-state structures in MD is becoming more and more straightforward,
>>>> e.g., J. Phys. Chem. C 2017. 121(16): p. 9022-9031.
>>>> 
>>>> Alex
>>>> 
>>>> On Thu, Sep 27, 2018 at 4:11 PM paul buscemi  wrote:
>>>> 
>>>>> Alex,
>>>>> 
>>>>> There are so many important  reactions / applications in which protein
>>>>> polymer interactions play a role that  the ability  to generate  of
>>>>> polymers should be part of gromacs repertoire. I’ll keep plugging away on
>>>>> this and report to the community if I can break the code  - other than
>>>>> using the very good but terribly expensive commercial programs.   I would
>>>>> not doubt that many have already accomplished this task, but it is 
>>>>> not
>>>>> well tracked within this group.
>>>>> 
>>>>> I might not approach a Molysulfidnitride substrate , ( making turbine
>>>>> blades ??)  but TiO2 is indeed another surface very popular with proteins.
>>>>> Most every nitinol surface is essentially TiO2.  If you have some pointers
>>>>> on that,  I’m listening.
>>>>> 
>>>>> Thank you again for the assist.
>>>>> 
>>>>> Regards
>>>>> Paul
>>>>> 
>>>>> 

Re: [gmx-users] force field not found

2018-09-29 Thread paul buscemi
Alex,

I wanted to practice some more with x2top using a simple CH3-(CH2)14-CH3 
pdb model. oplsaa works fine, but not the 54a7 FF, which generates the error “cannot 
find forcefield for C”. The two CH3’s do not cause the error, but the 
fourteen CH2’s do.

In the ffbonded.itp bond angle types I see CH2-S-CH3, C-CH2-C and CH2-S-S, 
but not CH2-CH2-CH2. Can I add a new angle type, e.g. ga_55, by analogy, or hunt for the 
correct parameters ? (I’m trying this now) 
   I am assuming that neither the n2t nor the rtp has to be modified, since x2top 
does not rely on the rtp. This is a fairly basic but essential task, and I would 
surely like to master it.

Thanks,
Paul




> On Sep 27, 2018, at 5:47 PM, Alex  wrote:
> 
> Never dealt with TiO2, but the path to parameterizing forcefields for
> solid-state structures in MD is becoming more and more straightforward,
> e.g., J. Phys. Chem. C 2017. 121(16): p. 9022-9031.
> 
> Alex
> 
> On Thu, Sep 27, 2018 at 4:11 PM paul buscemi  wrote:
> 
>> Alex,
>> 
>> There are so many important  reactions / applications in which protein
>> polymer interactions play a role that  the ability  to generate  of
>> polymers should be part of gromacs repertoire. I’ll keep plugging away on
>> this and report to the community if I can break the code  - other than
>> using the very good but terribly expensive commercial programs.   I would
>> not doubt that many have already accomplished this task, but it is not
>> well tracked within this group.
>> 
>> I might not approach a Molysulfidnitride substrate , ( making turbine
>> blades ??)  but TiO2 is indeed another surface very popular with proteins.
>> Most every nitinol surface is essentially TiO2.  If you have some pointers
>> on that,  I’m listening.
>> 
>> Thank you again for the assist.
>> 
>> Regards
>> Paul
>> 
>> 


Re: [gmx-users] atom types not found

2018-09-27 Thread paul buscemi
Alex,

There are so many important  reactions / applications in which protein polymer 
interactions play a role that  the ability  to generate  of polymers should be 
part of gromacs repertoire. I’ll keep plugging away on this and report to the 
community if I can break the code  - other than using the very good but 
terribly expensive commercial programs.   I would not doubt that many have 
already accomplished this task, but it is not well tracked within this 
group.

I might not approach a Molysulfidnitride substrate , ( making turbine blades 
??)  but TiO2 is indeed another surface very popular with proteins.  Most every 
nitinol surface is essentially TiO2.  If you have some pointers on that,  I’m 
listening.

Thank you again for the assist.

Regards
Paul

> On Sep 27, 2018, at 3:16 PM, Alex  wrote:
> 
> Hi Paul,
> 
> Glad x2top is working out for you. The rest of the things you're pointing 
> out, I hope others could comment. I haven't simulated any proteins in a long 
> time, but if you ever need to drop a protein on the surface of some sort of 
> an insane molybdenum disulfide-graphene-boron nitride heterostructure, I 
> could be of service. ;)
> 
> Alex
> 
> 
> On 9/27/2018 10:44 AM, pbuscemi wrote:
>> Alex,
>> This pertains the prior correspondence to building a polymer and is the 
>> process I've been developing.
>> 
>> To date I can  obtain an ITP and pdb from ATB for a monomer.  From there 
>> with information in those files, it is relatively easy to construct the n2t 
>> file to use in x2top.  (  I’d be happy to provide an example as a 'tutorial' 
>> of sorts).  X2top provides the monomer rtp for use in pdb2gmx. It has all 
>> the atom type information.  Thanks for the guidance on that.
>> 
>> The hangups are not associated with the rtp but, of all things, with producing the 
>> pdb of the polymer - specifically positioning along, say, the x axis, but more 
>> importantly, producing a pdb of the polymer that uses the same atom labels 
>> as the original pdb of the monomer. In the PE example from gromacs there 
>> are 3 mers of 2 atoms so it is easy to manually keep track of the names, 
>> but not if you have 1000 mers. Avogadro renames the added mers.
>> 
>> Since gromacs can build proteins, and I can tell gmx that the monomer is a 
>> protein  ( it wants to think that it is anyway),  I will try to use the same 
>> logic to build the  polymer.  More to come.
>> 
>> Paul
>> 
>> 
>> 
> 


Re: [gmx-users] atom types not found

2018-09-24 Thread paul buscemi
Thank you for the really rapid reply.  I’ll work on it some more and report the 
outcome
Paul

> On 24,Sep 2018, at 9:04 PM, Alex  wrote:
> 
> I use x2top a whole lot, so here's an example to be considered in the
> context of what Justin just wrote:
> 
> CJ   opls_xxx0.012.011  3CJ  0.142   CJ  0.142   CJ  0.142
> 
> The total number of bonds is 3, then just list them in pairs of
> element-bond entries. If I want a different type assigned to an atom that
> only has two nearest neighbors, it'd look like:
> 
> CJ   opls_yyy0.012.011  2CJ  0.142   CJ  0.142
> 
> and so on. A very useful utility for doing solid-state stuff with gmx. Hope
> this helps.
> 
> Alex
> 
> 
> 
> 
> On Mon, Sep 24, 2018 at 7:48 PM Justin Lemkul  wrote:
> 
>> 
>> 
>> On 9/24/18 9:42 PM, paul buscemi wrote:
>>> This is a version of a very old question
>>> 
>>> Using Avogadro, I’ve built an all-atom version of nylon12 ( 45 atoms ),
>> converted to a gro file with editconf. I want to generate the rtp so I can
>> construct a polymer. Using x2top, I’ve tried both the gromos 54a7 ff
>> and the oplsaa ff. There are two outcomes:
>>> 
>>> 1) If trying 54a7, I am warned that the atomnames2types.n2t is not
>> found ( and indeed it is not present in the ff subfolder ). I’ve done
>> what I think is an extensive search ( e.g. github, etc. ), but have not found
>> an n2t for 54a7. I tried to construct one following that found in oplsaa
>> but that has not worked out - yet. Does the 54a7 ff require an n2t file, and if
>> so what is the format ?
>> 
>> x2top requires an .n2t file for any force field.
>> 
>> Sadly, my wiki page on .n2t files was somehow lost, so I will try to
>> repeat it here, in column numbers:
>> 
>> 1. Element (e.g. first character of the atom name)
>> 2. Atom type to be assigned
>> 3. Charge to be assigned
>> 4. Mass
>> 5. Number of bonds the atom forms
>> 6-onward. The element and reference bond length for N bonds (where N is
>> specified in column 5); x2top will assign a bond if the detected
>> interatomic distance is within 10% of the reference bond length
>> specified here.
>> 
>>> 2) In trying oplsaa, I am warned only 44 of 45 atom types are found.
>> It turns out that it is the Nitrogen that is the culprit. If I convert the
>> nitrogen to carbon in the gro file, the top and rtp are completed. It’s
>> hard to believe that an amide nitrogen is not in the force field. Thinking
>> it may be my model, I downloaded arginine from “aminoacidsguide.com” to
>> avoid Avogadro. With Arginine only 19 of 26 atoms were found in the
>> oplsaa ff. What ? I can’t make an rtp for arginine without modifying the
>> ffbonded or n2t for oplsaa. Is x2top simply not the right tool ?
>> 
>> It's not that the atom type isn't found, it's that x2top can't assign an
>> atom type because a given atom does not satisfy all of the requirements
>> of the .n2t file listed above. That means a bond tolerance likely isn't
>> being satisfied.
>> 
>> -Justin
>> 
>>> Note if I submit the nylon pdb to ATB I get back a usable itp, and it is
>> possible to generate a small polymer this way ( 20 mers or so ). But I
>> should be able to construct a polymer similar to the example given for PE
>> some 9 years ago using beginning, middle and end mers. But I need the rtp.
>>> 
>>> Thanks for any responses
>>> Paul Buscemi, Ph.D.
>>> UMN
>>> 
>>> 
>>> 
>>> 
>> 
>> --
>> ==
>> 
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Virginia Tech Department of Biochemistry
>> 
>> 303 Engel Hall
>> 340 West Campus Dr.
>> Blacksburg, VA 24061
>> 
>> jalem...@vt.edu | (540) 231-3129
>> http://www.thelemkullab.com
>> 
>> ==
>> 


[gmx-users] atom types not found

2018-09-24 Thread paul buscemi
This is a version of a very old question

Using Avogadro, I’ve built an all-atom version of nylon12 ( 45 atoms ), 
converted to a gro file with editconf. I want to generate the rtp so I can 
construct a polymer. Using x2top, I’ve tried both the gromos 54a7 ff and 
the oplsaa ff. There are two outcomes:

1) If trying 54a7, I am warned that the atomnames2types.n2t is not found 
( and indeed it is not present in the ff subfolder ). I’ve done what I think is 
an extensive search ( e.g. github, etc. ), but have not found an n2t for 54a7. I 
tried to construct one following that found in oplsaa, but that has not worked 
out - yet. Does the 54a7 ff require an n2t file, and if so what is the format ?

2) In trying oplsaa, I am warned only 44 of 45 atom types are found. It 
turns out that it is the Nitrogen that is the culprit. If I convert the 
nitrogen to carbon in the gro file, the top and rtp are completed. It’s hard 
to believe that an amide nitrogen is not in the force field. Thinking it may 
be my model, I downloaded arginine from “aminoacidsguide.com” to avoid 
Avogadro. With Arginine only 19 of 26 atoms were found in the oplsaa ff. What ?  
I can’t make an rtp for arginine without modifying the ffbonded or n2t for 
oplsaa. Is x2top simply not the right tool ?

Note if I submit the nylon pdb to ATB I get back a usable itp, and it is 
possible to generate a small polymer this way ( 20 mers or so ). But I should 
be able to construct a polymer similar to the example given for PE some 9 
years ago using beginning, middle and end mers. But I need the rtp.

Thanks for any responses
Paul Buscemi, Ph.D.
UMN





Re: [gmx-users] Center to center distance in cylindrical micelles

2018-08-11 Thread paul buscemi
Shan,

Load the .gro file into VMD and select the atoms/residues of interest; see:  
https://www.youtube.com/watch?v=QV0_CJHBF6U  

For a plot, you could try converting the trr file to a text-readable format with 
trjconv. From there select the atom numbers from each micelle that represent 
the center. Then write a small script in R to plot the difference between 
centers.
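
Alternatively, gmx distance can do the center-of-mass bookkeeping in one step
(a sketch; the index group names MIC1 and MIC2 are hypothetical):

  gmx distance -s topol.tpr -f traj.trr -n index.ndx \
      -select 'com of group "MIC1" plus com of group "MIC2"' -oav dist.xvg

As Mark points out below, though, this gives just one possible definition (the
com-to-com distance), so first decide which distance you actually want.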

Paul

> On Aug 11, 2018, at 5:26 AM, Mark Abraham  wrote:
> 
> Hi,
> 
> There is an axis of each cylinder but the two won't be parallel so there is
> no unique distance to measure. So I suggest thinking carefully about what
> you really want.
> 
> Mark
> 
> On Sat, Aug 11, 2018, 06:58 Shan Jayasinghe 
> wrote:
> 
>> Hi,
>> 
>> I tried to used gmx distance. However, I don't understand how can I define
>> the center of the circle face of the cylinder to the same in the other
>> cylindrical micelle.
>> 
>> Can anyone help me?
>> Thank you.
>> 
>> 
>> On Sat, Aug 11, 2018 at 9:33 AM Mark Abraham 
>> wrote:
>> 
>>> Hi,
>>> 
>>> Have you looked at the different tools available and considered what
>> might
>>> be useful for you?
>>> 
>>> Mark
>>> 
>>> On Fri, Aug 10, 2018, 07:34 Shan Jayasinghe <
>> shanjayasinghe2...@gmail.com>
>>> wrote:
>>> 
 Dear Gromacs Users,
 
 I want to calculate the center to center distance of two cylindrical
 micelles in my simulation. What gmx command should I use to calculate
>> the
 distance? Can anyone help me?
 
 Thank you.
>> 
>> 
>> --
>> Best Regards
>> Shan Jayasinghe
>> --
>> Gromacs Users mailing list
>> 
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>> 



Re: [gmx-users] Using multiple GPU's

2018-08-10 Thread paul buscemi
Mark, Kevin,

I’ve uploaded two representative log files to Google Drive. gmx-users should 
get the invite.

thanks
Paul

> On Aug 10, 2018, at 6:27 PM, Mark Abraham  wrote:
> 
> Hi,
> 
> Kevin means you to upload the file to a file sharing service and share the
> link. Everything you've shown points to using both GPUs, but the relevant
> output about the decision is not shown.
> 
> Mark
> 
> On Sat, Aug 11, 2018, 00:58 paul buscemi  wrote:
> 
>> Thanks Kevin. for the rapid response.
>> 
>> I am using 2018. I’ll check the 2018 docs to make sure I was looking at the
>> proper ones. I should also note that the GPUs are not matched: a gtx
>> 1080ti and a gtx 1060
>> 
>> below is some pertinent information from the log file.  ( not too sure how
>> to post a link on the user group )
>> 
>> Paul
>> 
>>> On Aug 10, 2018, at 12:31 PM, Kevin Boyd  wrote:
>>> 
>>> Hi,
>>> 
>>> Can you post a link to your log file?
>>> 
>>> Also, what version of gromacs are you using? Make sure that the
>>> documentation you are following corresponds to the right version of
>>> gromacs. If you are using v 5.1 (as the link suggests), strongly consider
>>> upgrading to 2018.
>>> 
>>> Kevin
>>> 
>>> On Fri, Aug 10, 2018 at 12:57 PM, paul buscemi  wrote:
>>> 
>>>> 
>>>> Dear Uses,
>>>> 
>>>> I’ve been following the examples on
>>>> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
>>>> for setting up multiple GPUs but have not succeeded.
>>>> Using a single GPU, gmx runs well. Nvidia and gmx see the 2 GPUs, and
>>>> I‘ve tried mdrun with variations of -gpu_id 01 and -ntmpi, but
>>>> the GPUs are not put to use.
>>>> 
>>>> Would someone suggest the proper flags to mdrun for two GPUs ?
>>>> 
>>>> Running on linux-mint sarah, i7
>>>> 
>>>> thanks
>>>> Paul
>>>> --
>>>> 
>> 
>> GROMACS:  gmx mdrun, version 2018
>> Executable:   /usr/local/gromacs/bin//gmx
>> Data prefix:  /usr/local/gromacs
>> Working dir:  /home/rgb/Desktop/PDMS
>> Command line:
>>  gmx mdrun -deffnm pdms.sys.nvt -ntmpi 4 -ntomp 2 -gpu_id 01
>> 
>> GROMACS version:2018
>> Precision:  single
>> Memory model:   64 bit
>> MPI library:thread_mpi
>> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
>> GPU support:CUDA
>> SIMD instructions:  AVX2_256
>> FFT library:fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512
>> RDTSCP usage:   enabled
>> TNG support:enabled
>> Hwloc support:  disabled
>> Tracing support:disabled
>> Built on:   2018-07-06 18:24:42
>> Built by:   rgb@RGB [CMAKE]
>> Build OS/arch:  Linux 4.4.0-21-generic x86_64
>> Build CPU vendor:   Intel
>> Build CPU brand:Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
>> Build CPU family:   6   Model: 158   Stepping: 9
>> Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma hle htt
>> intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd
>> rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
>> C compiler: /usr/bin/gcc GNU 5.4.0
>> C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
>> -fexcess-precision=fast
>> C++ compiler:   /usr/bin/c++ GNU 5.4.0
>> C++ compiler flags:  -march=core-avx2-std=c++11   -O3 -DNDEBUG
>> -funroll-all-loops -fexcess-precision=fast
>> CUDA compiler:  /usr/lib/nvidia-cuda-toolkit/bin/nvcc nvcc: NVIDIA (R)
>> Cuda compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on
>> Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17
>> CUDA compiler
>> flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;-D_FORCE_INLINES;;
>> ;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-f

Re: [gmx-users] Using multiple GPU's

2018-08-10 Thread paul buscemi
Thanks Kevin. for the rapid response.

I am using 2018. I’ll check the 2018 docs to make sure I was looking at the proper 
ones. I should also note that the GPUs are not matched: a gtx 1080ti and a 
gtx 1060

below is some pertinent information from the log file.  ( not too sure how to 
post a link on the user group ) 

Paul

> On Aug 10, 2018, at 12:31 PM, Kevin Boyd  wrote:
> 
> Hi,
> 
> Can you post a link to your log file?
> 
> Also, what version of gromacs are you using? Make sure that the
> documentation you are following corresponds to the right version of
> gromacs. If you are using v 5.1 (as the link suggests), strongly consider
> upgrading to 2018.
> 
> Kevin
> 
> On Fri, Aug 10, 2018 at 12:57 PM, paul buscemi  wrote:
> 
>> 
>> Dear Uses,
>> 
>> I’ve been following the examples on
>> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
>> for setting up multiple GPUs but have not succeeded.
>> Using a single GPU, gmx runs well. Nvidia and gmx see the 2 GPUs, and
>> I‘ve tried mdrun with variations of -gpu_id 01 and -ntmpi, but
>> the GPUs are not put to use.
>> 
>> Would someone suggest the proper flags to mdrun for two GPUs ?
>> 
>>  Running on linux-mint sarah, i7
>> 
>> thanks
>> Paul
>> --
>> 

GROMACS:  gmx mdrun, version 2018
Executable:   /usr/local/gromacs/bin//gmx
Data prefix:  /usr/local/gromacs
Working dir:  /home/rgb/Desktop/PDMS
Command line:
  gmx mdrun -deffnm pdms.sys.nvt -ntmpi 4 -ntomp 2 -gpu_id 01

GROMACS version:2018
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  AVX2_256
FFT library:fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  disabled
Tracing support:disabled
Built on:   2018-07-06 18:24:42
Built by:   rgb@RGB [CMAKE]
Build OS/arch:  Linux 4.4.0-21-generic x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
Build CPU family:   6   Model: 158   Stepping: 9
Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma hle htt 
intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd 
rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler: /usr/bin/gcc GNU 5.4.0
C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast  
C++ compiler:   /usr/bin/c++ GNU 5.4.0
C++ compiler flags:  -march=core-avx2-std=c++11   -O3 -DNDEBUG 
-funroll-all-loops -fexcess-precision=fast  
CUDA compiler:  /usr/lib/nvidia-cuda-toolkit/bin/nvcc nvcc: NVIDIA (R) Cuda 
compiler driver;Copyright (c) 2005-2015 NVIDIA Corporation;Built on 
Tue_Aug_11_14:27:32_CDT_2015;Cuda compilation tools, release 7.5, V7.5.17
CUDA compiler 
flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_52,code=compute_52;-use_fast_math;-D_FORCE_INLINES;;
 
;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:9.0
CUDA runtime:   7.50

= latest attempt === 
Running on 1 node with total 4 cores, 8 logical cores, 2 compatible GPUs
Hardware detected:
  CPU info:
Vendor: Intel
Brand:  Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
Family: 6   Model: 158   Stepping: 9
Features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma hle htt intel lahf 
mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 
sse3 sse4.1 sse4.2 ssse3 tdt x2apic
  Hardware topology: Basic
Sockets, cores, and logical processors:
  Socket  0: [   0   4] [   1   5] [   2   6] [   3   7]
  GPU info:
Number of GPUs detected: 2
#0: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC:  no, stat: 
compatible
#1: NVIDIA GeForce GTX 1060 6GB, compute cap.: 6.1, ECC:  no, stat: 
compatible


[gmx-users] Using multiple GPU's

2018-08-10 Thread paul buscemi

Dear Uses,

I’ve been following the examples on 
http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html 
for setting up multiple GPUs but have not succeeded.  
Using a single GPU, gmx runs well. Nvidia and gmx see the 2 GPUs, and I‘ve 
tried mdrun with variations of -gpu_id 01 and -ntmpi, but the GPUs 
are not put to use.

Would someone suggest the proper flags to mdrun for two GPUs ?

  Running on linux-mint sarah, i7 

thanks
Paul

Re: [gmx-users] Structure of polyvinyltoluene

2018-08-09 Thread paul buscemi
Genevieve,

Create a 3-, 4- or 5-mer of the PVT and submit it to ATB ( the wonderful topology 
makers ) to obtain the itp ( 54a7 ff ). That will give you the beginning, middle 
and end mers for building a polymer, as Justin has shown for PE.

A second route is to build a PVT under 500 atoms and submit that to ATB. It may 
fail, but you will still obtain an itp without charges. Use the charges from 
the small mer to fill in.

Paul

> On Aug 9, 2018, at 8:33 AM, Harrisson, Genevieve  
> wrote:
> 
> UNRESTRICTED / ILLIMITÉE
> Hello!
> 
> As part of my research, I need the structure of polyvinyltoluene to perform 
> molecular dynamics simulations.
> 
> I searched for it in Crystallography Open Database (COD) and in Cambridge 
> Crystallographic Data Centre (CCDC), but found nothing.
> 
> I'm wondering if someone could help me?
> 
> 
> Thank you,
> Best regards,
> ---
> Genevieve Harrisson, Ph.D.
> Applied physicist


[gmx-users] topogen usage

2018-08-02 Thread paul buscemi
Dear Users,

Although topogen is a bit dated, it seems to work well for ‘smaller’ polymers 
( ~500 atoms ) and builds a single-file itp. However, in making a file for nylon 12 
( ~1000 atoms ), a ‘central’ itp was created and separate files were made for 
section_dihedrals, section_bonds etc. Are these to be included as you would 
an itp, or does the itp find them automatically ?
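
If the fragments follow the usual GROMACS convention, nothing is found
automatically: itp files are pulled into the top (or into another itp)
explicitly with #include lines. A sketch with hypothetical file names -
whether topogen writes its section files to be included this way is an
assumption worth checking:

  #include "nylon12_central.itp"
  #include "nylon12_section_bonds.itp"
  #include "nylon12_section_dihedrals.itp"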

Regards
Paul

Re: [gmx-users] Slab gets bended in NVT

2018-07-28 Thread paul buscemi
Alex, 

NPT appears to be doing exactly what it should, especially if your slab is a membrane.

1) are you using pcoupltype = surface-tension with compressibility = 4.5e-5 0 ?

2) have you tried xy restraints on the ends of the slab ? ( a sketch of both follows below )
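
For reference, a minimal sketch of both suggestions (all values illustrative,
not tuned for any particular slab):

  ; mdp fragment - surface-tension coupling, box held fixed in z
  pcoupltype       = surface-tension
  ref_p            = 0.0  1.0      ; surface tension (bar nm), z pressure (bar)
  compressibility  = 4.5e-5  0     ; xy, z
  tau_p            = 5.0

  ; xy-only position restraints for the slab edge atoms (kJ mol-1 nm-2)
  [ position_restraints ]
  ;  ai  funct   fcx    fcy    fcz
      1      1  1000   1000      0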

Paul


> On Jul 28, 2018, at 5:10 AM, Alex  wrote:
> 
> Dear all,
> In modelling a slab with some molecules on top of it, the slab gets bent
> (Fig. 2) during the NVT production or equilibration simulations, even though I do
> NpT equilibration (Berendsen followed by Parrinello-Rahman) long enough
> (Fig. 1) that the box parameters, pressure and energies are stable
> prior to starting the NVT one.
> 
> Fig1: After around 2 ns NpT equilibration
> https://drive.google.com/open?id=1lhDruLuSx0Qf5DT4cIriTX3uoi5w_AVS
> 
> Fig2: During NVT
> https://drive.google.com/open?id=1CyDBrM1Ks52ViG8fh0_pnoClqymJVjGg
> 
> The slab was equilibrated separately before I brought it into the new system,
> and I even restrain the slab's atoms to their equilibrated initial
> positions using "7000 7000 5000" spring constants in the X, Y and Z directions,
> respectively.
> The point is that the problem does not happen in a smaller (laterally
> around 7nm*7 nm) slab, but it happens when I use a larger slab (laterally
> around 20nm*20nm).
> Above I have shared two pictures of the system, would you please share your
> thoughts about the issue?
> 
> Thank you.
> Regards,
> Alex



[gmx-users] Fwd: "Solved the issue on annealing - so to speak "

2018-07-27 Thread paul buscemi
> 
> To work around the single-group annealing I had to list three "single" groups.  
> Here the first group is used as the ramp and the second two are held constant.
> 
> 
> It turns out that the number of entries the online definition refers to, and that 
> the mdp wants, is not just the number of groups being annealed but the total 
> number of temperature-coupling groups (here, one per molecular group). See the 
> grompp output below.
> 
> Isolating ramps for individual molecule groups is a great advantage, but the 
> description on the mdp options page is indeed confusing.
> 
> ==  work around with three single temperature groups ===
> 
> ; SIMULATED ANNEALING  
> ; Type of annealing for each temperature group (no/single/periodic)
> annealing   = single single single
> 
> ; Number of time points to use for specifying annealing in each group
> annealing-npoints  = 2  2  2
> 
> ; List of times at the annealing points for each group
> annealing_time   = 0 500 0 10 0 10
> 
> ; Temp. at each annealing point, for each group.
> annealing_temp   = 100 320 320 320 320 320
> 
>   grompp  output ==
> 
> Simulated annealing for group Z8G5: Single, 2 timepoints
> Time (ps)   Temperature (K)
>       0.0         100.0
>     500.0         320.0
> Simulated annealing for group NIGR: Single, 2 timepoints
> Time (ps)   Temperature (K)
>       0.0         320.0
>      10.0         320.0
> Simulated annealing for group ISOP: Single, 2 timepoints
> Time (ps)   Temperature (K)
>       0.0         320.0
>      10.0         320.0
> Number of degrees of freedom in T-Coupling group Z8G5 is 94389.77
> Number of degrees of freedom in T-Coupling group NIGR is 16199.62
> Number of degrees of freedom in T-Coupling group ISOP is 15999.62
> 
> 



[gmx-users] possible bug in annealing

2018-07-27 Thread paul buscemi
Dear Users,

I’ve been trying to do a simple annealing run with one group and two points, and I 
receive the error below; apparently gmx thinks there are three groups. 
If I comment out the annealing settings, the run proceeds normally. Total time of 
the run is 2 ns.

Note: I’ve seen both annealing_time and annealing-time (underscore vs. dash) in gmx 
examples, but the two spellings give the same error.

Suggestions appreciated!
> 
> 
> ; SIMULATED ANNEALING  
> ; Type of annealing for each temperature group (no/single/periodic)
> annealing   = single 
> 
> ; Number of time points to use for specifying annealing in each group
> annealing_npoints  = 2.0
> 
> ; List of times at the annealing points for each group  ( ps) 
> annealing_time   = 0 1000 
> 
> ; Temp. at each annealing point, for each group.
> annealing_temp   = 100 320 
> 
> 
> 
> with error
> 
> ---
> Program: gmx grompp, version 2018
> Source file: src/gromacs/gmxpreprocess/readir.cpp (line 3435)
> 
> Fatal error:
> Not enough annealing values: 1 (for 3 groups)
> 
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors 
> 
> 
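
For the record: per the mdp documentation, grompp expects one annealing entry per 
temperature-coupling group, and groups that are not annealed can be set to no with 
zero points. A hedged sketch of that route, borrowing the three group names from 
the work-around message above:

; anneal only the first tc-grps group; hold the others at their thermostat setting
tc-grps            = Z8G5 NIGR ISOP
annealing          = single no no
annealing-npoints  = 2 0 0          ; 0 points for the non-annealed groups
annealing-time     = 0 1000
annealing-temp     = 100 320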


[gmx-users] Compressing bundles of fibers into a surface

2018-07-19 Thread paul buscemi
Dear gmx users

I’m in the process of creating a PE polymer surface.

pdb and itp files were obtained from ATB and were used to produce a 142 x 6 x 60 A 
(xyz) starting box containing 300 molecules, each 150 A long.

They were neatly aligned to start, and, using pcoupltype = surface-tension with a 
restraint in the x direction (fc = 10), they stay aligned through MD but are now in 
bundles, which is what they should do.

But now I want to compress the bundles (move them in the y direction) into a 
142 x 6 x, say, 20 A box, i.e. a more or less complete surface.

I’ve tried re-running NPT on the first MD result (10 ns of MD), with no change. 
Making a smaller box (smaller y) with editconf and running NPT or MD leads to a 
very high potential energy.

Other than waiting for the cows to come home with plain MD, could a suggestion be 
made to coerce the bundles to come together?
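
One knob not raised in the thread is the mdp deform option, which shrinks the box 
continuously during the run instead of jumping to a smaller box in one step. A 
hedged fragment; the rate and axis are made-up examples, and as I read the 
documentation the deformed box element is then no longer pressure-coupled:

; box deformation velocities (nm/ps), in the order a(x) b(y) c(z) b(x) c(x) c(y)
deform = 0 -0.0005 0 0 0 0    ; slowly compress the box along y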

regards
Paul

Re: [gmx-users] resize pre-equilibrated system

2018-07-13 Thread paul buscemi
I believe this point is covered in either the Lysozyme or KALP tutorials. In any 
case, you might try a smaller test system with any small protein, such as KALP from 
the tutorials: solvate it in a large (isotropic) box, minimize/equilibrate it, then 
re-solvate that system in a smaller box. Count the waters before and after and 
check the equilibration time. It will take you no time at all to answer your own 
question.

Let us know if it works.

> On 13,Jul 2018, at 11:07 AM, Roman Sloutsky  wrote:
> 
> I have been simulating a system of protein in explicit water under PBC. When 
> I first prepared the system for simulation, the protein had an extended 
> flexible linker (it was modeled in). I surrounded it with a water box with 
> some padding past the edge of the extended linker.
> 
> During the simulation I’ve already performed, the linker collapsed into a 
> globular conformation, and now I have an excess amount of water surrounding 
> the protein. I expect this collapsed configuration to persist for a long 
> time, so I would like to avoid simulating all that bulk water unnecessarily. 
> On the other hand, the system is already equilibrated, and I would rather not 
> re-solvate the new configuration of the protein from scratch.
> 
> Is it possible to use gmx tools to re-size the system and remove the excess 
> water molecules, maintaining the position of the protein at the center of the 
> unit box? Maintaining the velocities of the remaining molecules (to avoid any 
> additional equilibration) would be a nice bonus, but equilibrating from the 
> last frame of a well-behaved simulation would still be better than 
> equilibrating a new system.
> 
> I do have counter ions, some of which might end up getting excluded and would 
> need to be placed back into the new box. Therefore, some minimal 
> equilibration might still be required.
> 
> I would appreciate any help!
> 
> Roman
> 
> □===□
> || Roman Sloutsky, PhD   ●   Postdoctoral Research Associate ||
> || Stratton Lab ● Biochemistry and Molecular Biology ||
> || University of Massachusetts Amherst ●  sloutsky⏣umass.edu ||
> □===□
> 


[gmx-users] use of surface tension

2018-07-13 Thread paul buscemi


Dear users,

I have set up a lipid membrane with its normal in the z direction. I would like to 
apply a surface tension only in the x (or y) direction, with the normal pressure at 
1 atm; that is, stretch the membrane in one direction.

I’ve modified an NPT equilibration mdp file and have worked out how to apply an xy 
surface tension (which seems similar to semiisotropic) using:

pcoupltype = surface-tension

but this is not the objective.

Is anisotropic the proper setup, as follows?

pcoupltype      = anisotropic    ; from the guide: 6 values are needed for the
                                 ; xx, yy, zz, xy/yx, xz/zx and yz/zy components
ref-p           = -1 1 1 1 1 1   ; apply -1 atm (tension) in the x direction
compressibility = 1e-4 5e-5 5e-5 5e-5 5e-5 5e-5

This appears to work, but is the format correct?
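
One caveat worth adding, from the mdp documentation rather than this thread: with 
anisotropic coupling, non-zero off-diagonal compressibilities let the box skew. If 
the box should stay rectangular, a hedged variant is:

pcoupltype      = anisotropic
ref-p           = -1 1 1 0 0 0
compressibility = 1e-4 5e-5 5e-5 0 0 0   ; zero off-diagonals keep the box rectangular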
Regards
Paul





Re: [gmx-users] Updated tutorials and new website

2018-06-29 Thread paul buscemi
Justin,

Thank you for all the hard work! What can we do in return?

Paul

> On Jun 29, 2018, at 5:17 PM, Justin Lemkul  wrote:
> 
> 
> Hi All,
> 
> I have updated all of my tutorials for use with GROMACS 2018. They are now 
> hosted on a new site:
> 
> http://www.mdtutorials.com/gmx/
> 
> The tutorials currently hosted on bevanlab.biochem.vt.edu will be permanently 
> taken offline by the end of the summer. I realize that many other sites link 
> to these tutorials, so please update links and bookmarks if possible. In the 
> meantime, the old tutorials will redirect to the new ones.
> 
> I hope the new tutorials will be helpful - there are many new and improved 
> sections, and the protein-ligand tutorial has essentially been completely 
> rewritten with a newer approach and different force field. I apologize if 
> there are any difficulties due to links that will now break, but the 
> situation is unavoidable due to the permanent decommissioning of the Bevan 
> lab server.
> 
> Please let me know if there are any difficulties with the new tutorials or 
> website.
> 
> -Justin
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
> 
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
> 
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
> 
> ==
> 



Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-13 Thread paul buscemi
FLOPS trumps clock speed.

> On Jun 13, 2018, at 3:45 PM, Alex  wrote:
> 
> Hi all,
> 
> I have an old "prototyping" box with a 4-core Xeon and an old GTX 960. We
> have a Tesla K40 laying around and there's only one PCIE slot available in
> this machine. Would it make sense to swap the cards, or is it already
> bottlenecked by the CPU? I compared the specs and 960 has a higher clock
> speed, while K40's FP performance is better. Should I swap the GPUs?
> 
> Thanks,
> 
> Alex


Re: [gmx-users] Gromacs 2018 and GPU

2018-05-06 Thread paul buscemi
Mark

Yes, it was your suggestions that finally set me on the right $PATH.  The 
examples and analyses work as intended.  
Thanks
Paul

> On May 6, 2018, at 2:24 PM, Mark Abraham  wrote:
> 
> Hi,
> 
> I already referred you to the install guide for ideas on how to access the
> version of GROMACS that you want. Did you look there?
> 
> Mark
> 
> On Sun, May 6, 2018, 02:52 paul buscemi  wrote:
> 
>> Mark, Justin
>> 
>> I was able to access the GPU  using simply :
>> 
>> cmake .. -DGMX_BUILD_OWN_FFTW=ON \
>> -DREGRESSIONTEST_DOWNLOAD=ON \
>> -DGMX_MPI=on \
>> -DGMX_GPU=on
>> 
>> the result for the lysozyme MD run ( with the appropriate quote ) was:
>> 
>> 
>>               Core t (s)   Wall t (s)        (%)
>>        Time:      281.756       35.220      800.0
>>                  (ns/day)    (hour/ns)
>> Performance:      245.323        0.098
>> 
>> GROMACS reminds you: "You still have to climb to the shoulders of the
>> giants" (Vedran Miletic)
>> 
>> 
>> You were correct: the problem was that the tutorial was picking up an earlier
>> install of GROMACS 5.1, and I had to run the 2018 mdrun by giving the full path
>> to the command
>> 
>> /home/rgb/Desktop/gromacs-gpu-2018/build/bin/gmx mdrun -deffnm md_0_1
>> 
>> which is not ideal. This is more of a Linux question, but can you suggest
>> a way to clean up older installations, or is it sufficient to
>> ensure the PATH points to the correct version?
>> 
>> thanks for your help
>> Paul
>> 
>> 
>>> On May 5, 2018, at 12:00 PM, Mark Abraham 
>> wrote:
>>> 
>>> Hi,
>>> 
>>> It's also GROMACS 5.1.2 not the 2018 you reported trying to install. You
>>> need to make sure your terminal has been given access to the GROMACS that
>>> you want to use (see that part of the install guide).
>>> 
>>> Also, your CMake line tried to use OpenCL which is not what you want for
>>> running on an Nvidia GPU (even though you can get it to work).
>>> 
>>> Mark
>>> 
>>> On Sat, May 5, 2018, 00:55 Justin Lemkul  wrote:
>>> 
>>>> 
>>>> 
>>>> On 5/4/18 6:53 PM, paul buscemi wrote:
>>>>> Justin,
>>>>> 
>>>>> Here is the install script and a snippet from the log file.
>>>>> 
>>>>> Gromacs runs normally with this ( fresh ) install but without GPU use
>>>>> 
>>>>> Paul
>>>>> 
>>>>> cmake .. -DGMX_BUILD_OWN_FFTW=ON \
>>>>> -DGMX_GPU=on   \
>>>>> -DCUDA_TOOLKIT_ROOT_DIR=/usr/lib/nvidia-cuda-toolkit \
>>>>>-DGMX_USE_OPENCL=on
>>>>> 
>>>>> Command line:
>>>>>  gmx mdrun -deffnm md_0_1
>>>>> 
>>>>> GROMACS version:    VERSION 5.1.2
>>>>> Precision:          single
>>>>> Memory model:       64 bit
>>>>> MPI library:        thread_mpi
>>>>> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
>>>>> GPU support:        disabled
>>>> 
>>>> Well, here's what you need to know. Something failed in trying to enable
>>>> GPU acceleration. Take a look at the cmake output.
>>>> 
>>>> -Justin
>>>> 
>>>> --
>>>> ==
>>>> 
>>>> Justin A. Lemkul, Ph.D.
>>>> Assistant Professor
>>>> Virginia Tech Department of Biochemistry
>>>> 
>>>> 303 Engel Hall
>>>> 340 West Campus Dr.
>>>> Blacksburg, VA 24061
>>>> 
>>>> jalem...@vt.edu | (540) 231-3129
>>>> http://www.thelemkullab.com
>>>> 
>>>> ==
>>>> 

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-05 Thread paul buscemi
Mark, Justin

I was able to access the GPU  using simply : 

cmake .. -DGMX_BUILD_OWN_FFTW=ON \
 -DREGRESSIONTEST_DOWNLOAD=ON \
 -DGMX_MPI=on \
 -DGMX_GPU=on  

the result for the lysozyme MD run ( with the appropriate quote ) was: 


             Core t (s)   Wall t (s)        (%)
       Time:      281.756       35.220      800.0
                 (ns/day)    (hour/ns)
Performance:      245.323        0.098

GROMACS reminds you: "You still have to climb to the shoulders of the giants" 
(Vedran Miletic)


You were correct: the problem was that the tutorial was picking up an earlier 
install of GROMACS 5.1, and I had to run the 2018 mdrun by giving the full path to 
the command

/home/rgb/Desktop/gromacs-gpu-2018/build/bin/gmx mdrun -deffnm md_0_1

which is not ideal. This is more of a Linux question, but can you suggest a way to 
clean up older installations, or is it sufficient to ensure the PATH points to the 
correct version?
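
A hedged sketch of the usual arrangement: install each version under its own prefix 
and source the GMXRC of the build you want, so old installs never need to be 
deleted. Paths here are examples and assume make install was run with a matching 
-DCMAKE_INSTALL_PREFIX:

source /usr/local/gromacs-2018/bin/GMXRC   # puts this build first on PATH
which gmx                                  # should now point at the 2018 install
gmx --version | head -n 3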

thanks for your help
Paul


> On May 5, 2018, at 12:00 PM, Mark Abraham  wrote:
> 
> Hi,
> 
> It's also GROMACS 5.1.2 not the 2018 you reported trying to install. You
> need to make sure your terminal has been given access to the GROMACS that
> you want to use (see that part of the install guide).
> 
> Also, your CMake line tried to use OpenCL which is not what you want for
> running on an Nvidia GPU (even though you can get it to work).
> 
> Mark
> 
> On Sat, May 5, 2018, 00:55 Justin Lemkul  wrote:
> 
>> 
>> 
>> On 5/4/18 6:53 PM, paul buscemi wrote:
>>> Justin,
>>> 
>>> Here is the install script and a snippet from the log file.
>>> 
>>> Gromacs runs normally with this ( fresh ) install but without GPU use
>>> 
>>> Paul
>>> 
>>> cmake .. -DGMX_BUILD_OWN_FFTW=ON \
>>>  -DGMX_GPU=on   \
>>>  -DCUDA_TOOLKIT_ROOT_DIR=/usr/lib/nvidia-cuda-toolkit \
>>> -DGMX_USE_OPENCL=on
>>> 
>>> Command line:
>>>   gmx mdrun -deffnm md_0_1
>>> 
>>> GROMACS version:    VERSION 5.1.2
>>> Precision:          single
>>> Memory model:       64 bit
>>> MPI library:        thread_mpi
>>> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
>>> GPU support:        disabled
>> 
>> Well, here's what you need to know. Something failed in trying to enable
>> GPU acceleration. Take a look at the cmake output.
>> 
>> -Justin
>> 
>> --
>> ==
>> 
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Virginia Tech Department of Biochemistry
>> 
>> 303 Engel Hall
>> 340 West Campus Dr.
>> Blacksburg, VA 24061
>> 
>> jalem...@vt.edu | (540) 231-3129
>> http://www.thelemkullab.com
>> 
>> ==
>> 



Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
Justin,

Here is the install script and a snippet from the log file.

GROMACS runs normally with this (fresh) install, but without GPU use.

Paul

cmake .. -DGMX_BUILD_OWN_FFTW=ON \
 -DGMX_GPU=on   \
 -DCUDA_TOOLKIT_ROOT_DIR=/usr/lib/nvidia-cuda-toolkit \
-DGMX_USE_OPENCL=on  

Command line:
  gmx mdrun -deffnm md_0_1

GROMACS version:    VERSION 5.1.2
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
GPU support:        disabled
OpenCL support:     disabled
invsqrt routine:    gmx_software_invsqrt(x)
SIMD instructions:  SSE2
FFT library:        fftw-3.3.4-sse2-avx
RDTSCP usage:       enabled
C++11 compilation:  enabled
TNG support:        enabled
Tracing support:    disabled ...



> On May 4, 2018, at 1:12 PM, Justin Lemkul  wrote:
> 
> 
> 
> On 5/4/18 2:11 PM, paul buscemi wrote:
>> Thank you Justin.
>> 
>> Not at the linux system at the moment, but is there anything in particular I 
>> should look for in the log file ?
> 
> Just look for "GPU" and you'll find it.
> 
> -Justin
> 
>> Paul
>> 
>>> On 4,May 2018, at 12:55 PM, Justin Lemkul  wrote:
>>> 
>>> 
>>> 
>>> On 5/4/18 1:48 PM, paul buscemi wrote:
>>>> Thank you for the prompt response. I will check out the link. Regarding 2): 
>>>> one can immediately determine whether the GPU is running, since GROMACS 
>>>> reports at startup (in my case) that the CPUs are being used, the Linux 
>>>> system monitor shows all CPUs busy (not to mention the fan speed), and 
>>>> nvidia-smi registers no GPU activity.
>>>> 
>>>> What I was asking for was the proper format for invoking the GPU. I used
>>>> gmx mdrun -deffnm md_0_1 -nb gpu   ## this is from the lysozyme tutorial MD page
>>>> but this starts GROMACS on the CPUs, so is this the proper format for a 
>>>> single GPU?
>>> With version 2018, you don't even need to use "-nb gpu," mdrun will 
>>> automatically run on the GPU if it is detected properly. As Mark said, 
>>> check your .log file for a full breakdown of how mdrun configured the 
>>> simulation using the available hardware.
>>> 
>>> -Justin
>>> 
>>>> thanks
>>>> Paul
>>>> 
>>>>> On 4,May 2018, at 11:57 AM, Mark Abraham  wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> On Fri, May 4, 2018 at 6:43 PM paul buscemi >>>> <mailto:pbusc...@q.com>> wrote:
>>>>> 
>>>>>> I’ve been struggling for a for several days to get Gromacs-2018 to use my
>>>>>> GPU.  I followed the INSTALL instructions ( several times ! ) that are
>>>>>> provided in the 2018 tarball
>>>>>> 
>>>>>> I know that the GPU ( GTX1080)  is installed properly in that it works
>>>>>> with Schrodinger and the Nvidia self tests.  Gromacs runs the MD from the
>>>>>> Virginia Tech (Bevan Lab) lysozyme example normally
>>>>>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/
>>>>>> but only on the 8 threads of the  CPU
>>>>>> 
>>>>> The tutorial explicitly addresses this. See
>>>>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/08_MD.html
>>>>> 
>>>>> 
>>>>>> Make test  indicates that it sees the GPU, but
>>>>>> 
>>>>>> 1)  is there a way to definitively determine if the make commands were 
>>>>>> run
>>>>>> properly or that Gromacs was installed properly with the GPU
>>>>>> 
>>>>> To know that a simulation ran on the GPU, you must inspect the log file
>>>>> that it wrote.
>>>>> 
>>>>> 
>>>>>> 2) for a linux desktop system with one GPU,  a/the  proper command to run
>>>>>> the lysozyme .  I ask since there is indication that with one GPU  (0) it
>>>>>> will be dedicated to graphics.
>>>>>> 
>>>>> If the 

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
Thank you Justin. 

Not at the Linux system at the moment, but is there anything in particular I 
should look for in the log file?

Paul

> On 4,May 2018, at 12:55 PM, Justin Lemkul  wrote:
> 
> 
> 
> On 5/4/18 1:48 PM, paul buscemi wrote:
>> Thank you for the prompt response. I will check out the link. Regarding 2): one 
>> can immediately determine whether the GPU is running, since GROMACS reports at 
>> startup (in my case) that the CPUs are being used, the Linux system monitor 
>> shows all CPUs busy (not to mention the fan speed), and nvidia-smi registers no 
>> GPU activity.
>> 
>> What I was asking for was the proper format for invoking the GPU. I used
>> gmx mdrun -deffnm md_0_1 -nb gpu   ## this is from the lysozyme tutorial MD page
>> but this starts GROMACS on the CPUs, so is this the proper format for a single 
>> GPU?
> 
> With version 2018, you don't even need to use "-nb gpu," mdrun will 
> automatically run on the GPU if it is detected properly. As Mark said, check 
> your .log file for a full breakdown of how mdrun configured the simulation 
> using the available hardware.
> 
> -Justin
> 
>> thanks
>> Paul
>> 
>>> On 4,May 2018, at 11:57 AM, Mark Abraham  wrote:
>>> 
>>> Hi,
>>> 
>>> On Fri, May 4, 2018 at 6:43 PM paul buscemi >> <mailto:pbusc...@q.com>> wrote:
>>> 
>>>> 
>>>> I’ve been struggling for a for several days to get Gromacs-2018 to use my
>>>> GPU.  I followed the INSTALL instructions ( several times ! ) that are
>>>> provided in the 2018 tarball
>>>> 
>>>> I know that the GPU ( GTX1080)  is installed properly in that it works
>>>> with Schrodinger and the Nvidia self tests.  Gromacs runs the  MD from the
>>>> Virginia Tech (Bevan Lab) lysozyme example normally
>>>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/
>>>> but only on the 8 threads of the  CPU
>>>> 
>>> The tutorial explicitly addresses this. See
>>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/08_MD.html
>>> 
>>> 
>>>> Make test  indicates that it sees the GPU, but
>>>> 
>>>> 1)  is there a way to definitively determine if the make commands were run
>>>> properly or that Gromacs was installed properly with the GPU
>>>> 
>>> To know that a simulation ran on the GPU, you must inspect the log file
>>> that it wrote.
>>> 
>>> 
>>>> 2) for a linux desktop system with one GPU,  a/the  proper command to run
>>>> the lysozyme .  I ask since there is indication that with one GPU  (0) it
>>>> will be dedicated to graphics.
>>>> 
>>> If the display is sharing the GPU, then it shares the GPU (and performance
>>> of either the display or the simulation might be affected). With only one
>>> GPU, there's no option (unless your motherboard has a built-in GPU that
>>> you'd prefer to use for the display).
>>> 
>>> 
>>>> 3) is there an up to data set of install instructions for  Gromacs 2018
>>>> and Nvidia 9.1 toolkit,  384.11 drivers ?
>>>> 
>>> No, there's nothing unusual about that, so the generic instructions apply.
>>> 
>>> Mark
>>> 
>>> 
>>>> Regards
>>>> Paul

Re: [gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi
Thank you for the prompt response. I will check out the link. Regarding 2): one can 
immediately determine whether the GPU is running, since GROMACS reports at startup 
(in my case) that the CPUs are being used, the Linux system monitor shows all CPUs 
busy (not to mention the fan speed), and nvidia-smi registers no GPU activity.

What I was asking for was the proper format for invoking the GPU. I used

gmx mdrun -deffnm md_0_1 -nb gpu   ## this is from the lysozyme tutorial MD page

but this starts GROMACS on the CPUs, so is this the proper format for a single GPU?

thanks
Paul

> On 4,May 2018, at 11:57 AM, Mark Abraham  wrote:
> 
> Hi,
> 
> On Fri, May 4, 2018 at 6:43 PM paul buscemi  <mailto:pbusc...@q.com>> wrote:
> 
>> 
>> 
>> I’ve been struggling for a for several days to get Gromacs-2018 to use my
>> GPU.  I followed the INSTALL instructions ( several times ! ) that are
>> provided in the 2018 tarball
>> 
>> I know that the GPU ( GTX1080)  is installed properly in that it works
>> with Schrodinger and the Nvidia self tests.  Gromacs runs the  MD from the
>> Virginia Tech (Bevan Lab) lysozyme example normally
>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/
>> but only on the 8 threads of the  CPU
>> 
> 
> The tutorial explicitly addresses this. See
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/08_MD.html
> 
> 
>> Make test  indicates that it sees the GPU, but
>> 
>> 1)  is there a way to definitively determine if the make commands were run
>> properly or that Gromacs was installed properly with the GPU
>> 
> 
> To know that a simulation ran on the GPU, you must inspect the log file
> that it wrote.
> 
> 
>> 2) for a linux desktop system with one GPU,  a/the  proper command to run
>> the lysozyme .  I ask since there is indication that with one GPU  (0) it
>> will be dedicated to graphics.
>> 
> 
> If the display is sharing the GPU, then it shares the GPU (and performance
> of either the display or the simulation might be affected). With only one
> GPU, there's no option (unless your motherboard has a built-in GPU that
> you'd prefer to use for the display).
> 
> 
>> 3) is there an up to data set of install instructions for  Gromacs 2018
>> and Nvidia 9.1 toolkit,  384.11 drivers ?
>> 
> 
> No, there's nothing unusual about that, so the generic instructions apply.
> 
> Mark
> 
> 
>> 
>> Regards
>> Paul


[gmx-users] Gromacs 2018 and GPU

2018-05-04 Thread paul buscemi


I’ve been struggling for several days to get GROMACS 2018 to use my GPU. I followed 
the INSTALL instructions (several times!) that are provided in the 2018 tarball.

I know that the GPU (GTX 1080) is installed properly, in that it works with 
Schrodinger and the Nvidia self-tests. GROMACS runs the MD from the Virginia Tech 
(Bevan Lab) lysozyme example normally 
(http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/) 
but only on the 8 threads of the CPU.

Make test indicates that it sees the GPU, but:

1) Is there a way to definitively determine whether the make commands ran properly 
and GROMACS was installed properly with GPU support? (See the sketch after these 
questions.)

2) For a Linux desktop system with one GPU, what is a/the proper command to run the 
lysozyme example? I ask since there is some indication that with one GPU (0) it 
will be dedicated to graphics.


3) is there an up to data set of install instructions for  Gromacs 2018 and 
Nvidia 9.1 toolkit,  384.11 drivers ?
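
On (1), a hedged sketch of two quick checks; the log file name is an example:

gmx --version | grep -i "GPU support"    # build time: was GPU support compiled in?
grep -i "gpu" md_0_1.log | head -n 20    # run time: what did mdrun detect and use?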

Regards
Paul