[gmx-users] remd error

2019-07-17 Thread Bratin Kumar Das
Hi, I am running a REMD simulation in GROMACS 2016.5. After generating the multiple .tpr files, one per directory, with the following command *for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done* I run *mpirun -np 80 gmx_mpi mdrun -s
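[For reference, a complete REMD invocation typically launches all replicas with -multidir and sets the exchange attempt interval with -replex. A sketch, assuming the eight directories from the loop above, a common .tpr name across them (the loop as written produces remd0.tpr ... remd7.tpr, which -multidir cannot address with a single -s), and a placeholder exchange interval of 500 steps:

    mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multidir equil0 equil1 equil2 equil3 equil4 equil5 equil6 equil7 -replex 500

Here -multidir runs one replica per directory and -replex is the number of steps between exchange attempts.]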

Re: [gmx-users] Warning

2019-07-17 Thread Bratin Kumar Das
Hi Mark, It came from the grompp command... In the .mdp file I have constraints = all-bonds. Is the warning coming from that? On Mon, Jul 15, 2019 at 4:57 PM Quyen Vu wrote: > Hi, > I think he got this warning because he did not constrain the H-bonds in his > simulation while using a timestep of 2 fs >
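[Quyen's suggestion corresponds to .mdp settings along these lines; a minimal sketch with the common values for a 2 fs setup, not the poster's actual file:

    dt                   = 0.002    ; 2 fs timestep
    constraints          = h-bonds  ; constrain bonds involving hydrogen
    constraint-algorithm = lincs]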

Re: [gmx-users] Drude force field

2019-07-17 Thread Justin Lemkul
On Wed, Jul 17, 2019 at 8:58 PM Myunggi Yi wrote: > Thank you Dr. Lemkul, > > I don't have ions in my simulation. It's a neutral system with a protein in > a membrane bilayer with solvent. > I have downloaded the force field (the Drude FF for CHARMM in GROMACS > format) to run the simulation with c

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Alex
Perfect, thanks a lot! We are less constrained by cost, so we'll go straight to the 2080 Ti. You guys already saved us a few grand here. ;) Alex On 7/17/2019 7:34 PM, Moir, Michael (MMoir) wrote: Alex, The motherboard I am using with the 9900K is the ASUS WS Z390 PRO. The PRO version has the extr

Re: [gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
I've got the following error. It seems this version of GROMACS does not recognize the Drude force field format distributed by MacKerell. There are additional terms like [anisotropic_polarization], etc. GROMACS reads these as residue names. Program gmx, VERSION 5.0.7 Source code file: /data/c

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Moir, Michael (MMoir)
Alex, The motherboard I am using with the 9900K is the ASUS WS Z390 PRO. The PRO version has the extra PCIe controller. I am using two GTX 1070 Ti GPUs, and I can hear everyone snorting with derision, but with this configuration I get about 72 ns/day with 100,000 atoms with 2019.

Re: [gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
Thank you Dr. Lemkul, I don't have ions in my simulation. It's a neutral system with a protein in a membrane bilayer with solvent. I have downloaded the force field (the Drude FF for the CHARMM FF in GROMACS format) to run the simulation with the CHARMM FF in "GROMACS 2019.3". However, it seems the format of t

Re: [gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
Thank you. On Thu, Jul 18, 2019 at 9:43 AM Justin Lemkul wrote: > > > On 7/17/19 8:39 PM, Myunggi Yi wrote: > > Dear users, > > > > I want to run a simulation with a polarizable force field. > > > > How and where can I get the Drude force field for the current version of > > GROMACS? > > Everything

Re: [gmx-users] Drude force field

2019-07-17 Thread Justin Lemkul
On 7/17/19 8:39 PM, Myunggi Yi wrote: Dear users, I want to run a simulation with a polarizable force field. How and where can I get the Drude force field for the current version of GROMACS? Everything you need to know: http://mackerell.umaryland.edu/charmm_drude_ff.shtml The implementation

[gmx-users] Drude force field

2019-07-17 Thread Myunggi Yi
Dear users, I want to run a simulation with a polarizable force field. How and where can I get the Drude force field for the current version of GROMACS? Thank you.

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Moir, Michael (MMoir)
Certainly. When I get home this evening I will post the information. Mike

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Alex
Gentlemen, thank you both! Michael, would you be able to suggest a specific motherboard that removes the bottleneck? We aren't really limited by price in this case and would prefer to get every bit of benefit out of the processing components, if possible. Thanks, Alex On 7/17/2019 10:44 AM

[gmx-users] make manual fails

2019-07-17 Thread Michael Brunsteiner
hi, so I say:

    prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7 -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin -DGMX_BUILD_MANUAL=on
    prompt> make -j 4
    prompt> make install
    prompt> make manual

manual cannot be built because Sphinx
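[Building the manual requires Sphinx to be detected when CMake configures the tree (and the PDF additionally needs a LaTeX installation). A possible fix, assuming the truncated message continues along the lines of "...Sphinx is not available"; the pip invocation is a sketch, not a verified recipe for this machine:

    prompt> pip install --user sphinx
    prompt> cmake ..        # re-run configuration so CMake detects Sphinx
    prompt> make manual]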

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Moir, Michael (MMoir)
This is not quite true. I certainly observed the degradation in performance that Szilárd describes, using the 9900K with two GPUs on a motherboard with one PCIe controller, but the limitation comes from the motherboard, not from the CPU. It is possible to obtain a motherboard that contains two PCIe

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Szilárd Páll
Hi Alex, I've not had a chance to test the new 3rd gen Ryzen CPUs, but all public benchmarks indicate that they are a major improvement over the previous generation Ryzen -- which were already quite competitive for GPU-accelerated GROMACS runs compared to Intel, especially in pe

Re: [gmx-users] decreased performance with free energy

2019-07-17 Thread Szilárd Páll
Hi, Lower performance, especially with GPUs, is not unexpected, but what you report is unusually large. I suggest you post your .mdp and log files; perhaps there are some things to improve. -- Szilárd On Wed, Jul 17, 2019 at 3:47 PM David de Sancho wrote: > Hi all > I have been doing some testing fo
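[When posting, the .mdp parameters that most often matter for free-energy performance are the ones below; a sketch with real GROMACS parameter names but placeholder values, not David's settings:

    free-energy   = yes
    sc-alpha      = 0.5    ; soft-core potential for the perturbed interactions
    nstdhdl       = 100    ; dH/dlambda output interval; small values cost speed
    nstcalcenergy = 100    ; frequent global energy evaluation also costs speed

In the 2018 series the perturbed nonbonded interactions also run on the CPU even when the rest of the nonbondeds are offloaded to the GPU, which is one common source of exactly this kind of slowdown.]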

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Szilárd Páll
On Wed, Jul 17, 2019 at 2:13 PM Stefano Guglielmo <stefano.guglie...@unito.it> wrote: > Hi Benson, > thanks for your answer and sorry for my delay: in the meantime I had to > restore the OS. I obviously re-installed the NVIDIA driver (430.64) and CUDA > 10.1, and re-compiled GROMACS 2019.2 with the fol

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Szilárd Páll
On Wed, Jul 10, 2019 at 2:18 AM Stefano Guglielmo <stefano.guglie...@unito.it> wrote: > Dear all, > I have a CentOS machine equipped with two RTX 2080 cards, with NVIDIA > drivers 430.2; I installed the CUDA toolkit 10.1. When executing mdrun, the log > reported the following message: > > GROMACS vers

[gmx-users] decreased performance with free energy

2019-07-17 Thread David de Sancho
Hi all, I have been doing some testing of Hamiltonian replica exchange using GROMACS 2018.3 on a relatively simple system with 3000 atoms in a cubic box. For the modified Hamiltonian I have simply modified the water interactions by generating a typeB atom in the force field ffnonbonded.itp with dif
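[For context on the mechanism: an alternative B-state type defined in ffnonbonded.itp is referenced from the typeB/chargeB/massB columns of the topology's [ atoms ] section. A sketch with a hypothetical OWB water type; names and numbers are illustrative, not the actual modified parameters:

    ; in ffnonbonded.itp: a new type with perturbed LJ parameters
    [ atomtypes ]
    ; name  at.num  mass      charge  ptype  sigma        epsilon
    OWB     8       15.99940  0.000   A      3.15365e-01  5.00000e-01

    ; in the water topology: A-state type OW, B-state type OWB
    [ atoms ]
    ; nr  type  resnr  res  atom  cgnr  charge    mass      typeB  chargeB  massB
      1   OW    1      SOL  OW    1     -0.8476   15.99940  OWB    -0.8476  15.99940]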

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Stefano Guglielmo
Hi Benson, thanks for your answer and sorry for my delay: in the meantime I had to restore the OS. I obviously re-installed the NVIDIA driver (430.64) and CUDA 10.1, and re-compiled GROMACS 2019.2 with the following command: cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_SIMD=AVX2_256 -DGMX_GPU=ON -DCUDA_TOOLKI

Re: [gmx-users] heat capacity collection

2019-07-17 Thread Amin Rouy
Thank you David, but I just had an issue with the output file. I wanted to have the heat capacities as a data file, which I guess GROMACS does not provide a separate output file for. So I managed to do that with my own script (calculating the heat capacity from the energy and temperature output data).
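[For reference, gmx energy can also report heat capacity directly from the energy fluctuations, Cv = (<E^2> - <E>^2) / (kB <T>^2), which is presumably what the script computes from the time series. A sketch; the selected terms are the common names and may differ in a given .edr file, the -nmol value is a placeholder, and since the result is only printed to the terminal it is redirected to capture it:

    echo "Temperature Total-Energy" | gmx energy -f ener.edr -fluct_props -nmol 1000 > heat_capacity.log]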