Whoops --- not moo (iPhone took over) but MPI....

> On Aug 2, 2019, at 5:09 PM, Paul Buscemi <pbusc...@q.com> wrote:
>
> Why run moo on a single node?
>
> PB
>
>> On Aug 1, 2019, at 5:53 PM, Mark Abraham <mark.j.abra...@gmail.com> wrote:
>>
>> Hi,
>>
>> We can't tell what the problem is without more information. Please upload
>> your .log file to a file-sharing service and post a link.
>>
>> Mark
>>
>>> On Fri, 2 Aug 2019 at 01:05, Maryam <maryam.kow...@gmail.com> wrote:
>>>
>>> Dear all,
>>>
>>> I want to run a simulation of a system of approximately 12,000 atoms in
>>> GROMACS 2016.6 on a GPU, with the following machine configuration:
>>>
>>> Precision: single
>>> Memory model: 64 bit
>>> MPI library: thread_mpi
>>> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
>>> GPU support: CUDA
>>> SIMD instructions: AVX2_256
>>> FFT library: fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512
>>> RDTSCP usage: enabled
>>> TNG support: enabled
>>> Hwloc support: disabled
>>> Tracing support: disabled
>>> Built on: Fri Jun 21 09:58:11 EDT 2019
>>> Built by: julian@BioServer [CMAKE]
>>> Build OS/arch: Linux 4.15.0-52-generic x86_64
>>> Build CPU vendor: AMD
>>> Build CPU brand: AMD Ryzen 7 1800X Eight-Core Processor
>>> Build CPU family: 23   Model: 1   Stepping: 1
>>> Number of GPUs detected: 1
>>>   #0: NVIDIA GeForce RTX 2080 Ti, compute cap.: 7.5, ECC: no, stat: compatible
>>>
>>> I have tried different commands to get the best performance, and I don't
>>> know what I am missing. The fastest run so far comes from this command:
>>>
>>> gmx mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v
>>>
>>> which gives only 10 ns/day, so the run would take two months to finish.
>>> I have also tried several commands to tune it, such as:
>>>
>>> gmx mdrun -ntomp 6 -pin on -resethway -nstlist 20 -s md.tpr -deffnm md
>>> -cpi md.cpt -tunepme -cpt 15 -append -gpu_id 0 -nb auto
>>>
>>> On the GROMACS website it is mentioned that with this hardware I should
>>> be able to reach 295 ns/day!
>>> Could you help me find out what I am missing, so that I can reach the
>>> best performance?
>>>
>>> Thank you
>>> --
>>> Gromacs Users mailing list
>>>
>>> * Please search the archive at
>>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>> send a mail to gmx-users-requ...@gromacs.org.
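The tuning flags discussed in the thread can be sketched as a short benchmarking command. This is a non-authoritative sketch, not a recommendation from the list: it assumes one thread-MPI rank on the single Ryzen 1800X node with the RTX 2080 Ti, and the thread count `NTOMP=8` is an arbitrary starting value to benchmark (try 4, 6, 8, and 16). All flags used (`-nb`, `-ntmpi`, `-ntomp`, `-pin`, `-resethway`, `-nsteps`) exist in GROMACS 2016 `mdrun`; the command string is only echoed here so it can be inspected before a real short run.

```shell
# Hypothetical single-node, single-GPU benchmark sketch (GROMACS 2016.x).
# Assumptions: 1 thread-MPI rank, nonbonded work offloaded to the GPU,
# threads pinned to cores; NTOMP is a guess to be benchmarked, not a fact.
NTOMP=8                              # try 4 / 6 / 8 / 16 and compare ns/day
NSTEPS=10000                         # short run; -resethway makes timings fairer

CMD="gmx mdrun -s md.tpr -deffnm md -nb gpu -ntmpi 1 -ntomp ${NTOMP} -pin on -resethway -nsteps ${NSTEPS}"

# Echo rather than execute, so the invocation can be checked first:
echo "$CMD"
```

Repeating the echoed command with different `NTOMP` values and comparing the ns/day reported at the end of each `md.log` is a simple way to find the sweet spot; for a system this small (~12k atoms), a single fast rank with pinned threads often outperforms more elaborate decompositions.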