On Thu, Dec 13, 2018 at 10:12 PM <pbusc...@q.com> wrote:

> Szilard,
>
> I get an "unknown command" error for gpustasks in:
>
> mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING
>

TASKSTRING is a placeholder: it stands for the manual mapping of GPU tasks to
GPU hardware. Also note that the option is spelled -gputasks; "-gpustasks" in
my earlier message was a typo, hence the "unknown command" error you saw.
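
For example (just a sketch, assuming two GPUs with IDs 0 and 1): with
-ntmpi 4 and -npme 1 there are four GPU tasks, so the string has four digits:

gmx mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -gputasks 0001

Following the "N-1 zeros and the last 1" pattern quoted below, this should put
the three short-range nonbonded tasks on GPU 0 and the PME task on GPU 1.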


>
> where typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
> TASKSTRING should have N digits with either N-1 zeros and the last 1,
> or N-2 zeros and the last two 1, i.e..

Would you please complete the "i.e. .."?
>

What's best depends on your hardware, so this is manual tuning territory
and it is hard to give a general recipe. What you can try, assuming e.g.
16 cores/32 threads and two GPUs:
gmx mdrun -ntmpi 8 -npme 1 -ntomp 4 -nb gpu -pme gpu -gputasks 00000011
gmx mdrun -ntmpi 8 -npme 1 -ntomp 4 -nb gpu -pme gpu -gputasks 00000111
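
Each digit assigns one GPU task to a device ID, in task order; with these
settings that is one digit per short-range nonbonded task, with the PME task
last. So 00000011 should put six nonbonded tasks on GPU 0 and one nonbonded
task plus PME on GPU 1, while 00000111 moves one more nonbonded task over to
GPU 1.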

As noted before, don't expect this to be more than ~1.5x faster than using
a single GPU.
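
For the single-GPU baseline to compare against, something like this (adjust
-ntomp to your core count):

gmx mdrun -ntmpi 1 -ntomp 16 -nb gpu -pme gpu -gputasks 00

where the two digits map the single rank's nonbonded and PME tasks both to
GPU 0.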


> Thanks again,
> Paul
>
>
>
> -----Original Message-----
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of paul
> buscemi
> Sent: Tuesday, December 11, 2018 5:56 PM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] using dual CPU's
>
> Szilard,
>
> Thank you very much for the information, and I apologize for how the text
> appeared - internet demons at work.
>
> The computer described in the log files is a basic test rig which we use
> to iron out models. The workhorse is a many-core AMD with one 2080 Ti now,
> and hopefully soon two. It will have to handle several 100k particles, and
> at the moment I do not think the simulation could be divided. These are
> essentially multi-component ligand adsorptions from solution onto a
> substrate, including evaporation of the solvent.
>
> I saw from a 2015 paper from your group, “Best bang for your buck: GPU
> nodes for GROMACS biomolecular simulations”, that I should expect maybe a
> 50% improvement for 90k atoms (with 2x GTX 970). What bothered me in my
> initial attempts was that my simulations became slower when I added the
> second GPU - it was frustrating, to say the least.
>
> I’ll give your suggestions a good workout, and report on the results when
> I hack it out.
>
> Best,
> Paul
>
> > On Dec 11, 2018, at 12:14 PM, Szilárd Páll <pall.szil...@gmail.com> wrote:
> >
> > Without having read all details (partly due to the hard to read log
> > files), what I can certainly recommend is: unless you really need to,
> > avoid running single simulations with only a few 10s of thousands of
> > atoms across multiple GPUs. You'll be _much_ better off using your
> > limited resources by running a few independent runs concurrently. If
> > you really need to get maximum single-run throughput, please check
> > previous discussions on the list on my recommendations.
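
To make that concrete, here is a sketch of two independent runs side by side,
assuming 16 cores, two GPUs, and input files named run1/run2 (adjust the
pinning to your CPU topology):

gmx mdrun -deffnm run1 -ntmpi 1 -ntomp 8 -nb gpu -pme gpu -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
gmx mdrun -deffnm run2 -ntmpi 1 -ntomp 8 -nb gpu -pme gpu -gpu_id 1 -pin on -pinoffset 8 -pinstride 1 &

The -pinoffset/-pinstride options keep the two runs on separate cores so they
don't compete for the same ones.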
> >
> > Briefly, what you can try for 2 GPUs is (do compare against the
> > single-GPU runs to see if it's worth it):
> > mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING where
> > typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
> > TASKSTRING should have N digits with either N-1 zeros and the last 1
> > or N-2 zeros and the last two 1, i.e..
> >
> > I suggest sharing files using a cloud storage service like Google
> > Drive, Dropbox, etc., or a dedicated text sharing service like
> > paste.ee, pastebin.com, or termbin.com -- the latter is especially
> > handy for those who don't want to leave the command line just to
> > upload one or several files for sharing (e.g. try: echo "foobar" | nc
> > termbin.com 9999).
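
For instance, to share just the end of a log file:

tail -n 150 md.log | nc termbin.com 9999

termbin replies with a short URL you can paste here (md.log is just an
example filename).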
> >
> > --
> > Szilárd
> > On Tue, Dec 11, 2018 at 2:44 AM paul buscemi <pbusc...@q.com> wrote:
> >>
> >>
> >>
> >>> On Dec 10, 2018, at 7:33 PM, paul buscemi <pbusc...@q.com> wrote:
> >>>
> >>>
> >>> Mark, attached are the tail ends of three log files for the same
> >>> system, run on an AMD 8-core/16-thread 2700X with 16 GB RAM. In
> >>> summary: for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4), the
> >>> rates are 12.0, 8.8, and 6.0 ns/day.
> >>> Clearly, I do not have a handle on using 2 GPUs.
> >>>
> >>> Thank you again, and I'll keep probing the web for more understanding.
> >>> I’ve probably sent too much of the log; let me know if this is the
> >>> case.
> >> A better way to share files - where is that, friend?
> >>>
> >>> Paul
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
