Hi,

On Tue, Nov 6, 2012 at 12:03 AM, Thomas Evangelidis <teva...@gmail.com> wrote:

> Hi,
>
> I get these two warnings when I run the dhfr/GPU/dhfr-solv-PME.bench
> benchmark with the following command line:
>
> mdrun_intel_cuda5 -v -s topol.tpr -testverlet
>
> "WARNING: Oversubscribing the available 0 logical CPU cores with 1
> thread-MPI threads."
>
> 0 logical CPU cores? Isn't this bizarre? My CPU is an Intel Core
> i7-3610QM (2.3 GHz).

That is bizarre. Could you run with "-debug 1" and have a look at the
mdrun.debug output, which should contain a message like:
"Detected N processors, will use this as the number of supported
hardware threads."
I'm wondering, is N=0 in your case!?
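Concretely, something like this should do it, assuming a Linux machine
and that the debug output ends up as mdrun.debug in the working
directory:

  mdrun_intel_cuda5 -debug 1 -s topol.tpr -testverlet
  grep "Detected" mdrun.debug

It is also worth cross-checking what the OS itself reports, e.g. with
"nproc" or "getconf _NPROCESSORS_ONLN"; if those say 8 but mdrun
reports 0, the problem is in the hardware detection rather than in
your machine.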
> Unlike Albert, I don't see any performance loss: I get 13.4 ns/day on a
> single core with 1 GPU, and 13.2 ns/day with GROMACS v4.5.5 on 4 cores
> (8 threads) without the GPU. Yet, I don't see any performance gain with
> more than 4 -nt threads:
>
> mdrun_intel_cuda5 -v -nt 2 -s topol.tpr -testverlet : 15.4 ns/day
> mdrun_intel_cuda5 -v -nt 3 -s topol.tpr -testverlet : 16.0 ns/day
> mdrun_intel_cuda5 -v -nt 4 -s topol.tpr -testverlet : 16.3 ns/day
> mdrun_intel_cuda5 -v -nt 6 -s topol.tpr -testverlet : 16.2 ns/day
> mdrun_intel_cuda5 -v -nt 8 -s topol.tpr -testverlet : 15.4 ns/day
>
> I guess there is not much point in not using all cores, is there?

Note that the performance drops after 4 threads because Hyper-Threading
with OpenMP doesn't always help.
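The i7-3610QM has 4 physical cores with 2 hardware threads each; you
can see the split on Linux with something like:

  lscpu | grep -E '^CPU\(s\)|Thread|Core|Socket'

which should report 8 logical CPUs as 1 socket x 4 cores x 2 threads
per core. Going by your numbers, -nt 4 (one thread per physical core)
looks like the sweet spot for this run.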
> I have also attached my log file (from "mdrun_intel_cuda5 -v -s
> topol.tpr -testverlet") in case you find it helpful.

I don't see it attached.

--
Szilárd

> Thanks,
> Thomas
>
>
> On 5 November 2012 18:54, Szilárd Páll <szilard.p...@cbr.su.se> wrote:
>
> > The first warning indicates that you are starting more threads than
> > the hardware supports, which would explain the poor performance.
> >
> > Could you share a log file of the suspiciously slow run, as well as
> > the command line you used to start mdrun?
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> >
> > On Sun, Nov 4, 2012 at 5:32 PM, Albert <mailmd2...@gmail.com> wrote:
> >
> > > Well, I see.
> > > The performance is rather poor compared to the GTX 590: 32 ns/day
> > > vs. 4 ns/day. Perhaps that's also related to the warnings?
> > >
> > > Thanks
> > >
> > >
> > > On 11/04/2012 01:59 PM, Justin Lemkul wrote:
> > >
> > > > On 11/4/12 4:55 AM, Albert wrote:
> > > >
> > > > > Hello:
> > > > >
> > > > > I am running GROMACS 4.6 GPU on a workstation with two GTX 660 Ti
> > > > > (2 x 1344 CUDA cores), and I got the following warnings:
> > > > >
> > > > > Thank you very much.
> > > > >
> > > > > --------------------------- messages ---------------------------
> > > > >
> > > > > WARNING: On node 0: oversubscribing the available 0 logical CPU
> > > > > cores per node with 2 MPI processes.
> > > > > This will cause considerable performance loss!
> > > > >
> > > > > 2 GPUs detected on host boreas:
> > > > >   #0: NVIDIA GeForce GTX 660 Ti, compute cap.: 3.0, ECC: no,
> > > > >       stat: compatible
> > > > >   #1: NVIDIA GeForce GTX 660 Ti, compute cap.: 3.0, ECC: no,
> > > > >       stat: compatible
> > > > >
> > > > > 2 GPUs auto-selected to be used for this run: #0, #1
> > > > >
> > > > > Using CUDA 8x8x8 non-bonded kernels
> > > > > Making 1D domain decomposition 1 x 2 x 1
> > > > >
> > > > > * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING *
> > > > > We have just committed the new CPU detection code in this branch,
> > > > > and will commit new SSE/AVX kernels in a few days. However, this
> > > > > means that currently only the NxN kernels are accelerated!
> > > > > In the meantime, you might want to avoid production runs in 4.6.
> > > >
> > > > I can't address the first warning, but the second is fairly
> > > > obvious. You're not using an official release, you're using the
> > > > development version - let the user beware. The code is not yet
> > > > production-ready.
> > > >
> > > > -Justin
>
>
> --
> ======================================================================
> Thomas Evangelidis
> PhD student
> University of Athens
> Faculty of Pharmacy
> Department of Pharmaceutical Chemistry
> Panepistimioupoli-Zografou
> 157 71 Athens
> GREECE
>
> email: tev...@pharm.uoa.gr
>        teva...@gmail.com
>
> website: https://sites.google.com/site/thomasevangelidishomepage/

--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists