Yup, about a 7-8x difference between runs with and without GPU acceleration;
I am not making this up: I was getting 11 ns/day and now ~80-87 ns/day (the
numbers vary a bit).
I've been getting a similar boost on our GPU-accelerated cluster node (dual
Core i7, 8 cores each) with two Tesla C2075 cards (I am directing my
simulations to one of them via -gpu_id).
All runs use -ntomp 4, with or without the GPU. The physics is perfectly
acceptable in all cases. So far I have only tested my new box on vacuum
simulations; I am about to run the solvated version (~30K particles).
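For concreteness, the comparison above can be expressed as two command lines (a sketch only - the input name vacuum_box is hypothetical; -ntomp, -gpu_id, and -nb are standard mdrun options):

```shell
# GPU-offloaded run: 4 OpenMP threads, non-bonded work sent to GPU 0.
gpu_cmd="gmx mdrun -ntomp 4 -gpu_id 0 -deffnm vacuum_box"
echo "$gpu_cmd"

# CPU-only reference run: -nb cpu forces the non-bonded kernels onto the
# CPU so the two timings are an apples-to-apples comparison.
cpu_cmd="gmx mdrun -ntomp 4 -nb cpu -deffnm vacuum_box"
echo "$cpu_cmd"
```

(The commands are only assembled and echoed here, since actually running them requires a GROMACS installation and input files.)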

Alex


On Wed, Jul 1, 2015 at 6:09 PM, Szilárd Páll <pall.szil...@gmail.com> wrote:

> Hmmm, 8x sounds rather high - are you sure you are comparing against
> CPU-only runs that use properly SIMD-optimized kernels?
>
> Because of the way offload-based acceleration works, the CPU and GPU
> execute concurrently during only part of the run time; as a consequence,
> the GPU sits idle for the rest of the run (during integration +
> constraints). You can make use of this idle time by running multiple
> independent simulations concurrently. This can yield serious improvements
> in _aggregate_ simulation performance, especially with small inputs and
> many cores (see slide 51: https://goo.gl/7DnSri).
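The multi-run trick described above can be sketched like this (illustration only - runA/runB are hypothetical inputs; -pin and -pinoffset are standard mdrun thread-pinning options):

```shell
# Two independent simulations sharing GPU 0, each pinned to its own pair
# of CPU cores so they can interleave work on the otherwise idle GPU.
run_a="gmx mdrun -ntomp 2 -pin on -pinoffset 0 -gpu_id 0 -deffnm runA"
run_b="gmx mdrun -ntomp 2 -pin on -pinoffset 2 -gpu_id 0 -deffnm runB"
echo "$run_a"
echo "$run_b"
# Started in the background ("$run_a & $run_b & wait"), both time-share
# the GPU, which can raise aggregate ns/day for small systems.
```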
>
> --
> Szilárd
>
> On Wed, Jul 1, 2015 at 4:16 AM, Alex <nedoma...@gmail.com> wrote:
>
>>  I am happy to say that I am getting an 8-fold increase in simulation
>> speeds for $200.
>>
>>
>> An additional question: normally, how many simulations (separate mdruns
>> on separate CPU cores) can be performed simultaneously on a single GPU?
>> Say, for 20-40K particle sized simulations.
>>
>> The coolers are not even spinning during a single test (mdrun -ntomp 4),
>> and I get massive acceleration. They aren't broken; the card just stays
>> cool (small system, ~3K particles).
>>
>>
>> Thanks,
>>
>>
>> Alex
>>
>>
>>
>> Ah, ok, so you can get a 6-pin from the PSU and another from a converted
>> molex connector. That should be just fine, especially as the card should
>> not pull more than ~155W (under heavy graphics load) based on the
>> Tomshardware review*, and you are providing 225W max.
>>
>>
>>
>> *
>> http://www.tomshardware.com/reviews/evga-super-super-clocked-gtx-960,4063-3.html
>>
>>
>>
>>
>> --
>>
>> Szilárd
>>
>>
>>
>> On Tue, Jun 30, 2015 at 7:31 PM, Alex <nedoma...@gmail.com> wrote:
>>
>>
>> Well, I don't have one like this. What I have instead is this:
>>
>>
>> 1. A single 6-pin directly from the PSU.
>>
>> 2. A single molex to 6-pin (my PSU does provide one molex).
>>
>> 3. Two 6-pins to a single 8-pin converter going to the card.
>>
>>
>> In other words, I can populate both 6-pins on the 6-to-8 converter; I'm
>> just not sure about the pinouts in this case.
>>
>>
>> Not good?
>>
>>
>> Alex
>>
>>
>>
>>
>> What I meant is this: http://goo.gl/8o1B5P
>>
>>
>> That is 2x molex -> 8-pin PCI-E. A single molex may not be enough.
>>
>>
>>
>> --
>>
>> Szilárd
>>
>>
>>
>> On Tue, Jun 30, 2015 at 7:10 PM, Alex <nedoma...@gmail.com> wrote:
>>
>>
>> It is a 4-core CPU, single-GPU box, so I doubt I will be running more
>> than one at a time. We will very likely get a different PSU, unless...
>> I do have a molex to 6-pin converter sitting on this very desk. Do you
>> think it will satisfy the card? I just don't know how much a single
>> molex line delivers. If you feel this should work, off to installing
>> everything I go.
>>
>> Thanks a bunch,
>> Alex
>>
>>
>> SP> First of all, unless you run multiple independent simulations on the
>> SP> same GPU, GROMACS runs alone will never get anywhere near the peak
>> SP> power consumption of the GPU.
>>
>> SP> The good news is that NVIDIA has gained some sanity and stopped
>> SP> blocking GeForce GPU info in nvidia-smi - although only for newer
>> SP> cards - and it does work with the 960 if you use a 352.xx driver:
>>
>> SP> +------------------------------------------------------+
>> SP> | NVIDIA-SMI 352.21     Driver Version: 352.21         |
>> SP> |-------------------------------+----------------------+----------------------+
>> SP> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
>> SP> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
>> SP> |===============================+======================+======================|
>> SP> |   0  GeForce GTX 960     Off  | 0000:01:00.0      On |                  N/A |
>> SP> |  8%   45C    P5    15W / 130W |   1168MiB /  2044MiB |     31%      Default |
>> SP> +-------------------------------+----------------------+----------------------+
>>
>>
>> SP> A single 6-pin can deliver 75W, an 8-pin 150W, so in your case the
>> SP> hard limit of what your card can pull is 75W from the PCI-E slot +
>> SP> 150W from the cable = 225W. With a single 6-pin cable you'll only get
>> SP> ~150W max. That can be OK if your card does not pull more power (e.g.
>> SP> the above non-overclocked card would be just fine), but as your card
>> SP> is overclocked, I'm not sure it won't peak above 150W.
>>
>> SP> You can try to get a molex -> PCI-E power cable converter.
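Spelling out the power budget above (per-connector limits as quoted in the thread, matching the PCI-E spec):

```shell
slot=75        # W deliverable through the PCI-E x16 slot itself
six_pin=75     # W per 6-pin PCI-E power cable
eight_pin=150  # W via an 8-pin connector (here: two 6-pins -> 8-pin adapter)

full=$((slot + eight_pin))    # both 6-pins of the adapter populated
single=$((slot + six_pin))    # only one 6-pin populated
echo "full budget: ${full}W, single 6-pin: ${single}W"
```

This reproduces the 225W vs ~150W figures in the discussion above.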
>>
>>
>>
>> SP> --
>>
>> SP> Szilárd
>>
>>
>>
>> SP> On Mon, Jun 29, 2015 at 9:56 PM, Alex <nedoma...@gmail.com> wrote:
>>
>>
>> >> Hi all,
>> >>
>> >> I have a bit of a gromacs-unrelated question here, but I think this is
>> >> a better place to ask it than, say, a gaming forum. The Nvidia GTX 960
>> >> card we got here came with an 8-pin AUX connector on the card side,
>> >> which interfaces _two_ 6-pin connectors to the PSU. It is a factory
>> >> superclocked card. My 525W PSU can only populate _one_ of those 6-pin
>> >> connectors. The EVGA website states that I need at least a 400W PSU,
>> >> while I have 525W.
>> >>
>> >> At the same time, I have a dedicated high-power PCI-e slot, which the
>> >> motherboard labels "75W PCI-e". Do I need a different PSU to populate
>> >> the AUX power connector completely? Are these runs equivalent to
>> >> drawing max power during gaming?
>> >>
>> >> Thanks!
>> >>
>> >> Alex
>>
>> >> --
>>
>> >> Gromacs Users mailing list
>>
>> >>
>>
>> >> * Please search the archive at
>>
>> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>>
>> >> posting!
>>
>> >>
>>
>> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> >>
>>
>> >> * For (un)subscribe requests visit
>>
>> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>
>> >> send a mail to gmx-users-requ...@gromacs.org.
>>
>> >>
>>
>>
>> --
>> Best regards,
>>  Alex                            mailto:nedoma...@gmail.com
>>
>
>