While I'm not a huge fan of AnandTech, they do occasionally put out a good
article:
http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/3

This sheds some light on what you might be asking about, on why you
sometimes hear that the 580s do better than the 680s, and on why the
latter is considered a crippled card for professional use.

It does omit the fact that factory-overclocked premium 680s, especially
those with higher-clocked memory, actually gain a fair chunk of
performance, and that if you have a 680 that hits 1400 then in some of
those tests, especially short-span ones where the Titan's turbo doesn't
have time to kick in, the 680 will actually take the lead over the Titan
in both raw numbers and power usage.

The only benching I've done was CUDA and number-crunching related, because
I took an interest in it a while ago and still toy with it on and off;
that includes the generic GEMM and FFT tests.
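
The FFT part, for reference, is just timing batched cuFFT transforms. A
minimal sketch; the transform length, batch count, and iteration count are
arbitrary picks of mine, not what any standard suite uses:

#include <cufft.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int n = 1 << 20, batch = 16, iters = 50;  // arbitrary sizes
    const size_t bytes = sizeof(cufftComplex) * (size_t)n * batch;

    cufftComplex *d;
    cudaMalloc((void**)&d, bytes);
    cudaMemset(d, 0, bytes);  // contents don't matter for timing

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, batch);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);

    cufftExecC2C(plan, d, d, CUFFT_FORWARD);  // warm-up run
    cudaEventRecord(t0, 0);
    for (int i = 0; i < iters; ++i)
        cufftExecC2C(plan, d, d, CUFFT_FORWARD);  // in-place forward FFT
    cudaEventRecord(t1, 0); cudaEventSynchronize(t1);

    float ms;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("%.2f ms per batch of %d FFTs of length %d\n",
           ms / iters, batch, n);

    cufftDestroy(plan);
    cudaFree(d);
    return 0;
}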

I don't bother with game benchmarks, 3DMark, or Cinebench, but in single
precision the 680, stock-cooled but overclocked, was consistently bang-on
par with the Titan at a lower power draw.
In double precision, even overclocked, the 680 falls back a fair chunk,
and water-cooled overclocked 580s actually take the lead in bang for buck
by a mile, but with horribly high power draw.
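
The GEMM part, single vs. double precision, looks roughly like this (a
minimal cuBLAS sketch; again, matrix size and iteration count are
arbitrary picks of mine):

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int n = 4096, iters = 20;
    const size_t fbytes = (size_t)n * n * sizeof(float);
    const size_t dbytes = (size_t)n * n * sizeof(double);
    const float  aS = 1.0f, bS = 0.0f;
    const double aD = 1.0,  bD = 0.0;

    float *As, *Bs, *Cs; double *Ad, *Bd, *Cd;
    cudaMalloc((void**)&As, fbytes); cudaMalloc((void**)&Bs, fbytes);
    cudaMalloc((void**)&Cs, fbytes);
    cudaMalloc((void**)&Ad, dbytes); cudaMalloc((void**)&Bd, dbytes);
    cudaMalloc((void**)&Cd, dbytes);
    // Contents don't matter for timing; zero them to avoid NaN slow paths.
    cudaMemset(As, 0, fbytes); cudaMemset(Bs, 0, fbytes);
    cudaMemset(Ad, 0, dbytes); cudaMemset(Bd, 0, dbytes);

    cublasHandle_t h; cublasCreate(&h);
    cudaEvent_t t0, t1; cudaEventCreate(&t0); cudaEventCreate(&t1);
    float msS = 0, msD = 0;

    // Single precision: C = A*B, one warm-up run, then timed iterations.
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &aS, As, n, Bs, n, &bS, Cs, n);
    cudaEventRecord(t0, 0);
    for (int i = 0; i < iters; ++i)
        cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &aS, As, n, Bs, n, &bS, Cs, n);
    cudaEventRecord(t1, 0); cudaEventSynchronize(t1);
    cudaEventElapsedTime(&msS, t0, t1);

    // Double precision: same shapes, same iteration count.
    cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &aD, Ad, n, Bd, n, &bD, Cd, n);
    cudaEventRecord(t0, 0);
    for (int i = 0; i < iters; ++i)
        cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &aD, Ad, n, Bd, n, &bD, Cd, n);
    cudaEventRecord(t1, 0); cudaEventSynchronize(t1);
    cudaEventElapsedTime(&msD, t0, t1);

    // Each GEMM is 2*n^3 flops; ms -> GFLOP/s is flops / (ms * 1e6).
    double flops = 2.0 * n * n * n * iters;
    printf("SGEMM: %.1f GFLOP/s\n", flops / (msS * 1e6));
    printf("DGEMM: %.1f GFLOP/s\n", flops / (msD * 1e6));
    cublasDestroy(h);
    return 0;
}

On the same card, the SGEMM number comes out far higher than the DGEMM
one, which is exactly the single vs. double precision gap above; on a 680
the DP rate is a small fraction of the SP rate by design.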

You can consider the K5000 somewhat closer to the Titan than to the 680.


On Thu, Mar 28, 2013 at 10:16 AM, Jeff McFall <jeff.mcf...@sas.com> wrote:

>  I have been following this thread and have been wondering if the fact
> that the K6X0 and Quadro K5000 are more tuned for single precision is
> making the difference between them and the Titan, which, from what I
> understand, is more tuned for double precision?  Or does that even matter
> for this or other renderers?  I admit my knowledge in this area is pretty scarce.
>
> We just got some K5000s and were trying to get a handle on all of this
> before we purchased them.  We never sorted it out, so we went ahead with
> the K5000s, which seem to be fine so far, but I admit they have not yet
> been pushed for compute or rendering.
>
> jeff
>
> From: softimage-boun...@listproc.autodesk.com
> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nicolas Burtnyk
> Sent: Wednesday, March 27, 2013 6:20 PM
> To: softimage@listproc.autodesk.com
> Subject: Re: Announcing Redshift - Biased GPU Renderer
>
> The TITAN is not a gimmick with respect to Redshift.
>
> It's almost twice as fast as a GTX 670 on all the tests we've run.  We
> don't have a GTX 680, so I don't have the numbers to compare against.
> Pricing-wise, the TITAN costs $1K and the 680 4GB is $550, so the 680
> wins on price/performance ratio (but probably not by a whole lot).  For
> performance/watt, the TITAN wins by a lot.
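>
> (Rough arithmetic behind that, assuming, hypothetically, that a 680 is
> ~15% faster than a 670: TITAN = ~2.0x a 670 for $1000, i.e. 0.0020
> "670-equivalents" per dollar; 680 = ~1.15x for $550, i.e. ~0.0021 per
> dollar. So the 680 edges it on price/performance, but only by a few
> percent.)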
>
> On Wed, Mar 27, 2013 at 3:09 PM, Raffaele Fragapane <
> raffsxsil...@googlemail.com> wrote:
>
> Don't call it a gimmick then (although it is, with all the fashion and hype
> elements around it), call it a singularity, but if you're looking at
> benching and sorting videocards for performance and bang for buck you
> should exclude it. Unless you also want to include that massive
> liquid-cooled Asus Radeon that is sold in a military-grade carrying case,
> and other things like that :)
>
> I've tried it btw, as a friend's shop had a review return they kindly lent
> me for a week (they work closely with GB, since one of the partners is an
> ex-employee and another moonlights reviewing hardware).
> It was hardly a noticeable improvement over the GB OC 680 4GB I had (and
> still have) in there.
>
> The practical performance gains are far, far below 35%. Only the
> added RAM is nice, but nothing justifies a price tag that is more than
> double the 680's. It's a gimmick because you need a serious hardware
> fetish to justify forking out $1250-1400 for it compared to a benched,
> overclocked 680 with 4GB that you can have for $550, with a good chance
> of trivially overclocking it further and narrowing the gap again.
>
> I run a Dell 2711 and an additional 1920x1200 monitor with it, btw.
>
> On Wed, Mar 27, 2013 at 6:55 PM, Tim Leydecker <bauero...@gmx.de> wrote:
>
> The GTX Titan is not a gimmick; it uses the successor to the chip used in
> the GTX 680: the GTX 6xx series uses the GK104, while the GTX Titan uses
> the GK110. You can find the GK110 in the Tesla K20, too.
>
> You could describe the GTX 690 as a gimmick, as it uses two GK104s on one
> card to maximize performance at the cost of higher power consumption,
> noise, and heat.
>
> The performance gain between a GTX 680 and a GTX Titan is roughly 35%
> and can be felt nicely at higher screen resolutions like 1920x1200 or
> 2560x1440 with higher antialiasing in games.
>
> That's where the 6GB of VRAM on the GTX Titan comes in handy, too.
>
> Cheers,
>
> tim
>
>
> On 27.03.2013 05:24, Raffaele Fragapane wrote:
>
> Benchmarking is more about driver tuning than videocard performance, and
> if you want to look at number crunching you should look at the most recent
> gens.
>
> The 680 has brought nVIDIA back on top for number crunching (forgetting
> the silver editions or gimmicks like the Titan), and close enough to best
> bang for buck, but AMD's response to it has yet to come.
>
> Ironically, though, the 6xx gen is reported to be a crippled, bad
> performer in DCC apps, although I can't say I've noticed it myself. It
> sure as hell works admirably well in Mudbox, Mari, and CUDA work, and I've
> had no issues in Maya or Soft. I don't really benchmark or obsess over
> numbers much, though.
>
> When this card becomes obsolete, I will consider AMD again, probably in a
> couple of years.
>
> For GPU rendering though, well, that's something you CAN bench reliably
> with the engine, and AMD might still win the FLOP per dollar run there, so
> it's not to be discounted.
>
> Would be good to know what the Redshift guys have to say about it
> themselves, though, if they can spare the thought and can actually
> disclose.
>
> On Thu, Mar 21, 2013 at 9:04 PM, Mirko Jankovic
> <mirkoj.anima...@gmail.com> wrote:
>
> Well, no idea about pro cards... never really got the financial
> justification to get one; the Quadro 4000 in my old company didn't really
> feel much better than gaming cards, so...
> But in the gaming segment...
> OpenGL scores in Cinebench, for example:
> GTX 580: ~55
> 7970: ~90
>
> to start with...
> Not to mention the annoying issue with a high-segment rotating cube in the
> viewport in SI:
> the 7970 is smooth at ~170 fps,
> while with the GTX 580 before it (to point out that the rest of the comp
> is identical, only the card was switched), the frame rate was stuck at
> something like 17 fps for the first 30-50 sec, and after that it kinda
> jumped to ~70-80 fps...
>
> In any case, with gaming cards, ATI vs. NVIDIA, there is no doubt, and if
> you are not using CUDA much then there's no need to even think about which
> way to go.
> Now Redshift is a game changer, heheh, but I'm still hoping that OpenCL
> will be supported, and I'm looking forward to testing it out with two
> 7970s in CrossFire :)
>
> Btw, I'm not much into programming waters, but is OpenCL programming,
> which as I understand it should work on ALL cards, really that much more
> complex than CUDA, which is limited to NVIDIA only? Wouldn't it be more
> logical to go with a solution that covers a lot more of the market than
> something limited to one manufacturer?
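>
> (For what it's worth, the kernel code itself is nearly identical between
> the two; the gap is mostly host-side boilerplate and the surrounding
> library/tooling ecosystem. A toy vector-add sketch to illustrate, with
> made-up names:
>
> // CUDA kernel: toy vector add
> __global__ void vadd(const float *a, const float *b, float *c, int n) {
>     int i = blockIdx.x * blockDim.x + threadIdx.x;
>     if (i < n) c[i] = a[i] + b[i];
> }
> // launched from the host with: vadd<<<blocks, threads>>>(a, b, c, n);
>
> // OpenCL kernel: the same thing, almost character for character
> __kernel void vadd(__global const float *a, __global const float *b,
>                    __global float *c, int n) {
>     int i = get_global_id(0);
>     if (i < n) c[i] = a[i] + b[i];
> }
> // but the host side needs explicit platform/device/context/queue/program
> // setup that CUDA's runtime API mostly hides.)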
>
>
> On Thu, Mar 21, 2013 at 10:55 AM, Arvid Björn <arvidbj...@gmail.com> wrote:
>
>
> My beef with ATI last time I tried FirePro was that it had a hard time
> locking into 25fps playback in some apps, as if the refresh rate was locked
> to 30/60. Realtime playback in Softimage would stutter annoyingly IIRC.
> Plus it seemed to draw text slightly differently in some apps.
>
> Nvidia just feels... comfy.
>
>
>
> On Thu, Mar 21, 2013 at 5:21 AM, Raffaele Fragapane <
> raffsxsil...@googlemail.com> wrote:
>
> These days if you hit the right combination of drivers and planet
> alignment they are OK.
>
> Performance-wise they have been ahead of nVIDIA for a while in number
> crunching; the main problems are that the drivers are still a coin toss,
> and that OpenCL isn't anywhere near as popular as CUDA.
>
> With win7 or 8 and recent versions of Soft/Maya they can do well.
>
> nVIDIA didn't help by crippling the 6xx for professional use and pissing
> off Linus. They are still ahead by a slight margin, for now, but I
> wouldn't discount AMD wholesale anymore.
>
> If the next generation is as disappointing as Kepler is, and AMD gets
> both Linux support AND decent (and properly OSS) drivers out, I'm moving
> when the time comes for the next upgrade. For now, I recently bought a 680
> because it was kind of mandatory to not go insane with Mari and Mudbox,
> and because I like CUDA and toy with it at home.
>
>
> On Wed, Mar 20, 2013 at 9:58 PM, Dan Yargici <danyarg...@gmail.com> wrote:
>
> "Ati was tested over and over and showing a lot better viewport results
> in Softimage than nvidia... "
>
> Really?  I don't remember anyone ever suggesting ATI was anything other
> than shit!
>
> DAN
>
> --
> Our users will know fear and cower before our software! Ship it! Ship it
> and let them flee like the dogs they are!



-- 
Our users will know fear and cower before our software! Ship it! Ship it
and let them flee like the dogs they are!
