SLI and CrossFire do not affect viewport performance in any 3D
application.



On Wed, Mar 27, 2013 at 11:12 AM, olivier jeannel
<olivier.jean...@noos.fr>wrote:

> There was a thread on the Redshift forum about having two graphics cards.
> It seems to be possible to keep a Quadro for display (as it is
> significantly better at displaying) and have a Titan dedicated to
> rendering only (in Redshift you select which card renders), as they
> have a huge number of cores and faster memory.
> I think I've read somewhere that the Titan has 2,600 cores against 256 for
> the Quadro 4000.
> After chatting with Nicolas, the Titan could be around four times faster
> than the Quadro 4000... which is huge :)
>
> On 27/03/2013 09:26, Tim Leydecker wrote:
>
>> Personally, I'm hesitant to use two or more cards with SLI
>> because of micro stuttering:
>> http://en.wikipedia.org/wiki/Micro_stuttering
>>
>> If there were a solution to that, I'd go with two GTX 670s w/4GB VRAM,
>> as they use the same GK104 with a 915MHz chip speed instead of the
>> 1006MHz chip speed of the reference-design GTX 680. That could save
>> another 15-35% of the investment compared to two single-chip GTX 680
>> cards or one GTX Titan.
>>
>> Overclocked versions may use slightly different chip/shader speeds.
>>
>> In any case, get as much VRAM as available, as that always helps in many
>> programs like Mudbox and Redshift and isn't much of an added cost
>> (comparing 2GB vs 4GB).
>>
>> At a company I worked at, Mari 1.5.x was temperamental unless it was
>> given a Quadro or forced to ignore the actual card's gaming heritage.
>> But that may have been solved in 2.0...
>>
>> Cheers,
>>
>> tim
>>
>>
>>
>> On 27.03.2013 08:59, Mirko Jankovic wrote:
>>
>>> On the other hand, the Titan is more expensive than two GTX 680s if I'm
>>> not mistaken... and I bet that with two 680s in SLI, once multi-GPU is
>>> supported, you will get better performance than with one Titan, right?
>>>
>>>
>>> On Wed, Mar 27, 2013 at 8:55 AM, Tim Leydecker <bauero...@gmx.de> wrote:
>>>
>>>  The GTX Titan is not a gimmick but uses the successor to the chip series
>>>> used in the GTX 680, e.g. the GT(X) 6xx series uses the GK104, while
>>>> the GTX Titan uses the GK110. You can find the GK110 in the Tesla K20,
>>>> too.
>>>>
>>>> You could describe the GTX 690 as a gimmick, as it uses two GK104s on
>>>> one card to maximize performance at the cost of higher power
>>>> consumption, noise and heat.
>>>>
>>>> The performance gain between a GTX 680 and a GTX Titan is roughly 35%
>>>> and can be felt nicely when using it with higher screen resolutions
>>>> like 1920x1200 or 2560x1440 and higher antialiasing in games.
>>>>
>>>> That's where the 6GB of VRAM on the GTX Titan comes in handy, too.
>>>>
>>>> Cheers,
>>>>
>>>> tim
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 27.03.2013 05:24, Raffaele Fragapane wrote:
>>>>
>>>>> Benchmarking is more about driver tuning than it is video card
>>>>> performance, and if you want to look at number crunching you should
>>>>> look at the most recent gens.
>>>>>
>>>>> The 680 has brought nVIDIA back to the top for number crunching
>>>>> (forgetting the silver editions or gimmicks like the Titan), and close
>>>>> enough to best bang for buck, but AMD's response to that has still to
>>>>> come.
>>>>>
>>>>> Ironically, though, the 6xx gen is reported as a crippled, bad
>>>>> performer in DCC apps, although I can't say I noticed it myself. It
>>>>> sure as hell works admirably well in Mudbox, Mari and CUDA work, and
>>>>> I've had no issues in Maya or Soft. I don't really benchmark or obsess
>>>>> over numbers much though.
>>>>>
>>>>> When this one becomes obsolete, I will consider AMD again, probably in
>>>>> a couple of years.
>>>>>
>>>>> For GPU rendering, though, well, that's something you CAN bench
>>>>> reliably with the engine, and AMD might still win the FLOPS-per-dollar
>>>>> race there, so it's not to be discounted.
>>>>>
>>>>> It would be good to know what the Redshift guys have to say about it
>>>>> themselves, though, if they can spare the thought and can actually
>>>>> disclose.
>>>>>
>>>>> On Thu, Mar 21, 2013 at 9:04 PM, Mirko Jankovic
>>>>> <mirkoj.anima...@gmail.com> wrote:
>>>>>
>>>>>> Well, no idea about pro cards... I really never got financial
>>>>>> justification to get one; the Quadro 4000 at my old company didn't
>>>>>> really feel much better than gaming cards, so...
>>>>>> But in the gaming segment...
>>>>>> OpenGL scores in Cinebench, for example:
>>>>>> GTX 580: ~55
>>>>>> 7970: ~90
>>>>>>
>>>>>> to start with...
>>>>>> Not to mention the annoying issue with a high-segment rotating cube in
>>>>>> the viewport in SI: the 7970 is smooth at ~170 fps, while with the
>>>>>> GTX 580 before that (to point out: the rest of the comp is identical,
>>>>>> only the card was switched), for the first 30-50 sec the frame rate
>>>>>> was stuck at something like 17 fps, and after that it kind of jumped
>>>>>> to ~70-80 fps...
>>>>>>
>>>>>> In any case, with gaming cards, ATI vs nvidia, there is no doubt, and
>>>>>> if you are not using CUDA much then there's no need to even think
>>>>>> about which way to go.
>>>>>> Now Redshift is a game changer, heheh, but I'm still hoping that
>>>>>> OpenCL will be supported, and I'm looking forward to testing it out
>>>>>> with two 7970s in CrossFire :)
>>>>>>
>>>>>> BTW, I'm not much into programming waters, but is OpenCL programming,
>>>>>> which as I understood should work on ALL cards, really that much more
>>>>>> complex than CUDA, which is limited to nvidia only? Wouldn't it be
>>>>>> more logical to go with a solution that covers a lot more of the
>>>>>> market than something limited to one manufacturer?
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 21, 2013 at 10:55 AM, Arvid Björn <arvidbj...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> My beef with ATI, last time I tried a FirePro, was that it had a
>>>>>>> hard time locking into 25fps playback in some apps, as if the
>>>>>>> refresh rate was locked to 30/60. Realtime playback in Softimage
>>>>>>> would stutter annoyingly, IIRC. Plus it seemed to draw text slightly
>>>>>>> differently in some apps.
>>>>>>>
>>>>>>> Nvidia just feels.. comfy.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Mar 21, 2013 at 5:21 AM, Raffaele Fragapane <
>>>>>>> raffsxsil...@googlemail.com> wrote:
>>>>>>>
>>>>>>>> These days, if you hit the right combination of drivers and planet
>>>>>>>> alignment, they are OK.
>>>>>>>>
>>>>>>>> Performance-wise they have been ahead of nVIDIA for a while in
>>>>>>>> number crunching; the main problems are that the drivers are still
>>>>>>>> a coin toss, and that OpenCL isn't anywhere near as popular as CUDA.
>>>>>>>>
>>>>>>>> With win7 or 8 and recent versions of Soft/Maya they can do well.
>>>>>>>>
>>>>>>>> nVIDIA didn't help by crippling the 6xx for professional use and by
>>>>>>>> pissing off Linus. They are still ahead by a slight margin, for
>>>>>>>> now, but I wouldn't discount AMD wholesale anymore.
>>>>>>>>
>>>>>>>> If the next generation is as disappointing as Kepler is, and AMD
>>>>>>>> gets both Linux support AND decent (and properly OSS) drivers out,
>>>>>>>> I'm moving come time for the next upgrade. For now, I recently
>>>>>>>> bought a 680 because it was kind of mandatory to not go insane with
>>>>>>>> Mari and Mudbox, and because I like CUDA and I toy with it at home.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Mar 20, 2013 at 9:58 PM, Dan Yargici <danyarg...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> "Ati was tested over and over and showing a lot better viewport
>>>>>>>>> results in Softimage than nvidia..."
>>>>>>>>>
>>>>>>>>> Really?  I don't remember anyone ever suggesting ATI was anything
>>>>>>>>> other
>>>>>>>>> than shit!
>>>>>>>>>
>>>>>>>>> DAN
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>
>>
>
