The conversation was aimed at render farms rather than workstations
though, and I imagine running one render job per GPU rather than per node, so
that the scaling per GPU is much better (i.e. close to 100%, minus maybe a
small hit from the CPU being shared). They could be run headless, so no need
for a display card.
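
The one-job-per-GPU idea can be sketched with plain process launching, pinning each render process to a single GPU via the CUDA_VISIBLE_DEVICES environment variable (the renderer command and scene list here are hypothetical placeholders, not any particular farm manager's API):

```python
import os
import subprocess

def plan_per_gpu_jobs(scenes, num_gpus):
    """Round-robin scenes across GPU indices: returns (gpu_index, scene) pairs."""
    return [(i % num_gpus, scene) for i, scene in enumerate(scenes)]

def launch(render_cmd, gpu_index, scene):
    """Launch one render process that only sees a single GPU."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    return subprocess.Popen(render_cmd + [scene], env=env)
```

With four GPUs, four such processes run side by side, and each scales independently apart from the shared CPU and PCIe bandwidth.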

In terms of power at the wall, in the UK a kettle will routinely draw 3000 W
(albeit only for a short time), so a 4-GPU PC should be within acceptable
limits - between 1000 and 1500 W when rendering. The biggest problem I've had
is finding a suitable UPS that is silent, as most at that rating need fans
and are designed to sit in a server room rather than a studio space.
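
For what it's worth, that 1000-1500 W figure is easy to sanity-check with a back-of-envelope calculation; the per-component wattages below are rough assumptions, not measurements:

```python
# Rough wall-draw estimate for a multi-GPU render node.
# Assumed figures: ~250 W per GPU under render load, ~200 W for
# CPU/motherboard/drives, and ~90% PSU efficiency.
def wall_draw_watts(num_gpus, gpu_watts=250, base_watts=200, psu_efficiency=0.9):
    dc_load = num_gpus * gpu_watts + base_watts
    return dc_load / psu_efficiency  # AC draw at the wall

print(round(wall_draw_watts(4)))  # about 1333 W for a 4-GPU box
```

That lands comfortably inside the 1000-1500 W range, and still well under what a UK kettle pulls.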

There was an interesting post on the RS forums recently from a guy setting
up a GPU render farm using these:
http://www.supermicro.com/products/system/2U/2028/SYS-2028GR-TRH.cfm

Dual Xeon, 6-GPU solutions, mmm. Sounds like quite a bit of work to get it
all running smoothly though, including moving the 980 Ti cards' power
connectors from the top to the back to match the Tesla cards.


On 6 August 2015 at 10:16, Tim Leydecker <bauero...@gmx.de> wrote:

> Would you guys say the 980 Ti hits the sweet spot between price and
> performance?
>
> How about connectors and power supply?
>
> The 970 runs on 2x 6-pin, i.e. a maximum of 150 W plus the 75 W from the
> slot, 225 W total.
>
> The 980 Ti mostly has 1x 6-pin and 1x 8-pin; an 8-pin supplies 150 W
> compared to the 75 W of a 6-pin.
>
> In my case, I already find it hard to provide more than one 8-pin and one
> 6-pin connector.
> How do you guys provide reliable power to more than one or two graphics
> cards without melting your power lines?
>
> Here in Germany, it is rare for the average wall plug in a great many home
> installations to support more than around 1 kW of sustained draw.
> There is always plenty of headroom of course, but technically, constantly
> drawing a lot more from such a wall plug can get, uhmmm, hot.
>
> That's a few of the reasons I suggested starting out with just one card,
> like a Titan X (or a GTX 980 Ti): case, power supply, connectors, wall
> plugs, electrical limits.
>
> Cheers,
>
> tim
>
> On 05.08.2015 at 16:10, Mirko Jankovic wrote:
>
> Agreed. The 980 Ti is just a bit above two 970s price-wise; performance-wise
> it really depends on the scenes you are working on. But I plan to upgrade my
> 4x 970s to 980 Tis as soon as possible, even if it means replacing them one
> by one.
>
> On Wed, Aug 5, 2015 at 3:36 PM, Matt Morris <matt...@gmail.com> wrote:
>
>> The 970 is the most cost-efficient only with scenes that fit into its
>> memory - which, using Redshift, is limited to 3.5 GB because of the card's
>> internal memory architecture. I'd recommend looking at GPUs with 6 GB or
>> more. The 980 Ti is a great card for the money, and the extra VRAM will
>> help performance even on small scenes, as you can use the memory
>> optimisation settings. Because you're limited to 4 GPUs (risers don't work
>> too well, and you're limited by the number and speed of PCIe lanes, as
>> Mirko said), you want to make the most of that space. Per-card electricity
>> usage and heat output aren't that much higher for the 980 Ti.
>>
>> On 5 August 2015 at 14:04, Tim Leydecker <bauero...@gmx.de> wrote:
>>
>>> Thanks for the clarification, Dan.
>>>
>>> I think I mixed this up with the customer download section of the forum?
>>>
>>> Anyway, good that the registered-users forum is accessible to
>>> interested parties.
>>>
>>> Cheers,
>>>
>>> tim
>>>
>>> P.S.: For hair, Shave and a Haircut is supported (I don't have personal
>>> experience with it).
>>>
>>>
>>> Am 05.08.2015 um 14:17 schrieb Dan Yargici:
>>>
>>> "you may find it helpful to register in the Redshift3D.com forums,
>>> afaik you'll need to have at least one registered license to get access
>>> to the "Registered users only" forum area."
>>>
>>> Just to clear this up.  I'm pretty sure you don't need to have a license
>>> to access the Registered Users section of the Redshift forums.
>>>
>>> DAN
>>>
>>>
>>> On Wed, Aug 5, 2015 at 2:58 PM, Rob Chapman <tekano....@gmail.com> wrote:
>>>
>>>> A lot of good and informed points by all. Just wanted to add: this guy
>>>> here, Sven, at http://www.render4you.de/renderfarm.html, recently became
>>>> the first official Redshift GPU render farm, and I have already used him
>>>> on a few jobs with very tight deadlines. Essentially he has a rack of 7x
>>>> Tesla K40s, so one node is the equivalent of six single 980 GTXs, which I
>>>> find is a pretty cost-effective way of adding a decent online GPU render
>>>> node that works with hardly any setup if you have a Redshift scene ready
>>>> to go.
>>>>
>>>> best
>>>>
>>>> Rob
>>>>
>>>> On 5 August 2015 at 11:56, Tim Leydecker <bauero...@gmx.de> wrote:
>>>>
>>>>> Hi Morten,
>>>>>
>>>>> you may find it helpful to register in the Redshift3D.com forums;
>>>>> afaik you'll need to have at least one registered license to get access
>>>>> to the "Registered users only" forum area.
>>>>>
>>>>> There's a few threads there about hardware, multi-GPU systems and some
>>>>> user cases of testing single-GPU vs. multi-GPU rendering, plus some
>>>>> developer info about roadmaps and such.
>>>>>
>>>>> Personally, I'm a big fan of Redshift3D.
>>>>>
>>>>> Still, here's a few things to consider that you may find useful:
>>>>>
>>>>> - Compared to Arnold, there is no HtoA or C4DtoA equivalent, i.e. no
>>>>> direct C4D or Houdini support.
>>>>> - Compared to Arnold, rendering Yeti is not yet supported in Redshift3D;
>>>>> it's being looked at, but there is no ETA.
>>>>> - Maya Fluids, volume rendering, FumeFX, i.e. fire, smoke, dust and
>>>>> such, aren't in Redshift3D so far.
>>>>>
>>>>> - Multitasking: compared to CPU-based multitasking and task switching
>>>>> (e.g. switching between rendering in Maya or Softimage while
>>>>> simultaneously comping in Nuke and painting textures in Photoshop or
>>>>> Mari), multiple applications fighting over a very limited pool of GPU
>>>>> VRAM may pose GPU-specific limitations. Redshift3D can use system RAM
>>>>> to supplement VRAM, but there can be headaches when other, "dumber"
>>>>> apps go ahead and just block VRAM for their caching. It's well worth
>>>>> running a good few hard tests in typical workflow scenarios. Maya,
>>>>> Substance Painter/Designer, Nuke and Photoshop all offer one type or
>>>>> another of GPU caching or GPU acceleration option. My personal feeling
>>>>> is that such stuff never gets tested in real-world,
>>>>> multiple-applications-running scenarios.
>>>>>
>>>>> At a glance, it sounds easy enough to have separate, dedicated GPUs
>>>>> running headless for rendering while reserving one GPU for viewport
>>>>> display and other apps, but to be honest, all this stuff is so new.
>>>>> Even though it's great, it's still pushing grown legacy workflows and
>>>>> boundaries, and in doing so it may sometimes hurt.
>>>>>
>>>>> My very personal suggestion is:
>>>>>
>>>>> - Step 1, a starter kit: just one GPU, optimally a Titan X with 12 GB
>>>>> VRAM.
>>>>> - Step 2, adding a second GPU, running headless, reserved for rendering.
>>>>> - Step 3, adding a third GPU, comparing speed to step 2.
>>>>> - Step 4, price/performance balancing: comparing a 1-, 2- or 3-GPU
>>>>> GTX 970 render rig with the above.
>>>>>
>>>>> Could be you find out you like running one Titan X for viewport display
>>>>> and multi-app work, and two GTX 970s for render jobs.
>>>>>
>>>>>
>>>>> Another thing.
>>>>>
>>>>> Multi-socket CPU boards and PCIe slots: it seems easier to get solid
>>>>> single-socket CPU boards with lots of PCIe slots.
>>>>>
>>>>> Again, from my personal experience running a current-generation
>>>>> dual-socket Xeon rig, it is annoying how many CPU cycles I see wasted
>>>>> idling in most of my daily chores. Except for pure rendering with
>>>>> Arnold or the like, I mostly find one CPU, and even most of the other
>>>>> CPU's cores, just not used properly by software.
>>>>>
>>>>> I think a good sweet spot would be to just go for one fast, solid
>>>>> 6-core (budget) or 8-core (current) CPU, unless of course it's for a
>>>>> dedicated render slave...
>>>>>
>>>>>
>>>>> Cheers,
>>>>>
>>>>> tim
>>>>>
>>>>>
>>>>> On 05.08.2015 at 12:05, Morten Bartholdy wrote:
>>>>>
>>>>> I know several of you are using Redshift extensively or exclusively
>>>>> now. We are looking into expanding our permanent render license pool
>>>>> and are considering the pros and cons of Arnold, V-Ray and Redshift. I
>>>>> believe Redshift will provide the most bang for the buck, but at the
>>>>> cost of some production functionality we are used to with Arnold and
>>>>> V-Ray. Also, it will likely require an initial investment in new
>>>>> hardware, as Redshift will not run on our pizza-box render units, so
>>>>> that cost has to be counted in as well.
>>>>>
>>>>>
>>>>>
>>>>> It looks like the most price-efficient Redshift setup would be to
>>>>> build a few machines with as many GPUs in them as physically possible.
>>>>> How have you guys set up your Redshift render farms?
>>>>>
>>>>>
>>>>> I am thinking of a large cabinet with a huge PSU, lots of cooling, as
>>>>> much memory as possible on the motherboard, and perhaps 8 GPUs in each.
>>>>> The GTX 970 probably offers the most power per price point, while
>>>>> Titans would make sense if more memory for rendering is required.
>>>>>
>>>>>
>>>>> Any thoughts and pointers will be much appreciated.
>>>>>
>>>>>
>>>>>
>>>>> Morten
>>>>>
>>>>
>>>
>>>
>>
>>
>> --
>> www.matinai.com
>>
>
>
>


-- 
www.matinai.com
