Maybe if we got a petition of XSI-centric companies or a sort of
Kickstarter goal we could persuade them that Softimage is worth porting to
:) I know I'd drop a grand for Yeti.


On 8 February 2014 15:07, Sebastien Sterling
<sebastien.sterl...@gmail.com> wrote:

> Having seen people use Yeti for a year in production, I'd have to say it's
> pretty revolutionary in terms of workflow; I've seen people acclimatise to
> it in a matter of weeks. The only drawback I can see is, ironically, that
> it doesn't render using mental ray, obliging you to go for Arnold, V-Ray or
> RenderMan...
>
> A year ago, I sent them an email inquiring about the possibility of a
> Softimage port in the near or far future. This was the response:
>
> "Thank you for the great feedback - we have investigated Yeti integration
> for rendering preview which may be available in a later version but at this
> time we're not planning an XSI version of the editing tools.  Adding in
> support for a whole new 3D application is a large task and we haven't had
> enough demand for an XSI version at this stage.  If at some point that
> changes and it looks like a studio may commit to a large number of licenses
> we could afford to do this.
>
> We're glad you're enjoying Yeti!
>
> All the best,
> Colin"
>
>
>
> On 8 February 2014 13:25, Tim Leydecker <bauero...@gmx.de> wrote:
>
>> Thanks for this in-depth answer!
>>
>> Personally, I'm starting to lean towards going for the trial of Yeti,
>> one reason being that I think I remember Colin Doncaster's name from
>> another Maya mailing list, and another being the really nice
>> sample image of a bear posted by Yolandi Meiring in a similar thread here:
>>
>> (Thread) https://groups.google.com/forum/#!topic/xsi_list/2erKqUcghpI
>>
>> (Image) https://groups.google.com/group/xsi_list/attach/994086131ca9460/bear_still.jpg?part=4&view=1
>>
>>
>> Another really nice one is a proof of concept of bringing (3ds Max)
>> Hair Farm into Softimage
>> from Lee Perry-Smith, with props to Dani Garcia and Steven Caron.
>> That human hairdo and its renderings look incredibly awesome.
>>
>> http://ir-ltd.net/hair-farm-hair-into-softimage/
>>
>> For a Melena/Kristinka workflow using Anto Matkovic's tools in those
>> beautiful shed projects,
>> there's a nice clip posted on Lester Banks:
>>
>> http://lesterbanks.com/2013/05/workflow-tips-for-creating-and-grooming-hair-in-softimage/
>>
>> --
>>
>> I have only a limited amount of time I can spend on this and need to find
>> something that has the potential to be usable for testing Redshift's hair
>> shading approach where applicable, but that ideally integrates seamlessly
>> into Maya, Max or Softimage.
>>
>> The combination of Maya + Redshift is already working very well, and it
>> seems it'll be easier to successfully migrate from simple hair/fur testing
>> to something actually looking good (using Yeti). Also, Yeti has a variety
>> of licensing options I might find attractive at a later date if tempted to
>> actually finish something beyond spare-time doodling.
>>
>> I'd prefer Softimage, but if that stuff works better in Maya, it'll be
>> Maya.
>>
>> I suck with Max; even the fastest and most intuitive plugin can't
>> compensate for that sad fact.
>>
>> Cheers,
>>
>> tim
>>
>> On 08.02.2014 12:57, Stefan Kubicek wrote:
>>
>>> Hi Tim,
>>>
>>> I've just been dealing with hair on a hamster and used the built-in
>>> hair & fur of Softimage (2014 SP2).
>>>
>>> A few tips about working with built-in hair:
>>>
>>> Avoid overly dense meshes. Softimage creates a guide hair for every
>>> vertex, so dense meshes force you to fiddle with lots and lots of guide
>>> hair strands manually, which can be counter-productive and
>>> counter-intuitive.
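>>>
>>> In script form, the pre-flight check I mean boils down to something like
>>> this (a rough Python sketch for Softimage's script editor, where
>>> `Application` is predefined; the 5000-vertex threshold is just my own
>>> arbitrary comfort limit, not an official number):
>>>
>>>     # warn before applying hair if the selected emitter would spawn an
>>>     # unmanageable number of guide hairs (one guide hair per vertex)
>>>     obj = Application.Selection(0)            # the would-be emitter mesh
>>>     count = obj.ActivePrimitive.Geometry.Points.Count
>>>     if count > 5000:                          # assumed comfort limit
>>>         Application.LogMessage(
>>>             "%s has %d vertices -> %d guide hairs; consider a lighter "
>>>             "proxy mesh as the emitter" % (obj.Name, count, count))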
>>>
>>> If you want to edit hair parameters on a per-vertex basis (via vertex
>>> colors), you need to plan ahead where exactly you want your hair to be,
>>> where you want certain features (transparency, density, kink & frizz) to
>>> change, and over which distance/area. This is especially important for
>>> areas like the hands and feet, as well as the nose and eyelids.
>>> So, before you move the mesh into skinning/rigging, you'd better make sure
>>> your topology works not only for animation but also for the hair setup you
>>> have in mind.
>>>
>>> Don't rely on the built-in style transfer functionality. It mostly
>>> works, but it has a tendency to "blur" the transferred hair style, even if
>>> your source and target emitter topology are the same. You need to go in
>>> again and reintroduce the details in the fur that got lost.
>>>
>>> If you want to simulate hair with collisions, don't use a subd mesh as
>>> the emitter. The docs say that hair emitted from a subd mesh cannot
>>> collide with its own emitter, so you have to duplicate the source mesh,
>>> subdivide it for real (that is, create more actual polygons) and use that
>>> as the collision object instead (a script sketch of that step follows
>>> below). That would still be acceptable if it worked, which it does not.
>>> What I found after tedious testing was that any collision testing fails
>>> when your emitter is a subd mesh, independent of what you have it collide
>>> with (itself or another mesh), which kinda sucks and is the biggest
>>> problem I ran into, and one for which I could not find a solution.
>>> Thankfully my fur was rather short and the character had a lot of
>>> secondary motion, so it looks alive enough (besides some problems when
>>> bending arms, which are hardly noticeable in the animation in my case).
>>> From what I can tell, collision is always computed against a simplified
>>> collision-sphere representation of the collision object, no matter what
>>> you set for collision accuracy and shape, deforming shape, etc. It just
>>> doesn't work, at least not for me.
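>>>
>>> The duplicate-and-really-subdivide step, in script form (an untested
>>> Python sketch; "hamster_body" is a placeholder name, and the operator
>>> preset passed to ApplyOp is an assumption — check the command log when you
>>> subdivide via the UI to get the exact name for your version):
>>>
>>>     # duplicate the subd emitter, give the copy real extra polygons and
>>>     # freeze it, then use the frozen copy as the collision object
>>>     dup = Application.Duplicate("hamster_body")(0)
>>>     Application.ApplyOp("MeshSubdivide", dup)   # preset name assumed
>>>     Application.FreezeObj(dup)                  # bake the new topology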
>>>
>>> Sometimes, when saving and reloading a scene, some hair strands (like 1
>>> out of 1000) would suddenly stick out in some random direction. I had the
>>> impression that it helps to always collapse the modeling stack before
>>> saving; at least it never occurred again in the final stages of production,
>>> when the fur description was final and not changed anymore.
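>>>
>>> "Collapse the modeling stack before saving" amounts to something like
>>> this in script form (a sketch; "hamster_body" is again a placeholder):
>>>
>>>     # collapse the modeling region of the construction stack, then save
>>>     Application.FreezeModeling("hamster_body")
>>>     Application.SaveScene()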
>>>
>>> Next time I will surely look at Kristinka or Melena. As for
>>> simulation... I believe a strand simulation framework with self-collision
>>> was introduced on softimage.tv some weeks ago that looked promising. I
>>> haven't heard of anyone using it for hair so far; maybe someone else can
>>> comment on that? Would love to hear some ideas on this as well.
>>>
>>> Good luck,
>>>
>>>      Stefan
>>>
>>>
>>>
>>>> Hi guys,
>>>>
>>>> What would be a convenient way to create, style and control hair
>>>> in Softimage, with lengths of up to 10-12 inches, and ideally
>>>> both a good collision model and dedicated styling tools?
>>>>
>>>> Which Softimage version would you suggest, e.g. 2014 SP2?
>>>>
>>>> I'm a novice with hair and fur but would like to set up
>>>> a manageable sample/test that ideally works with Arnold
>>>> and Redshift.
>>>>
>>>> Is it possible to work generically, or to transfer results from,
>>>> say, Yeti into Softimage?
>>>>
>>>> Would you recommend actually leaning towards Maya for such
>>>> a task, going directly to Yeti or something similar?
>>>>
>>>> I know those are fuzzy questions; I guess I'm actually looking
>>>> for a biased answer regarding any of the various hair plugin options
>>>> for any of the three major apps, i.e. Max/Maya/Softimage.
>>>>
>>>> So far, I've found the following options:
>>>>
>>>> Hair Farm, Yeti, Ornatrix, Shave & Haircut, Maya Hair, Maya XGen "hair"
>>>> in 2014 Ext., and the Cinema 4D hair options (shadow-mapped).
>>>>
>>>> Admittedly, that's a lot of options, and I find it difficult to bet
>>>> my time investment on any of them since I simply know next to nothing
>>>> about their pros and cons.
>>>>
>>>> Cheers,
>>>>
>>>>
>>>> tim
>>>>
