This hair business is a minefield.

I don't have an Arnold, RenderMan or V-Ray license running on my machine at the
moment.
I actually took Yeti's mental ray support for granted... crap.


In terms of creating/styling hair, the sheep example from Rune Spaans as well as
the other tutorials here make Hair Farm look desirable and artist-friendly:

www.hair-farm.com/tutorials/

Especially because of its volume-filling option (something that was also shown
in a demo of Fabric Engine).

That seems very desirable, as creating a proxy volume instead of painfully
creating guide curves and brushing/clipping/forcing things around them into
place gives better feedback while designing a hairdo, imo.

The Yeti 1.3 docs I've found via Google seem comprehensive and include a lot of
sample files, but I haven't asked for the trial yet; I'm trying to time it so I
have at least 3-4 days to play with it.


But that's all still theoretical in my case. The only option I've tried and
discarded is the FiberMesh approach; for me, that's too fiddly to get used to,
but maybe I just approached it without the proper mindset.

Thanks again for sharing your insights!


tim

On 08.02.2014 15:07, Sebastien Sterling wrote:
Having seen people use Yeti for a year in production, I'd have to say it's pretty
revolutionary in terms of workflow; I've seen people acclimatise to it in a matter
of weeks. The only drawback I can see is, ironically, that it doesn't render using
mental ray, obliging you to go for Arnold, V-Ray or RenderMan...

A year ago, I sent them an email inquiring about the possibility of a Softimage
port in the near/far future. This was the response:

"Thank you for the great feedback - we have investigated Yeti integration for 
rendering preview which may be available in a later version but at this time we're 
not planning an XSI
version of the editing tools.  Adding in support for a whole new 3D application 
is a large task and we haven't had enough demand for an XSI version at this 
stage.  If at some point
that changes and it looks like a studio may commit to a large number of 
licenses we could afford to do this.

We're glad you're enjoying Yeti!

All the best,
Colin"



On 8 February 2014 13:25, Tim Leydecker <bauero...@gmx.de> wrote:

    Thanks for this in-depth answer!

    Personally, I'm starting to lean towards going for the trial of Yeti, one
    reason being that I think I remember Colin Doncaster's name from another Maya
    mailing list, and another being the really nice sample image of a bear posted
    by Yolandi Meiring in a similar thread here:

    (Thread) https://groups.google.com/forum/#!topic/xsi_list/2erKqUcghpI

    (Image) https://groups.google.com/group/xsi_list/attach/994086131ca9460/bear_still.jpg?part=4&view=1


    Another really nice one is a proof of concept of bringing (3ds Max) Hair Farm
    into Softimage from Lee Perry-Smith, with props to Dani Garcia and Steven Caron.
    That human hairdo and its renderings look incredibly awesome.

    http://ir-ltd.net/hair-farm-hair-into-softimage/

    For a Melena/Kristinka workflow using Anto Matkovic's tools in those beautiful
    shed projects, there's a nice clip posted by/on Lester Banks:

    http://lesterbanks.com/2013/05/workflow-tips-for-creating-and-grooming-hair-in-softimage/

    --

    I have only a limited amount of time I can spend on this and need to find
    something that has the potential to be usable for testing Redshift's hair
    shading approach when applicable, but that ideally integrates seamlessly into
    either Maya, Max or Softimage.

    The combination of Maya + Redshift is already working very well, and it seems
    it'll be easier to successfully migrate from simple hair/fur testing to
    something actually looking good (using Yeti). Also, Yeti has a variety of
    licensing options I might find attractive at a later date if I'm tempted to
    actually finish something beyond spare-time doodling.

    I'd prefer Softimage, but if that stuff works better in Maya, it'll be Maya.

    I suck with Max; even the fastest and most intuitive plugin can't compensate
    for that sad fact.

    Cheers,

    tim

    On 08.02.2014 12:57, Stefan Kubicek wrote:

        Hi Tim,

        I've just been dealing with hair on a hamster and used the built-in
        Hair & Fur of Softimage (2014 SP2).

        A few tips about working with built-in hair:

        Avoid overly dense meshes. Softimage creates a guide hair for every vertex,
        so dense meshes make you fiddle with lots and lots of guide hair strands
        manually, which can be counter-productive and counter-intuitive.
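
        (As a rough way to sanity-check this, here's a minimal Python sketch for
        the Softimage script editor that reports how many guide hairs a
        prospective emitter mesh would produce -- the object name and the
        threshold are just placeholders, not anything from a real setup:)

        # Run inside Softimage's Python script editor, where "Application" is the
        # global application object. "hamster_body" is a placeholder object name.
        emitter = Application.Dictionary.GetObject("hamster_body")
        vert_count = emitter.ActivePrimitive.Geometry.Vertices.Count
        # Built-in Hair & Fur creates one guide hair per vertex of the emitter.
        Application.LogMessage("%s would emit %d guide hairs" % (emitter.Name, vert_count))
        if vert_count > 2000:  # arbitrary threshold, adjust to taste
            Application.LogMessage("Consider a lighter emitter mesh before grooming.")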

        If you want to edit hair parameters on a per-vertex basis (via vertex
        colors), you need to plan ahead: where exactly you want your hair to be,
        where you want certain features (transparency, density, kink & frizz) to
        change, and over which distance/area. This is especially important for
        areas like the hands and feet, as well as the nose & eyelids. So, before
        you move the mesh into skinning/rigging, you'd better make sure your
        topology works not only for animation but also for the hair setup you
        have in mind.

        Don't rely on the built-in style transfer functionality. It does mostly
        work, but it has a tendency to "blur" the transferred hair style, even if
        your source and target emitter topology are the same. You need to go back
        in and reintroduce the details in the fur that got lost.

        If you want to simulate hair with collisions, don't use a subd mesh as the
        emitter. The docs say that hair emitted from a subd mesh cannot collide
        with its own emitter, so you have to duplicate the source mesh and
        subdivide it for real (that is, create more actual polygons) and use that
        as the collision object. That would still be acceptable if it worked,
        which it does not. What I found after tedious testing was that any
        collision testing fails when your emitter is a subd mesh, independent of
        what you have it collide with (itself or another mesh), which kinda sucks
        and is the biggest problem I ran into for which I could not find a
        solution. Thankfully my fur was rather short and the character had a lot
        of secondary motion, so it looks alive enough (besides some problems when
        bending arms, which are hardly noticeable in the animation in my case).
        From what I can tell, it looks like collision is always computed against a
        simplified collision-sphere representation of the collision object, no
        matter what you set for collision accuracy, shape, deforming shape, etc.
        It just doesn't work, at least not for me.

        Sometimes, when saving and reloading a scene, some hair strands (like 1
        out of 1000) would suddenly stick out in some random direction. I had the
        impression that it helps to always collapse the modeling stack before
        saving; at least it never occurred again in the final stages of
        production, when the fur description was final and not changed anymore.
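
        (For what it's worth, a minimal Python sketch of that habit for the
        Softimage script editor -- it just freezes the modeling region on whatever
        emitter meshes you have selected and then saves; FreezeModeling and
        SaveScene are the stock scripting commands, the selection is up to you:)

        # Run inside Softimage's Python script editor with the hair emitter
        # meshes selected; "Application" is the global application object.
        for obj in Application.Selection:
            # Collapse (freeze) the modeling stack of each selected emitter.
            Application.FreezeModeling(obj)
            Application.LogMessage("Froze modeling stack on %s" % obj.Name)
        # Save the scene in place afterwards.
        Application.SaveScene()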

        Next time I will surely look at Kristinka or Melena. As for simulation... I
        believe there was a strand simulation framework with self-collision
        introduced on softimage.tv some weeks ago that looked promising. I haven't
        heard of anyone using it for hair so far; maybe someone else can comment
        on that? Would love to hear some ideas on this as well.

        Good luck,

              Stefan



            Hi guys,


            What would be a convenient way to create, style and control hair
            in Softimage, with lengths up to 10-12 inches and ideally
            both a good collision model and dedicated styling tools?

            Which Softimage version would you suggest, e.g. 2014sp2?

            I'm a novice with hair and fur but would like to set up
            a manageable sample/test that ideally works with Arnold
            and Redshift.

            Is it possible to work generically, or to transfer results from,
            say Yeti into Softimage?

            Would you recommend actually leaning towards Maya for such
            a task, either going directly to Yeti or similar?

            I know those are fuzzy questions; I guess I'm actually looking
            for a biased answer regarding any of the various hair plugin options
            for any of the three major apps, i.e. Max/Maya/Softimage.

            So far, I've found the following options:

            Hair Farm, Yeti, Ornatrix, Shave and a Haircut, Maya Hair, Maya XGen
            "hair" in the 2014 Extension, and the Cinema 4D hair options
            (shadow-mapped).

            Admittedly, that's a lot of options, and I find it difficult to bet my
            time investment on any of them since I simply know nothing about their
            pros and cons.

            Cheers,


            tim



