> It's funny how various studios use similar techniques with the same software.
Should be "the same game oriented software". ;-P

On Fri, Sep 13, 2013 at 2:00 PM, Mathieu Leclaire <mlecl...@hybride.com> wrote:

> Funny... we did something very similar for Jappeloup here at Hybride. We had 404 crowd shots to do in 5 or 6 different locations with different clothing styles. Some agents were to be seen very close to camera, so we created high-resolution geometry for the agents, with ICE logic to mix various textures, clothing items, hair styles, etc., and we pre-baked a ton of cloth and hair simulations. We lined them all up on the timeline like you did, and then artists set probabilities for each cycle appearing and it would randomly choose depending on the probabilities. We had about 50 animation cycles pre-baked for each man and woman agent.
>
> We started developing our deep compositing pipeline for this show since we thought Arnold wouldn't be able to handle all that high-res geometry (some crowds were over 80,000 high-res agents), but Arnold chewed everything up, so we only finished our deep compositing pipeline a few months later, for use on White House Down. We used the actual Ubisoft mo-cap studio to do all our mocap.
>
> We also created a 2D card agent system, with a few tricks to let us actually relight the footage in the cards. Those also gave very good results, but sometimes having full 3D agents made it easier to integrate. It depended on the situation, really. And we reused the same techniques for the Opera House in Smurf 2, and a very similar approach for our White House Down crowds.
>
> We don't have any making-of yet (we've been crazy busy for the past 2 years), but once we do, I'll gladly share. It's funny how various studios use similar techniques with the same software.
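The artist-driven, probability-weighted cycle selection Mathieu describes is essentially a weighted random choice per agent. ICE expresses this with nodes, but the logic can be sketched in plain Python; the cycle names and weights below are hypothetical stand-ins, not data from the show:

```python
import random

# Hypothetical cycle table: artists assign a probability weight to each
# pre-baked cycle, and every agent then picks a cycle at random
# according to those weights (names and numbers are illustrative).
cycles = {
    "walk_slow":   0.5,
    "walk_fast":   0.3,
    "look_around": 0.2,
}

def pick_cycle(rng):
    """Weighted random choice over the cycle table."""
    names = list(cycles)
    weights = [cycles[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Seeding per agent (here, one shared seed) keeps results repeatable
# between playbacks, which matters when lighting/comp iterate on a shot.
rng = random.Random(42)
assignments = [pick_cycle(rng) for _ in range(10)]
```

With the weights above, roughly half the agents end up on "walk_slow"; tweaking the table reshuffles the crowd without touching any per-agent setup.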
-Mathieu

------------------------------
-----Original Message-----
From: "Alan Fregtman" <alan.fregt...@gmail.com>
To: "XSI Mailing List" <softimage@listproc.autodesk.com>
Date: 09/13/13 13:01
Subject: ICE Crowds in "Now You See Me" (making-of/breakdown video)

Rodeo FX has just put up a short reel of the crowds we did for "*Now You See Me*" to fill the MGM Grand stage with the help of ICE and Arnold...

https://vimeo.com/74393635

It's not done with *CrowdFX*, as SI|2013 was in beta while this was being made and the agents didn't need to be too intelligent, so we went with a bunch of nice cycle-instancing tools and stationary particle instances. There were many variations of animation clips across many different people.

Animation was mocap captured with iPiSoft's PlayStation-Eye-based mocap software, then cleaned up in MotionBuilder, brought back into Softimage (thanks to the MotionBuilder template rig), and caches exported out.

The cycles were in one long timeline, one clip after another, and we stored start and end frame numbers along with an array of ICE strings (the cycle names). We might have "clappingA", "clappingB", "clappingC" with different frame ranges, and then we had a neat ICE compound where you could give it a substring (e.g. "clapping") and it would find all variations for that name and randomly assign one of those frame ranges and cycles.

If I recall correctly, the general behaviours were: standing idle looking around, clapping normally, clapping hyperenthusiastically with bonus fist-pumping, and grabbing money bills from the air. There were three or so variations of each.

Furthermore, the crowd on the floor near the stage is CG, but the one in the stadium seats is actually 2D cards of footage of real people -- Rodeo employees, in fact -- doing various motions, instanced in Nuke with some scripted magic.
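The substring-matched cycle assignment Alan describes (give the compound "clapping", get a random "clappingA/B/C" frame range back) boils down to a filtered dictionary lookup plus a random pick. A minimal sketch in Python, assuming a hypothetical registry of cycle names mapped to (start, end) frames on the one long baked timeline:

```python
import random

# Hypothetical cycle registry: each baked cycle occupies a frame range
# on a single long timeline (the names and numbers are illustrative).
cycle_ranges = {
    "clappingA": (1, 48),
    "clappingB": (49, 100),
    "clappingC": (101, 160),
    "idleA":     (161, 220),
}

def assign_cycle(substring, rng):
    """Find every cycle whose name contains the substring and pick one
    at random, returning its name and (start, end) frame range."""
    matches = [n for n in sorted(cycle_ranges) if substring in n]
    if not matches:
        raise KeyError("no cycle matches %r" % substring)
    name = rng.choice(matches)
    return name, cycle_ranges[name]

rng = random.Random(7)
name, (start, end) = assign_cycle("clapping", rng)
```

Each particle/agent then just plays back the cached geometry between `start` and `end`, so adding a "clappingD" variation is only a matter of baking it and registering its range.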
(I was not involved with the 2D crowd, so that's as much as I know.)

The 3D crowd models are Rodeo folks too, by the way. I'm among them, as are most of my coworkers. We used some software with the Microsoft Kinect to get general 3D scan meshes of us as a reference for volume/form, but they were modeled by hand, as the scans weren't quite perfect as-is. It was super helpful to have the scans, though! It's pretty amazing how often you can tell people apart from their silhouette/stance alone.

I co-developed the ICE side of it together with Jonathan Laborde (who is on the list and probably reading this). Hats off to my other fellow coworkers who modeled, textured, lit and comped everything so well. :) Teamwork!


Cheers,

-- Alan