Re: White House Down

2013-07-05 Thread Eugen Sares

Thanks for elaborating, Mathieu!
Another interesting point would be the city generator and your tree system.
Did you build everything in Softimage from the ground up, or is there a 
mixed pipeline behind all the assets you needed?

Do you plan to ever go public with those systems?

Re: White House Down

2013-07-05 Thread Mathieu Leclaire
Both are 100% ICE based systems, and it's not Hybride's policy to go 
public with our tools. We did collaborate with Fabric Engine on CP 
Flora. We wanted to use it on this project and had a huge input into 
its design, but because of time constraints and our lack of 
familiarity with their code, which made it hard to quickly adjust it 
to our needs, we ended up going back to a 100% ICE based tree 
solution for White House Down.


-Mathieu

Re: White House Down

2013-07-05 Thread Guillaume Laforge
Btw, CP Flora has been used for some shots in the trailer :). But as
Mathieu said, we finally switched to a 100% ICE based system that could
give more realistic tree shapes. It could generate a simulated tree
from a polygon mesh model, so control over the look could be total.
Finally, Mathieu is too modest to say that the ICE based tree system
was his own design and implementation. I'm still amazed at the things
you can do with an ICE tree ... if you know your stuff ;).

Cheers,

Guillaume

Re: White House Down

2013-07-03 Thread Eugen Sares

Thanks a lot for this, Mathieu!
Always nice to hear when Softimage is used on such high-profile titles. 
That proves a lot technically, and it is good for the spirit, too...

Autodesk wants to use this for advertising...
Also, what you say about Fabric Engine instead of Nuke is amazing.

If I may ask,
which version did you use, and how many seats?
Any serious trouble you ran into?
So you built your own crowd system... what's the reason for not using 
the built-in system?

Re: White House Down

2013-07-03 Thread Mathieu Leclaire

Hi Eugen,

which version did you use, and how many seats?
Any serious trouble you ran into?

You're talking about Fabric Engine's Creation Platform? We have a site 
license, so we can do deep image processing on about 120 machines if 
we choose to. For Nuke, we have 20 or 25 licenses (I'm not sure 
anymore). So by using our Creation Platform site license, we can use 
many more machines to process the data without having to buy more 
Nuke licenses. But the speed gain is not only because we can now use 
more machines; our CP based deep image tool is much faster than Nuke, 
though it's hard to say how much faster. It depends on a lot of 
factors, and we are still working on optimizing our version. We've 
noticed certain operations that can be over 10 times faster, while 
others can be slower than Nuke, but we are still putting in a lot of 
work and hope to improve performance even more as we learn to use CP 
more and more. Creation Platform is a lot to take in, so all the 
trouble we ran into was mostly a normal part of the learning curve 
that comes with adopting a new development platform. I don't remember 
on which version we started implementing this tool, but we've adapted 
with each new release of the platform.


So you built your own crowd system... what's the reason for not using 
the built-in system?


We created a crowd system for the French movie Jappeloup, where we 
delivered 404 crowd shots in about 3-4 months last summer. This 
happened before Crowd FX was released. I remember having discussions 
with Autodesk while they were still working on Crowd FX, but we had 
to deliver shots well before their system would be complete and 
stable. Our crowd system isn't complex. There's no real interaction 
between agents. It's a stadium-agent based crowd system. It's based 
on a list of animation clips doing various actions, and the system is 
all instance based. The system basically chooses an animation cycle 
at random, with a few clever tools to help select the timing and 
distribution. Since it's all instances, the memory needed is much 
less than creating a unique mesh per agent, but the more varied the 
actions of the agents are, the more memory it takes, and it can 
eventually become as memory heavy as having unique geometry per 
agent. It was a matter of finding a good threshold where you have 
enough variation in the cycles that you don't notice they are being 
repeated, but not so much that memory climbs. Our agents are also 
very high resolution compared to most crowd solutions, where agents 
are very low resolution. That helps the look of the crowd a lot. It 
adds a lot of detail to the crowds and really helps sell the shots. 
With deep compositing, we can raise the bar even further and split 
our renders into many layers without ever having to worry about where 
the agents of each layer fit in the crowd. The system on White House 
Down is the evolution of that system. Since the shots have constantly 
moving cameras and the crowds are motion blurred, with individual 
agents only visible for a limited number of frames, we could treat 
the crowds as if they were stadium agents, where the stadium is 
basically the streets of Washington. It was only a matter of adding 
animation cycles of people reacting to a helicopter passing just over 
their heads.
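
The core of the cycle selection is simple enough to sketch in a few
lines of Python (an illustration of the idea only, not our actual ICE
setup; the clip names and the seeding scheme are invented):

import random

# Hypothetical clip library: (clip_name, length_in_frames).
CLIPS = [("idle_look_up", 48), ("duck_and_cover", 36),
         ("run_three_steps", 60), ("point_at_sky", 52)]

def assign_cycle(agent_id, seed=0):
    """Pick an animation cycle and a start offset for one agent.
    Seeding per agent keeps the choice deterministic between runs,
    and the random offset keeps neighbouring agents from visibly
    playing the same frame of the same clip."""
    rng = random.Random(hash((agent_id, seed)))
    clip, length = rng.choice(CLIPS)
    offset = rng.randrange(length)  # de-synchronize identical clips
    return clip, offset

# Every agent only references shared clip geometry, so memory grows
# with the number of distinct clips, not the number of agents.
crowd = {agent: assign_cycle(agent) for agent in range(10000)}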


-Mathieu

White House Down, Hybride ICE

2013-07-02 Thread Eric Thivierge

Hey all,

A new article about the FX work on White House Down is out, and it 
talks a little about Hybride's work using ICE. Mathieu Leclaire can 
chime in with any additional info if you have questions, but I 
thought I'd throw the link in for anyone interested.


http://www.fxguide.com/featured/action-beats-6-scenes-from-white-house-down/

--
 
Eric Thivierge

===
Character TD / RnD
Hybride Technologies
 





White House Down

2013-07-02 Thread Mathieu Leclaire

Hi guys,

I just wanted to share some information on the shots we did for White 
House Down.


First off, there's an article in fxguide that explains a bit of what we did:

http://www.fxguide.com/featured/action-beats-6-scenes-from-white-house-down/


And here are some more details about how we did it:

We built upon our ICE based City Generator that we created for Spy Kids 
4. In SK4, all the buildings were basically a bunch of instances 
(windows, walls, doors, etc.) put together using Softimage ICE logic to 
build very generic buildings. ICE was also used to create the 
streetscape, populate the city with props (lamp posts, traffic lights, 
garbage cans, bus stops, etc.), and distribute static trees and car 
traffic. Everything was instanced, so memory consumption was very low 
and render times were minimal (20-30 minutes a frame in Mental Ray at 
the time). The city in Spy Kids 4 was very generic and the cameras were 
very high up in the sky, so we didn't care as much about having a lot 
of detail and interaction at ground level, and we didn't really need 
specific and recognizable buildings either.
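
To give a rough idea of that instancing logic in plain Python (just an
illustration; the real system is built in ICE, and these part names
are invented):

import random

# Shared source meshes; every placement below is only a reference.
FACADE_PARTS = {"window": "win_A", "door": "door_A", "wall": "wall_A"}

def generate_facade(floors, bays, rng):
    """Return (part, row, column) placements for one generic facade.
    Since only references are stored, a whole city block costs little
    more memory than its part library."""
    grid = []
    for row in range(floors):
        for col in range(bays):
            if row == 0 and col == bays // 2:
                part = "door"  # one entrance at street level
            else:
                part = "window" if rng.random() < 0.8 else "wall"
            grid.append((part, row, col))
    return grid

building = generate_facade(floors=12, bays=6, rng=random.Random(42))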


The challenge in White House Down was the fact that it was Washington, 
and we needed to recognize very specific landmarks, so it had to be a 
lot less generic. The action also happens very close to the ground, so 
we needed a lot more detail at ground level, and there needed to be a 
lot of interaction with the helicopters passing by.


So we modeled a lot more specific assets to add more variation (very 
specific buildings and recognizable landmarks, more props, more 
vegetation, more cars, etc.). We updated our building generator to 
allow more customization. We updated our props and cars distribution 
systems. They were all still ICE based instances, but we added a lot 
more controls to let our users easily manage such complex scenes. We 
had a system to automate the texturing of cars and props based on 
rules, so we could texture thousands of assets very quickly. 
Everything was also converted to Stand-Ins to keep our working scenes 
very light and leave the heavy lifting to the renderer.
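
The rule idea is roughly this, sketched in Python (the rules and
attribute names here are hypothetical; the production tool was more
elaborate):

# Ordered rules: the first predicate that matches an asset wins.
RULES = [
    (lambda a: a["type"] == "car" and a.get("zone") == "mall", "car_clean_paint"),
    (lambda a: a["type"] == "car",                             "car_generic_paint"),
    (lambda a: a["type"] == "prop",                            "prop_metal_worn"),
]

def assign_material(asset):
    """Texture an asset by rule instead of by hand, so thousands of
    assets can be processed without touching them one by one."""
    for predicate, material in RULES:
        if predicate(asset):
            return material
    return "default_grey"

print(assign_material({"type": "car", "zone": "mall"}))  # car_clean_paint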


Which brings me to Arnold.

We knew the trick to making these shots as realistic as possible would 
be to add as much detail as we possibly could. Arnold is so good at 
handling a lot of geometry, and we were all very impressed by how much 
Arnold could chew (we were managing somewhere around 500-600 million 
polygons at a time), but it still wasn't going to be enough, so we 
built a deep image compositing pipeline for this project to allow us 
to add much more detail to the shots.


Every asset was built in low and high resolution. So we basically 
loaded whatever elements we were rendering in a layer as high 
resolution, while the rest of the scene assets were all low 
resolution, visible only through secondary rays (to cast reflections, 
shadows, GI, etc.). We could then combine all the layers through deep 
compositing and could extract whatever layer we desired without 
worrying about generating the proper hold-out mattes at render time 
(which would have been impossible to manage at that level of detail).
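
The reason no hold-out mattes are needed is that every deep sample
carries its own depth, so layers rendered separately still occlude
each other correctly when merged. A minimal sketch of the merge for
one pixel (ignoring the per-sample alpha curves and premultiplication
details that real deep formats deal with):

def merge_deep_pixels(*layers):
    """Merge deep samples from several render layers for one pixel.
    Each sample is (depth, r, g, b, alpha); sorting by depth and
    compositing front to back replaces any hold-out matte."""
    samples = sorted(s for layer in layers for s in layer)
    r = g = b = 0.0
    transmittance = 1.0
    for _, sr, sg, sb, sa in samples:
        r += transmittance * sa * sr
        g += transmittance * sa * sg
        b += transmittance * sa * sb
        transmittance *= 1.0 - sa
    return r, g, b, 1.0 - transmittance

# A half-transparent crowd sample in front of a solid building sample:
crowd = [(10.0, 1.0, 0.0, 0.0, 0.5)]
building = [(50.0, 0.0, 0.0, 1.0, 1.0)]
print(merge_deep_pixels(crowd, building))  # (0.5, 0.0, 0.5, 1.0)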


In one shot, we calculated that once all the layers were merged 
together using our deep image pipeline, it added up to just over 4.2 
billion polygons... though that number is not quite exact, since we 
always loaded all assets as low-res in memory except for the visible 
elements that were being rendered in high resolution. We have a lot of 
low-res geometry that is repeated in many layers, so the exact number 
is slightly lower than the 4.2 billion polygons reported, but still... 
we ended up managing a lot of data for that show.
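
As a made-up illustration of that double counting: with 8 layers each
carrying, say, 520 million unique hi-res polygons plus the same 5
million lo-res stand-in polygons for the rest of the scene, the
per-layer counts sum to 8 x 525 million = 4.2 billion, while the true
union is 8 x 520 million + 5 million, about 4.165 billion... slightly
lower, as described above.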


Render times were also very reasonable, varying from 20 minutes to 2-3 
hours per frame rendered at 3K. Once we added up all the layers in one 
shot, it came to somewhere between 10-12 hours per frame.


We started out using Nuke to manipulate our deep images, but we ended 
up creating a custom in-house standalone application using Creation 
Platform from Fabric Engine to accelerate the deep image 
manipulations. What took hours to manage in Nuke could now be done in 
minutes, and we could now also exploit our entire render farm to 
extract the desired layers when needed.
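
Part of the win is that layer extraction is embarrassingly parallel:
every (frame, layer) pair is independent of all the others, so the
work fans out to as many machines as you can throw at it. A toy
stand-in in Python (a real pipeline submits jobs through the farm
manager rather than local processes; the names here are invented):

from concurrent.futures import ProcessPoolExecutor

def extract_layer(job):
    """Placeholder for extracting one layer of one frame from the
    merged deep images."""
    frame, layer = job
    return "seq010_%s.%04d.exr" % (layer, frame)

if __name__ == "__main__":
    jobs = [(frame, layer)
            for frame in range(1001, 1101)
            for layer in ("crowd", "vegetation", "buildings")]
    with ProcessPoolExecutor() as pool:
        outputs = list(pool.map(extract_layer, jobs))
    print(len(outputs), "layer extractions done")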


Finally, the last layer of complexity came from the interaction between 
the helicopters and the environment. We simulated and baked rotor-wash 
wind fields of air being pushed by those animated Black Hawks using 
Exocortex Slipstream. That wind field was then used to simulate dust, 
debris, tree deformations and crowd cloth simulations. Since the trees 
needed to be simulated, we created a custom ICE strand based tree 
system to deform the branches and simulate the leaves' movement from 
that wind field. Since the trees were all strand based, they were very 
light to manage and render. We had also created a custom ICE based 
crowd system for the movie Jappeloup

Re: [SItoA] White House Down

2013-07-02 Thread Greg Punchatz
Sounds brilliant. I need to see the movie now.



Sent from my iPhone


Re: [SItoA] White House Down

2013-07-02 Thread Ahmidou Lyazidi
Thanks for the details, Mathieu, very much appreciated!

---
Ahmidou Lyazidi
Director | TD | CG artist
http://vimeo.com/ahmidou/videos
http://www.cappuccino-films.com



Re: White House Down

2013-07-02 Thread Andre De Angelis
Stunning work, Mathieu!

