RE: Multi-Layered .EXR - The revival...

2015-08-29 Thread Schoenberger
Hi
 
> make no mention of "interleave options" or the like.

The "interleave" option is not what you are searching for.
It is "multi-part" that needs to be enabled. This way Nuke can read one part 
instead of the whole file.
The interleave is just a setting for which layers/channels go into which part.
If your renderer or conversion tool writes multi-part images and you do not 
have any interleave option, then it is one AOV/render
layer per part. 
 
Afaik no renderer does this at the moment, for compatibility reasons.
I have written a test app that prints how your .exr file is organized; it can be 
used to check what kind of .exr your renderer
writes. I have to check whether it requires a license; if not, you could 
use it.
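Holger's test app isn't public, but whether a file is multi-part can already be checked with a few lines of stdlib Python. This is a minimal sketch, not his tool: it decodes only the 8-byte OpenEXR preamble (magic number plus version/flags word), not the per-part headers.

```python
import struct

EXR_MAGIC = 20000630  # 0x01312F76, the 4-byte magic at the start of every OpenEXR file

def exr_flags(preamble: bytes) -> dict:
    """Decode the 8-byte OpenEXR preamble (magic + version/flags word)."""
    magic, version = struct.unpack("<ii", preamble[:8])
    if magic != EXR_MAGIC:
        raise ValueError("not an OpenEXR file")
    return {
        "version": version & 0xFF,            # file format version: 1 or 2
        "tiled": bool(version & 0x200),       # single-part tiled image
        "long_names": bool(version & 0x400),  # attribute names may exceed 31 chars
        "deep": bool(version & 0x800),        # non-image (deep) data present
        "multipart": bool(version & 0x1000),  # the flag Nuke needs for per-part reads
    }
```

Usage would be something like `exr_flags(open("render.exr", "rb").read(8))` on whatever your renderer writes.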
 
 
Holger Schönberger
technical director
The day has 24 hours, if that does not suffice, I will take the night

 



From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Jason S
Sent: Friday, August 28, 2015 8:04 PM
To: softimage@listproc.autodesk.com
Subject: Re: Multi-Layered .EXR - The revival...



Indeed so it seems!  

and still looks very new because each renderer's output options (at least in 
the docs) make no mention of interleave options or
the like.

So I wonder what the defaults generally are for multichannel EXR outputs. 
(hum...)


On 08/28/15 8:36, Jens Lindgren wrote:


Yeah you're right. But just using exr 2.x isn't enough to ensure multi-layered 
files are read fast. You have to save the exr in a
specific way.

From Nuke's help:



Notes on Rendering OpenEXR Files


Nuke supports multi-part OpenEXR 2.0.1 files, which allow you to store your 
channels, layers, and views in separate parts of the
file. Storing the data this way can make loading .exr files faster, as Nuke 
only has to access the part of the file that is
requested rather than all parts. However, for backwards compatibility, you also 
have the option to render your .exr files as
single-part images.

To set how the data is stored in your rendered .exr file, open the Write 
properties and set interleave to:

• channels, layers and views - Write channels, layers, and views into the same 
part of the rendered .exr file. This creates a
single-part file to ensure backwards compatibility with earlier versions of 
Nuke and other applications using an older OpenEXR
library.

• channels and layers - Write channels and layers into the same part of the 
rendered .exr file, but separate views into their own
part. This creates a multi-part file and can speed up Read performance, as Nuke 
only has to access the part of the file that is
requested rather than all parts.

• channels - Separate channels, layers, and views into their own parts of the 
rendered .exr file. This creates a multi-part file and
can speed up Read performance if you work with only a few layers at a time.
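The three modes above map onto the Write node's interleave knob, which is also scriptable from Nuke's Python API. A minimal sketch (the output path is made up, and the knob values are the strings quoted from the docs above):

```python
# Runs inside Nuke's embedded Python interpreter (script editor / init.py).
import nuke

# Hypothetical output path; file_type "exr" is what exposes the interleave knob.
write = nuke.nodes.Write(file="/tmp/render.####.exr", file_type="exr")

# One of: "channels, layers and views" (single-part, most compatible),
# "channels and layers", or "channels" (fully multi-part, fastest reads).
write["interleave"].setValue("channels")
```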




So to get the same speed with a multi-layered exr that you have with separate 
exr files, you have to use exr 2.x, and only then
do you have the choice to save a file in what Nuke calls channels and layers (fast) or 
channels (faster) mode. But this breaks backwards
compatibility, so the files cannot be opened in applications that don't support 
exr 2.


On Thu, Aug 27, 2015 at 7:17 PM, Jason S jasonsta...@gmail.com wrote:


   
On 08/27/15 8:44, Jens Lindgren wrote:
> This slowness is solved with EXR 2.0 but Softimage isn't using that.

Correction: MentalRay in SI may not be using that,

but Arnold, Redshift, and 3Delight in SI do (among many other updates),
including things like VDB support etc.
(None of which would have happened had there not been any usage/demand, more 
than a year post EOL.)

Recent changes (2015) for Arnold & 3Delight for Softimage
(Redshift updates would just be too long to list):


 https://support.solidangle.com/display/SItoAUG Arnold for Softimage


3.8 July 13, 2015

*   Faster export times overall on Windows (up to 20% faster).
*   Faster export of polygon meshes (up to 35% faster).
*   Much faster export of ICE instances (up to 5x faster) and ICE primitive 
cylinders, discs, cones, boxes (up to 3x faster).
*   Faster rendering when using SItoA shaders on Windows (up to 15% faster).

3.7 June 10, 2015

*   Faster cutout texture-mapped opacity, with more accurate renders (in 
previous versions, an object was rendered more transparent when it was 
further away from the camera).
*   Multiple scattering for volumes: indirect light in volumes now supports 
an arbitrary number of bounces instead of being fixed to one bounce.
*   Per-light volume contribution: a volume contribution scaling parameter 
was added to lights, similar to the existing diffuse and specular parameters.
*   Deep volume output support: volumes are now visible in deep renders 
(note that older atmosphere shaders and volumetric mattes are not supported 
yet).
*   The volume property now shows the names of available grids in VDB files.

Re: Multi-Layered .EXR - The revival...

2015-08-29 Thread Jason S

  
  
Hum! Interesting. So not to confuse multi-part with multi-channel!

And until multi-part becomes more commonplace (a question of time, no doubt), 
you could use a unique render manager with one of many unique features: 
converting your outputs to multi-part EXRs as the images are written. :)


EXR Crop and EXR 2.0 Multi-Part speed up compositing by huge factors; for one 
animation feature the speed gain was 100x (depends on how much empty space your 
image has and how many layers the EXR file has).

___
3. If you use Nuke 8, then enable "Convert to EXR 2.0 Multi-Part images" in 
rrConfig, tab Image. This has a huge effect if you render Exr files with many 
render layers/elements (e.g. beauty, AO, specular, ...).

If you load one layer/element from an Exr v1.0 file in Nuke, then Nuke has to 
load all layers; it loads the whole file.

If you load one layer/element from an Exr v2.0 file in Nuke, then Nuke loads 
only the required layer.

  


  

  
  
  
On 08/29/15 7:49, Schoenberger wrote:
> [quoted message trimmed - identical to Schoenberger's reply above]

Re: Friday Flashback #238

2015-08-29 Thread peter_b

Hm - me getting it wrong is certainly a possibility.
But when I'm picking my memory about this, little bits and pieces seem to 
bubble up.


There was this studio owner who had a DS, and he came by all excited, asking 
me for a softimage 3D scene and corresponding render for testing. I was 
surprised since they didn’t do 3D at their studio - he explained it was for 
testing on the DS and I didn’t quite get what they were trying to do.
The 'clip on the timeline, done with softimage 3D' (under the hood) sure 
sounds like what I have in mind.
It surely wasn’t a 3D import, I don’t recall seeing any wireframe - just 
entering the path to the 3D scene and probably just horizontal and vertical 
transform parameters. Probably the texture scaled at SD video resolution was 
the source layer for the clip.
I'm not sure about the version - but it was '98 or '99, certainly before v4, and 
quite possibly not a public release. I recall having a discussion about what 
sense it made to do a correction on a system that was billed 4 times more 
than a 3D station. If you had a correction, just send it to the people who 
made the 3D. In any case, what were the odds of having a client requesting a 
correction on shifting a texture a bit left or right on the 3D model, that 
was already rendered and delivered? It would be a miracle for that one to 
come up I thought.
I remember looking for buckets vs scanlines being rendered to confirm that 
it was mental ray which didn’t do scanline to my knowledge - and remember 
that it seemed very slow. I did 3D on sgi/irix and DS being on 
win/intergraph, I expected rendering to be very fast on a PC. I was 
constantly asking my studio to get me one.


Sounds like it might have happened - you tell me?



-Original Message- 
From: Luc-Eric Rousseau

Sent: Saturday, August 29, 2015 3:21 AM
To: softimage@listproc.autodesk.com
Subject: Re: Friday Flashback #238

> I remember opening a softimage 3D asset in the DS timeline, and changing the
> texture placement on it, and having it re-render, right there in the editing
> timeline, with mental ray - 15 years ago. It wasn't all that useful, but it
> hinted of some very exciting future links between 3D and editing/comp.


Did you dream it?  DS never really had that, they always dreamed of
having mental ray or importing 3D scenes.

In 1997 there was a pre-Sumatra demo shown around where you could
create a 3D clip on the timeline and paint on it, that was built
using Softimage|3D.  It never passed the prototype stage.  At one
point a colleague  put the dotXSI viewer as a plugin in DS and that
was a demo which AFAIK never shipped.  You could do basic 3D with the
built-in Marquee tool in DS - a product Avid acquired - that's it.
All of these were OpenGL only.

That said, you were able to import Softimage|3D scenes in Eddie and
render them in there.


> But then Avid drove a wedge between DS and XSI,


Avid wasn't smart enough to scheme like that and AFAIK didn't do any
such thing.  Early on, we couldn't get anything done with the source
code constantly being changed by another team with their own priorities
while we were trying to wind down and ship, so we branched out.

After XSI v1.0, it was very difficult to consider merging back because
it was emergencies after emergencies, and XSI wasn't made to run inside
other applications, so it took over a lot of things that DS had other
ideas for, and there were conflicting changes in both branches.  Also,
their code was increasingly not portable back to unix, something they
didn't care about.  And that would eventually lead to their demise, as
they couldn't port anything to the Mac, where Avid wanted to be.

And we disagreed on many things.  For example, the DS team wanted to
control all the UI like the FCurve editor, but wanted it focused on
non-animators, or to control the architecture of operators to conform
it to their vision.   So you're trying to make a 3D animation product,
but you have to negotiate with another team that wants you to do
things for them and their clients.  You have to explain, justify and
negotiate everything.  Same thing for the mixer UI or the rendertree:
they wanted to own that, but on their own terms.

The principle of DS is that DS provides everything as a shared service
(e.g. the FCurve editor, toolbars, menus, hotkey mapping, etc.) and then
you can plug your mini-app into it as a plug-in, a clip on its timeline.
Only one such third-party plugin was ever made: Toonz.

In retrospect it's DS that should have been built on XSI, not the
reverse - but DS shipped 2 years before XSI v1.0.   Because 3D apps
have become frameworks, XSI is the one that's the superset, with
scripting, expressions, construction history, lots of viewport tools,
etc.  But in the DS team's mind, the NLE market was 100 times bigger and
the 3D market was shrinking, so it should be up to the 3D team to
follow, not the reverse.  Different points of view!

In any case, nowadays it's kind of illogical to think of a Softimage
as a 

Re: Friday Flashback #238

2015-08-29 Thread peter_b
'Each application also needed to go in directions that didn't make sense 
for the other.'


yes of course.
I've always been hoping for more convergence/integration between 3D and comp 
in one streamlined package.
but then there's the hard reality of studios, and their needs and projects - 
which lie elsewhere, 3D and postprod/editing being quite different crowds.





-Original Message- 
From: Matt Lind

Sent: Saturday, August 29, 2015 12:20 AM
To: softimage@listproc.autodesk.com
Subject: Re: Friday Flashback #238

If memory serves, the main reason for splitting DS and XSI was
architectural, not sales driven.  XSI needed more than DS could provide, and
vice versa.  Each application also needed to go in directions that didn't
make sense for the other.  'Twister' was split for the incompatibility
reasons as well.

Yes, very exciting but unfulfilled dream.  What should've been.


Matt




Date: Fri, 28 Aug 2015 22:40:20 +0200
From: pete...@skynet.be
Subject: Re: Friday Flashback #238
To: softimage@listproc.autodesk.com

ah
DS discontinued by Avid and XSI discontinued by AD.
and what's there to fill that particular void?

they shared architecture and interface to a degree, and both had some very
interesting forward thinking (visionary?) concepts at their origin.
I remember opening a softimage 3D asset in the DS timeline, and changing the
texture placement on it, and having it re-render, right there in the editing
timeline, with mental ray - 15 years ago. It wasn't all that useful, but it
hinted of some very exciting future links between 3D and editing/comp.
But then Avid drove a wedge between DS and XSI, pushing DS into a very
awkward position in the Avid portfolio, and XSI into a kind of no man's land
- like an unwanted child they ended up with, not knowing what to do with.
Somehow, that child managed to survive Avid and even start to show promise,
then got sold off to AD, and even survived that and prospered. A while.

I guess the industry as a whole didn't need that integrated Digital Studio,
and few really used DS and XSI in tandem - but I feel we are all the poorer
without it.
Sure, there's some interesting convergence happening between 3D and comp
these days - but how I miss that particular Softimage spin on it.