Thanks a lot for the in-depth replies, Jonathan! I will reach out to my 
rendering vendor of choice ASAP and get some more discussion going. There seem 
to be various approaches; so far I have found most of them intriguing in some 
ways, but lacking in other areas. I have not taken a thorough enough look at 
DCX to have an opinion. Stay tuned! :)


Cheers and thanks again,

Thorsten

---
Thorsten Kaufmann
Production Pipeline Architect

Mackevision Medien Design GmbH
Forststraße 7
70174 Stuttgart

T +49 711 93 30 48 661
F +49 711 93 30 48 90
M +49 151 19 55 55 02

[email protected]
www.mackevision.com

Managing Directors: Armin Pohl, Joachim Lincke, Jens Pohl
HRB 243735, Stuttgart Local Court


________________________________
From: Oiio-dev <[email protected]> on behalf of 
[email protected] <[email protected]>
Sent: Wednesday, December 7, 2016 8:37 PM
To: OpenImageIO developers
Subject: Re: [Oiio-dev] Deep merge

Sure thing.
It's actually an 8x8 grid (64 bits stored in two 32-bit float channels) - but 
even 8x8 is far too coarse to provide adequate resolution to capture fractional 
subpixel weighting. To support fractional weighting we split (duplicate) 
samples between full and partial subpixel coverage (not pixel coverage).
(FWIW, we found duplicating samples to be far cheaper than simply increasing 
the subpixel mask size, as increasing the mask increases the cost for all 
samples rather than just the subset that may need partial weighting.)
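
(Aside: a minimal sketch of how those 64 mask bits can be carried in two 
32-bit float channels. The floats act purely as bit carriers and are never 
interpreted numerically; the helper names are hypothetical, not the OpenDCX 
API.)

    #include <cstdint>
    #include <cstring>

    // Pack a 64-bit 8x8 subpixel mask into two 32-bit float channels by
    // reinterpreting the bit patterns (memcpy avoids aliasing issues).
    inline void packSpMask(uint64_t mask, float& ch0, float& ch1)
    {
        const uint32_t lo = static_cast<uint32_t>(mask);
        const uint32_t hi = static_cast<uint32_t>(mask >> 32);
        std::memcpy(&ch0, &lo, sizeof(float));
        std::memcpy(&ch1, &hi, sizeof(float));
    }

    inline uint64_t unpackSpMask(float ch0, float ch1)
    {
        uint32_t lo, hi;
        std::memcpy(&lo, &ch0, sizeof(uint32_t));
        std::memcpy(&hi, &ch1, sizeof(uint32_t));
        return (static_cast<uint64_t>(hi) << 32) | lo;
    }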

A renderer sampling at a high subpixel rate should have some method of 
collapsing multiple samples together, otherwise the resulting deep file ends 
up being massive. This collapsing routine changes slightly when outputting DCX 
deep samples (a code sketch of the binning step follows the list):
    * Similar samples (usually similar Z & ID, and possibly color) are 
combined together and their subpixel xy locations OR'd into the 
nearest-neighbor 8x8 grid bin.  As each sample is added to a grid bin the hit 
count for that bin is incremented. Full-coverage and partial-coverage masks 
are constructed from these bin counts, representing that sample's full and 
partial contributions. All the mask bits are exclusive, so a bit set in the 
full-coverage mask will never also be set in a partial-coverage mask, and 
vice versa.
    * There may be more than one partial-coverage mask depending on the 
combined partial weights, but again these bits are exclusive of the other 
partial masks.
    * The partial weight is stored in the DCX sample metadata as an 8-bit 
number where 0x00 = 0.0 and 0xff ≈ 0.996 (i.e. 255/256), so that 0x80 = 0.5. 
0x100 is redundant, as that indicates a full-coverage sample, for which no 
partial weight is recorded.
    * As partial contributions add up from multiple samples they may saturate 
a bin (reach 0x100), at which point the bin is switched to a full 
subpixel-coverage sample.
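
For concreteness, here is a minimal C++ sketch of that binning step. It 
assumes subpixel (sx, sy) locations in [0,1), a uniform sampler with a known 
expected hit count per bin, and a single partial mask (the 
multiple-partial-mask case is elided); all names are illustrative, not the 
OpenDCX API.

    #include <algorithm>
    #include <array>
    #include <cstdint>

    // Per-pixel accumulator for the 8x8 subpixel grid described above.
    struct SubpixelBins {
        std::array<uint16_t, 64> hits{};  // hit count per grid bin

        // OR a sample's subpixel location into the nearest-neighbor bin.
        void addSample(float sx, float sy)
        {
            const int bx = std::min(7, static_cast<int>(sx * 8.0f));
            const int by = std::min(7, static_cast<int>(sy * 8.0f));
            ++hits[by * 8 + bx];
        }

        // Split bins into exclusive full- and partial-coverage masks.
        // Partial bins also get an 8-bit weight (0x00 = 0.0 .. 0xff ~ 0.996).
        void buildMasks(uint32_t expectedPerBin,
                        uint64_t& fullMask, uint64_t& partialMask,
                        std::array<uint8_t, 64>& partialWeight) const
        {
            fullMask = partialMask = 0;
            partialWeight.fill(0);
            for (int b = 0; b < 64; ++b) {
                if (hits[b] == 0)
                    continue;
                if (hits[b] >= expectedPerBin) {
                    fullMask |= uint64_t(1) << b;   // saturated: full coverage
                } else {
                    partialMask |= uint64_t(1) << b;
                    // Quantize count/expected to 8 bits, so 0x80 = 0.5.
                    partialWeight[b] = static_cast<uint8_t>(
                        (hits[b] * 256u) / expectedPerBin);
                }
            }
        }
    };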

So, for 256 deep samples created from 256 rays hitting the same opaque 
primitive, the deep output after collapsing would be 1 full subpixel-coverage 
sample and perhaps 4 or more partial subpixel-coverage samples, depending on 
whether the primitive completely covers the pixel.  That's assuming that color 
thresholding allows all ray samples to be close enough in color to combine - 
to retain high-frequency/high-contrast specular details more samples may need 
to be generated.

During deep pixel flattening each subpixel bit is flattened individually, 
using only the samples with that bit enabled, and the full and partial 
contributions for each subpixel are integrated to form the final result.  
Partial samples are added to the composited result rather than being under'd, 
but if multiple coincident partials add up to full coverage then the result is 
under'd as a solid sample would be.
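
A simplified flattening sketch along those lines, assuming samples already 
sorted front-to-back, premultiplied color, and a decoded partial weight per 
sample; the coincident-partials-saturating-to-full case is omitted for 
brevity. Again, this is illustrative structure, not the OpenDCX flattener.

    #include <cstdint>
    #include <vector>

    // One deep sample after collapsing: premultiplied color plus the
    // exclusive coverage masks and the decoded partial weight.
    struct DeepSample {
        float r, g, b, a;       // premultiplied color and alpha
        uint64_t fullMask;      // subpixel bits with full coverage
        uint64_t partialMask;   // subpixel bits with partial coverage
        float partialWeight;    // decoded 8-bit weight: byte / 256.0
    };

    // Flatten one pixel: composite each subpixel bit independently, then
    // average the 64 results. Samples must be sorted front-to-back.
    void flattenPixel(const std::vector<DeepSample>& sorted, float out[4])
    {
        float accum[4] = { 0, 0, 0, 0 };
        for (int bit = 0; bit < 64; ++bit) {
            const uint64_t spBit = uint64_t(1) << bit;
            float c[4] = { 0, 0, 0, 0 };  // running composite for this bit
            for (const DeepSample& s : sorted) {
                if (s.fullMask & spBit) {
                    // Standard "under": attenuate by accumulated alpha.
                    const float k = 1.0f - c[3];
                    c[0] += s.r * k; c[1] += s.g * k;
                    c[2] += s.b * k; c[3] += s.a * k;
                } else if (s.partialMask & spBit) {
                    // Partial contributions are added (scaled by weight)
                    // rather than under'd; coincident partials saturating
                    // to full coverage are not handled in this sketch.
                    const float w = s.partialWeight;
                    c[0] += s.r * w; c[1] += s.g * w;
                    c[2] += s.b * w; c[3] += s.a * w;
                }
            }
            for (int i = 0; i < 4; ++i)
                accum[i] += c[i];
        }
        for (int i = 0; i < 4; ++i)
            out[i] = accum[i] / 64.0f;  // integrate over the subpixel grid
    }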


Cheers,
-jonathan

Jonathan, can you clarify how OpenDCX partitions the region of a pixel? My 
recollection is that it's inherently based on a 4x4 grid of subregions. That 
maps well to a renderer that implements some kind of stratified sampling (with 
a multiple of 4x4), but not necessarily other sampling schemes. What do you do 
with some kind of blue noise sampling that's not on a stratified grid? What do 
you do for "progressive" rendering where there's not a fixed number of samples, 
but you can always continue to generate more samples at any point and the 
placement of them spatially is a fairly opaque process?

I totally get how subpixel information lets you address a number of artifacts 
of the per-pixel deep image approach, but I just haven't quite wrapped my head 
around how it maps to different kinds of renderers and their sampling schemes.