Sure thing.
It's actually an 8x8 grid (64 bits, stored in two 32-bit float channels) - but even 8x8 is far too coarse to provide adequate resolution to capture fractional subpixel weighting. To support fractional weighting we split (duplicate) samples between full and partial subpixel-coverage (not pixel-coverage).
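Here's a rough sketch of that two-channel packing: a 64-bit subpixel mask split into two 32-bit halves, each half's raw bit pattern reinterpreted as a float channel value. The function names are hypothetical and this is not OpenDCX's actual API, just an illustration of the storage idea.

```python
import struct

def pack_mask(mask64):
    """Split a 64-bit subpixel mask into two 32-bit halves and
    reinterpret each half's bit pattern as a float channel value."""
    lo = mask64 & 0xFFFFFFFF
    hi = (mask64 >> 32) & 0xFFFFFFFF
    # Reinterpret the raw bits as IEEE-754 floats (no numeric conversion).
    f_lo = struct.unpack('<f', struct.pack('<I', lo))[0]
    f_hi = struct.unpack('<f', struct.pack('<I', hi))[0]
    return f_lo, f_hi

def unpack_mask(f_lo, f_hi):
    """Recover the 64-bit mask from the two float channel values."""
    lo = struct.unpack('<I', struct.pack('<f', f_lo))[0]
    hi = struct.unpack('<I', struct.pack('<f', f_hi))[0]
    return (hi << 32) | lo
```

The reinterpretation must be bit-exact (a plain int-to-float conversion would not round-trip), which is why the bits are shuttled through `struct` rather than cast numerically.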
(FWIW, we found duplicating samples to be far cheaper than simply increasing the subpixel mask size, since a larger mask increases the cost for all samples rather than just the subset that may need partial weighting.)

A renderer sampling at a high subpixel rate should have some method of collapsing multiple samples together, otherwise the resulting deep file ends up being massive. This collapsing routine changes slightly when outputting dcx deep samples:
    * Similar samples (usually similar Z & ID, and possibly color) are combined together and their subpixel xy locations OR'd into the nearest-neighbor 8x8 grid bin.  As each sample is added to a grid bin the hit count for that bin is incremented. Full-coverage and partial-coverage masks are constructed from these bin counts, representing that sample's full and partial contributions. All the mask bits are exclusive, so a bit set in the full-coverage mask will never also be set in a partial-coverage mask, and vice-versa.
    * There may be more than one partial-coverage mask depending on the combined partial weights, but again these bits are exclusive with the other partial masks.
    * The partial weight is stored in the dcx sample metadata as an 8-bit value, where 0x00=0.0 and 0xff=0.996 so that 0x80=0.5. 0x100 is redundant since that indicates a full-coverage sample, and thus no partial weight is recorded.
    * As partial contributions from multiple samples add up they may saturate a bin (reach 0x100), at which point it is switched to a full subpixel-coverage sample.
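The bin-count-to-mask step above can be sketched roughly as follows. This is a hedged illustration, not the real OpenDCX code: the function name is hypothetical, and it assumes the sampler's nominal rays-per-bin count is known so per-bin hit counts can be scaled to the 8-bit weight range where 0x100 means full coverage.

```python
def collapse_bins(hits, samples_per_bin):
    """hits: length-64 list of per-bin hit counts for one combined deep
    sample on the 8x8 grid (bin index = y*8 + x).
    samples_per_bin: the sampler's nominal ray count per grid bin.
    Returns (full_mask, partials): full_mask is a 64-bit mask of fully
    covered bins; partials maps an 8-bit weight (0x01..0xff) to a
    64-bit mask of bins sharing that partial weight."""
    full_mask = 0
    partials = {}
    for b, n in enumerate(hits):
        if n == 0:
            continue
        w = (n * 0x100) // samples_per_bin   # scale so 0x100 == full coverage
        if w >= 0x100:
            full_mask |= 1 << b              # saturated: full-coverage bit
        elif w > 0:
            # Bits here are exclusive with full_mask and with the
            # other partial masks (one weight bucket per bin).
            partials[w] = partials.get(w, 0) | (1 << b)
    return full_mask, partials
```

Note how a bin appears in exactly one mask, matching the exclusivity rule above, and how a saturated bin (weight reaching 0x100) is promoted to the full-coverage mask rather than being stored as a partial weight.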

So, for 256 deep samples created from 256 rays hitting the same opaque primitive, the deep output after collapsing would be 1 full subpixel-coverage sample and perhaps 4 or more partial subpixel-coverage samples, depending on whether the primitive completely covers the pixel.  That's assuming that color thresholding allows all ray samples to be close enough in color to combine - to retain high-frequency/high-contrast specular details more samples may need to be generated.

During deep pixel flattening each subpixel bit is flattened individually, using only the samples with that bit enabled, and the full and partial contributions for each subpixel are integrated to form the final result.  Partial samples are added to the composited result rather than being under'd, but if multiple coincident partials add up to full coverage then the result is under'd as a solid sample would be.
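A highly simplified sketch of that per-subpixel flattening loop, under stated assumptions: samples arrive front-to-back, consecutive partials are treated as coincident, and the tuple fields are hypothetical stand-ins for the real sample data. It is meant only to show the add-vs-under distinction, not OpenDCX's actual flattening code.

```python
def flatten_subpixel(samples):
    """Flatten the samples covering one subpixel bit.
    samples: front-to-back list of (color, alpha, full, weight), where
    `full` marks a full-coverage sample and `weight` in (0,1] is the
    partial weight.  Returns the (color, alpha) for this subpixel."""
    out_c, out_a = 0.0, 0.0
    pc, pa, pw = 0.0, 0.0, 0.0   # accumulator for coincident partials

    def flush():
        nonlocal out_c, out_a, pc, pa, pw
        if pw >= 1.0:
            # Coincident partials summed to full coverage:
            # under the merged result like a solid sample.
            out_c += pc * (1.0 - out_a)
            out_a += pa * (1.0 - out_a)
        else:
            # Otherwise partials are simply added to the composite.
            out_c += pc
            out_a += pa
        pc = pa = pw = 0.0

    for c, a, full, w in samples:
        if full:
            if pw > 0.0:
                flush()
            out_c += c * (1.0 - out_a)   # standard UNDER operation
            out_a += a * (1.0 - out_a)
        else:
            pc += c * w
            pa += a * w
            pw += w
    if pw > 0.0:
        flush()
    return out_c, min(out_a, 1.0)
```

Running this over each of the 64 subpixel bits and averaging the results would produce the final flattened pixel, with partial-coverage edges contributing fractionally instead of occluding everything behind them.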


Cheers,
-jonathan

Jonathan, can you clarify how OpenDCX partitions the region of a pixel? My recollection is that it's inherently based on a 4x4 grid of subregions. That maps well to a renderer that implements some kind of stratified sampling (with a multiple of 4x4), but not necessarily other sampling schemes. What do you do with some kind of blue noise sampling that's not on a stratified grid? What do you do for "progressive" rendering where there's not a fixed number of samples, but you can always continue to generate more samples at any point and the placement of them spatially is a fairly opaque process?

I totally get how subpixel information lets you address a number of artifacts of the per-pixel deep image approach, but I just haven't quite wrapped my head around how it maps to different kinds of renderers and their sampling schemes.
_______________________________________________
Oiio-dev mailing list
[email protected]
http://lists.openimageio.org/listinfo.cgi/oiio-dev-openimageio.org
