It would be nice to have a convention that does not need a secret decoder ring 
to understand. Since the time of our dinomammalian forebears, a depth buffer 
has stored perpendicular distance from the image plane. A radial distance from 
the camera's projection point should be named explicitly, not hidden behind a 
magic mode. 

Keep z as is; if one needs something like this, call it something explicit 
like 'distance from camera', but please don't overload the meaning of z. 

- Nick


> On Jun 4, 2014, at 9:50, "Larry Gritz" <[email protected]> wrote:
> 
> Interesting. I take back what I said; this isn't quite a closed case.
> 
> I think "z" and "depth" are widely understood to be (B) (even if not always 
> implemented as such by every last renderer), though this does not generalize 
> to all possible transformations. For a spherical projection, say, it does 
> seem better to store distance, but it would be very confusing to call it "Z". 
> (One might argue that it should have been distance from the start, 
> "d-buffer", not "z-buffer", so as to generalize, but that's water under the 
> bridge at this point.)
> 
> The EXR spec is pretty clear that deep files need a "Z" channel. What happens 
> for a different projection? Or, for example, is a "deep" latlong environment 
> map allowed? To the extent that anyone cares, I would propose one of the 
> following solutions:
> 
> 1. Change the spec to allow "Z" -OR- "distance" (and "distanceBack"?), and 
> nonstandard projections should have that channel rather than Z.
> 
> 2. Keep the name "Z" always but dictate that for any projection that can be 
> specified by a 4x4 matrix, "Z" means depth, but for non-matrix-based 
> projections, it means distance.
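> 
> To make option 1 concrete, a reader under that scheme would probe for 
> either channel name. A minimal sketch against the OpenEXR C++ API (the 
> helper and its preference order are mine, nothing the spec dictates):
> 
>     #include <ImfHeader.h>
>     #include <ImfChannelList.h>
> 
>     // Return the name of the depth-carrying channel in a header:
>     // planar "Z" if present, else radial "distance", else null.
>     const char *depthChannelName (const Imf::Header &hdr)
>     {
>         const Imf::ChannelList &channels = hdr.channels();
>         if (channels.findChannel ("Z"))        return "Z";        // depth (B)
>         if (channels.findChannel ("distance")) return "distance"; // radial (A)
>         return 0;
>     }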
> 
> 
> 
>> On Jun 3, 2014, at 12:32 PM, Jonathan Litt <[email protected]> wrote:
>> 
>> For what it's worth, V-Ray also uses type A) for its standard z-depth 
>> buffer. I inquired about this a long time ago and they had a reasonably 
>> logical answer: B) only works for standard camera projections, breaking 
>> down for other projection types such as spherical and cylindrical, so they 
>> went with A) for consistency. They could probably be convinced to add an 
>> option to use B) for regular cameras, but it was easy enough to write a 
>> conversion expression in Nuke and we didn't bother pursuing it further. 
>> It's also easy to generate a "camera space P" AOV and get B) from that. 
>> Also, they do use B) for the native depth channels in deep EXR 2.0 files, 
>> which seems like an admission that the old way is just legacy at this point.
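>> 
>> For reference, that conversion is just a per-pixel cosine correction. A 
>> rough sketch in C++, assuming a simple pinhole model with the principal 
>> point and focal length expressed in pixel units (my parameterization, not 
>> V-Ray's actual metadata):
>> 
>>     #include <cmath>
>> 
>>     // Convert radial distance-from-camera (A) to planar depth (B) at
>>     // pixel (x, y), given principal point (cx, cy) and focal length f,
>>     // all in pixel units.
>>     float radialToPlanar (float d, float x, float y,
>>                           float cx, float cy, float f)
>>     {
>>         float dx = x - cx, dy = y - cy;
>>         // cosine of the angle between this pixel's ray and the optical axis
>>         float cosTheta = f / std::sqrt (dx*dx + dy*dy + f*f);
>>         return d * cosTheta;
>>     }
>> 
>> Going the other way is a division by the same factor, and given a "camera 
>> space P" AOV, B) is just the z component of P (up to sign convention).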
>> 
>> My $.02: no harm in asking 3delight to add an option for this.
>> 
>> 
>> 
>> On Friday, May 30, 2014 12:21 PM, Larry Gritz <[email protected]> wrote:
>> 
>> 
>> "depth" (aka "Z") always means your choice B. That's true for every 
>> textbook, file format, or renderer, from OpenGL z-buffers to RenderMan 
>> shadow map files.
>> 
>> I can't speak for 3delight, but if your interpretation is correct, they are 
>> just wrong (and incompatible with other renderers they try hard to be 
>> compatible with), or have chosen a very strange naming convention that is 
>> different from the rest of the computer graphics field.
>> 
>> 
>> 
>>> On May 29, 2014, at 6:34 PM, Daniel Dresser <[email protected]> 
>>> wrote:
>>> 
>>> I'm not exactly sure what the best way of wording this question is, which 
>>> may be why I haven't turned up many answers in my searching.  Hopefully 
>>> someone here can suggest the best terminology and/or point me to an answer.
>>> 
>>> Assuming that we want to store depth in an image using unnormalized world 
>>> space distance units, there are two main ways we could do this:
>>> A) Distance from the point location of the camera (i.e., if the camera is 
>>> facing directly at a flat plane, the depth value is highest at the corners 
>>> and lowest in the middle)
>>> B) Distance from the image plane (i.e., if the camera is facing directly at 
>>> a flat plane, the depth value is constant)
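>>> 
>>> (Concretely: with the camera one unit from that flat plane, B) reads 1.0 
>>> at every pixel, while A) reads 1.0 only at the center and sqrt(2) ~ 1.414 
>>> for a ray 45 degrees off-axis; per pixel the two are related by 
>>> z = d * cos(theta), where theta is the ray's angle off the optical axis.)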
>>> 
>>> The depth channel in an OpenEXR image is by convention named Z, which 
>>> suggests interpretation B), where depth is measured orthogonally to the 
>>> image X/Y plane.
>>> 
>>> I tried looking through the document "Interpreting OpenEXR Deep Pixels" for 
>>> any sort of suggestion one way or another, but all I could find was:
>>> "Each of these samples is associated with a depth, or distance from the 
>>> viewer".  I'm not sure how to parse this - it's either defining depth as 
>>> "distance from the viewer", which suggests A), or it is saying you could 
>>> use either A) or B).
>>> 
>>> Is there a convention for this in OpenEXR?  The two renderers I currently 
>>> have convenient access to are Mantra, which does B), and 3delight, which 
>>> does A).  I'm wondering whether I should try to pressure 3delight to 
>>> switch to B), or whether our pipeline needs to support and convert between 
>>> both.  It shouldn't be hard to convert back and forth, but it's one more 
>>> confusing thing that can go subtly wrong when moving data between renderers.
>>> 
>>> -Daniel
>> 
>> --
>> Larry Gritz
>> [email protected]
>> 
> 
> --
> Larry Gritz
> [email protected]
> 
_______________________________________________
Openexr-devel mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/openexr-devel