Pascal de Bruijn <pmjdebru...@pcode.nl> writes:

I am only 95% sure of what I'm saying here, so please point out any
specific errors.

The thing I am most unclear about is whether the 12-bit or 14-bit raw
space has had any encoding applied to it, although it seems not.
(There's compression, but that's different.)

> Also, there is no good reason, why camera gamma should be similar to
> sRGB. 

(I will ignore veiling glare, even though it seems one often wants to
take that out to get back closer to scene luminance.  But that's not
usually handled via profiles, which I'd expect to be made in an
environment without extremely high scene luminances, but via black
point compensation.  Automatic black point compensation is tricky,
since it stretches contrast if there are no deep shadows in the scene,
which is conceptually distinct from removing sensor bias noise or
veiling glare.)

It's not camera gamma we are talking about, but assumed profile gamma.
The sensor is basically a linear device, so either it doesn't make sense
to talk about camera gamma, or it's 1.

It doesn't make sense to use camera bits without some input profile, or
rather at least we have to assume something about the colors the numbers
represent.  The default in ufraw from before there was any color
management support is to treat the camera as using the sRGB primaries.
But sRGB is both a set of primaries and an encoding method for mapping
a linear space into 8 bits.  To treat the pixel values as being in
sRGB, we have to not only assign camera red to sRGB red and so on, but
also map each linear value to an sRGB pixel value.  The obvious
interpretation would be to use the sRGB-specified gamma and linearity,
so that the sRGB values would correspond to the same scene luminances
as using a linear profile with the sRGB primaries.
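
For concreteness, the sRGB encoding step looks like this (a sketch in
C; the constants come from the sRGB specification, IEC 61966-2-1):

  #include <math.h>

  /* Standard sRGB encoding: maps a linear-light value in [0,1] to an
   * sRGB-encoded value in [0,1].  Below the cutoff the curve is
   * linear; above it, a power law with an offset. */
  double srgb_encode(double lin)
  {
      if (lin <= 0.0031308)
          return 12.92 * lin;
      return 1.055 * pow(lin, 1.0 / 2.4) - 0.055;
  }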

But we don't do that; we use gamma 0.45 and linearity 0.1.  A gamma of
0.45 corresponds to a decoding exponent of 1/0.45 = 2.22, while the
sRGB exponent is 2.4, and the linear segment of sRGB ends at 0.0031308
(in the linear space).  So "treat input as sRGB with gamma 0.45 and
linearity 0.1" is doing two things at once.  One is basically treating
the input as having sRGB primaries, and the other is a tone
adjustment.  I think it basically increases contrast, and I agree it
looks better.
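
To illustrate what a gamma/linearity pair describes, here is a sketch
of one way to build such a curve: linear below an input threshold,
power law above, matched for value and slope at the join.  I should
stress that this parameterization (and treating the linearity value as
the input threshold) is my assumption for illustration, not
necessarily ufraw's exact code:

  #include <math.h>

  /* Sketch of a gamma-with-linear-toe curve: linear below input
   * threshold x0, power law with exponent g above, matched for value
   * and slope at x0.  This parameterization is an assumption for
   * illustration; ufraw's actual gamma/linearity code may differ.
   * With g = 1/2.4 and x0 = 0.0031308 it lands close to the sRGB
   * curve (offset ~0.056 vs 0.055, toe slope ~12.7 vs 12.92). */
  double toe_gamma(double x, double g, double x0)
  {
      double t = (1.0 - g) * pow(x0, g);
      double f = t / (1.0 - t);                    /* power-law offset */
      double s = (1.0 + f) * g * pow(x0, g - 1.0); /* toe slope */
      if (x < x0)
          return s * x;
      return (1.0 + f) * pow(x, g) - f;
  }

Under these assumptions, toe_gamma(x, 0.45, 0.1) would be the "gamma
0.45, linearity 0.1" curve; at x = 0.2, for instance, it gives about
0.36 where the real sRGB curve gives about 0.48, which is consistent
with the contrast increase described above.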

One of the great mysteries to me has been why this tone transformation
is useful.  I've talked about replacing it with the theoretically
correct gamma/linearity and then an explicit, intentional tone
transformation.  We're trying to get nice pictures, not a photometric
measurement of scene luminance.  I would like to be able to understand
all the deviations from measurements of scene luminance, though, and
some ramblings about this are in README-processing.txt.
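
What that could look like, as a sketch (the contrast curve here is a
placeholder of my own choosing, not anything ufraw implements):

  #include <math.h>

  double srgb_encode(double lin);  /* as sketched above */

  /* Explicit tone adjustment, separated out from the colorimetric
   * encoding: contrast > 1 pushes values away from middle gray, and
   * 1.0 is the identity.  Purely illustrative. */
  double tone_adjust(double v, double contrast)
  {
      double d = 2.0 * (v - 0.5);
      return 0.5 + copysign(0.5 * pow(fabs(d), 1.0 / contrast), d);
  }

  double develop(double lin)
  {
      return tone_adjust(srgb_encode(lin), 1.2);  /* 1.2 is arbitrary */
  }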

When using an actual profile, it can be a linear profile, expecting
the input to be linear, or it can be a profile encoded with the sRGB
rules, or a profile encoded in some other way.  For profiles with sRGB
encoding rules, we have to transform the input into a pseudo-sRGB
space first.  For linear profiles, we should not do this transform.  I
think sRGB-encoded profiles exist because they are viewed as a way to
correct data that has already been mapped into the sRGB encoding so
that it shows correct sRGB colors, but I find linear profiles more
natural - as you say, sRGB has nothing to do with sensors.
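
A rough sketch of the two cases, using the lcms2 API (ufraw uses lcms
for profile conversions).  Note the expects_srgb_encoding flag is
hypothetical: whether a profile wants linear or sRGB-encoded input has
to be known from how the profile was made.

  #include <lcms2.h>
  #include <math.h>

  void apply_input_profile(cmsHPROFILE camera, cmsHPROFILE target,
                           unsigned short *in, unsigned short *out,
                           int npixels, int expects_srgb_encoding)
  {
      if (expects_srgb_encoding) {
          /* map linear camera values into pseudo-sRGB space first */
          for (int i = 0; i < 3 * npixels; i++) {
              double lin = in[i] / 65535.0;
              double enc = lin <= 0.0031308 ? 12.92 * lin
                         : 1.055 * pow(lin, 1.0 / 2.4) - 0.055;
              in[i] = (unsigned short)(enc * 65535.0 + 0.5);
          }
      }
      cmsHTRANSFORM t = cmsCreateTransform(camera, TYPE_RGB_16,
                                           target, TYPE_RGB_16,
                                           INTENT_PERCEPTUAL, 0);
      cmsDoTransform(t, in, out, npixels);
      cmsDeleteTransform(t);
  }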

I suggested a profile/gamma/linearity tuple because one has to use the
same preprocessing when applying a profile as was used when the
profile was created.  I don't fully understand how this works
mechanically - loading a camera-specific profile can change the
default gamma/linearity values, and I have one profile with gamma 0.45
and linearity 0.
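
As a sketch, the tuple would be something like the following; the
field names are my invention, not ufraw's actual configuration:

  /* A profile only makes sense together with the gamma/linearity
   * preprocessing that was in effect when it was created. */
  typedef struct {
      char  *profile_filename;  /* the ICC profile itself */
      double gamma;             /* preprocessing gamma, e.g. 0.45 */
      double linearity;         /* preprocessing linearity, e.g. 0.10 */
  } input_profile;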

> The sRGB standard is more or less the common denominator of all
> el cheapo screens.  There is no relation to cameras...

Agreed - except when we assume camera input is like sRGB, which used
to be ufraw's assumption.  It's actually pretty close: doing the
gamma/linearity conversion on raw pixels and assuming sRGB primaries
gets entirely usable images.

> Each type of camera sensor can have its own gamma characteristics.
> And besides the actual technical accuracy of said gamma, there are
> always artistic considerations.  I think manufacturers ramp up the
> contrast a bit.

I believe that raw gamma is basically 1.  The contrast adjustment
comes in when converting raw to JPEG/TIFF.  Actually, I think it can
go the other way from increased contrast, since the task is mapping
scene luminances to file luminances to print luminances: prints handle
about a 1:50 or 1:100 luminance range, and high-contrast daylight
scenes exceed that - or so the black-and-white film texts say.  I'm
not confident enough in my understanding of raw processing to be sure
we are measuring this correctly.
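
For a back-of-the-envelope sense of those ranges in stops (factors of
two; the 1:1000 scene figure is illustrative, not a measurement):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      printf("1:50 print   = %.1f stops\n", log2(50.0));    /* ~5.6 */
      printf("1:100 print  = %.1f stops\n", log2(100.0));   /* ~6.6 */
      printf("1:1000 scene = %.1f stops\n", log2(1000.0));  /* ~10.0 */
      return 0;
  }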

> Anyway, to me it seems we should by default try to emulate the
> manufacturer's gamma/linearity as closely as possible.

I think this means using linear profiles and avoiding the whole issue of
ufraw converting into faux sRGB.

This is all far more relevant with a color matrix, which maps the
camera RGB to the sRGB primaries; but the 9 values of the matrix don't
address or affect gamma/linearity at all.  The entire notion that the
color matrix is a reasonable approximation to a profile is based on
the sensor being linear and thus having gamma 1.  Alternatively: since
color matrices can only represent profiles whose behavior does not
depend on input level, they only make sense with linear sensors.
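
A sketch of what a color matrix does, and why it presumes linearity
(the matrix values below are made up; the rows sum to 1 so that white
maps to white):

  /* A plain 3x3 multiply applied to LINEAR camera RGB.  The same
   * mixing is applied at every intensity level, which is only a
   * sensible model when the sensor response is linear (gamma 1). */
  static const double cam_to_srgb[3][3] = {
      {  1.9, -0.7, -0.2 },
      { -0.3,  1.6, -0.3 },
      {  0.0, -0.6,  1.6 },
  };

  void apply_matrix(const double in[3], double out[3])
  {
      for (int i = 0; i < 3; i++)
          out[i] = cam_to_srgb[i][0] * in[0]
                 + cam_to_srgb[i][1] * in[1]
                 + cam_to_srgb[i][2] * in[2];
  }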
