Dave, are you maybe doing the sRGB->linear conversion backwards?

JPEG is (by its specification) sRGB, which specifies a nonlinear response that 
is close to having a 1/2.2 gamma baked in (an ok approximation, but it's ever 
so slightly weirder, see http://en.wikipedia.org/wiki/SRGB). This is because 
our eyes are more sensitive to fine gradations in the dim part of the range 
than in the bright part, so with only 8 bits to work with, the gamma curve 
functions to spread the encoded values roughly in perceptually equal (but 
not photometrically equal) increments. I know you know this, I'm explaining for 
anybody else following along.
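
If anybody wants to see just how "slightly weirder" it is, here's a quick
sketch (plain Python, nothing OIIO-specific, names made up) of the official
piecewise sRGB decode next to the plain 2.2 power:

    def srgb_to_linear (x):
        # x is the encoded sRGB value in [0,1]
        if x <= 0.04045:
            return x / 12.92
        return ((x + 0.055) / 1.055) ** 2.4

    def approx_to_linear (x):
        return x ** 2.2

    for v in (0.1, 0.5, 0.9):
        print (v, srgb_to_linear(v), approx_to_linear(v))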

So, the first thing you do with your jpeg (or the sRGB ramp you create in your 
example script, thanks for that) is raise to the 0.45 power (close to 1/2.2). 
This seems backwards to me! To get a photometrically linear value, you should 
be raising the sRGB-encoded value to ^2.2. (Somebody please tell me I'm right, 
and not simply that I haven't had enough coffee today.)

Second, you are doing the power (right or wrong) "in place" in an 8 bit buffer. 
Obviously, for any function f() that is not the identity function or a 
"shuffle", if you have 256 discrete input values, the set of f(input), rounded 
to only 256 discrete output values, will have to map multiple input values to 
the same output value in at least one case, thereby losing precision. The 
solution here is that rather than doing the power in place, you should make a 
new buffer (float is best), like this:

    newspec = rampSpec
    newspec.set_format (OpenImageIO.FLOAT)
    newbuf = OpenImageIO.ImageBuf (newspec)

    OpenImageIO.ImageBufAlgo.pow (newbuf,
                                  rampBuf,
                                  2.2)     # I think you want 2.2 here

    # If you're done with the original 8 bit buffer at this point, the following
    # will just replace it with your corrected float buffer, and free the old one
    rampBuf = newbuf
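
To see the precision loss concretely, here's a little standalone illustration
(plain Python, nothing OIIO-specific): push all 256 possible 8 bit codes
through x**2.2, once keeping the result in float and once re-quantizing back
to 8 bits:

    codes = [i / 255.0 for i in range(256)]
    as_float = [c ** 2.2 for c in codes]
    as_uint8 = [round(c ** 2.2 * 255.0) for c in codes]
    print (len(set(as_float)), "distinct values survive in float")    # 256
    print (len(set(as_uint8)), "distinct values survive in 8 bits")   # well under 256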


I'll answer some other questions inline...


On Sep 21, 2014, at 7:39 AM, Dave Lajoie <[email protected]> 
wrote:

> Hello Guys,
> 
> I have a slate generation tool which takes any file format and generates 
> slated image sequences, or still images. I have a problem with jpeg source 
> files, where there are quantize / banding issues (it gets worse once 
> processed). So I have searched the oiio doc, and found the section about how 
> to resolve quantize issues.
> 
> However, despite my best efforts, I cannot get rid of the quantization.
> 
> Long story short, here is what I am attempting to do in the code:
> 
> - Load UINT8 sRGB jpg file as ImageBuf instance,
> - Remove sRGB from input ImageBuf using pow() <<< this creates the quantize 
> issue

        * I think you want to pow to 2.2, not 0.45, in order to linearize an
          sRGB value
        * do the pow *into* a float buffer to preserve full precision, rather
          than in-place or into another UINT8 buffer

> - "over" operation into HALF ImageBuf Instance.

If RAM is not a limiting issue, I recommend that all "internal" ImageBufs be 
FLOAT. You won't lose precision, and also all the math will be much faster, 
since math on non-float buffers (including HALF) is converted to float for 
every individual math operation, then converted back to the buffer format to 
store it.
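
For example (just a sketch -- I'm making up the variable names since I haven't
seen your code; fg_linear and bg_linear stand for your already-linearized
foreground and background):

        compspec = OpenImageIO.ImageSpec (fg_linear.spec().width,
                                          fg_linear.spec().height,
                                          fg_linear.nchannels,
                                          OpenImageIO.FLOAT)
        comp = OpenImageIO.ImageBuf (compspec)
        OpenImageIO.ImageBufAlgo.over (comp, fg_linear, bg_linear)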

> - write out the HALF ImageBuf Instance as .exr

Do the math in float, convert to HALF just as part of the output step.
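
For example, if your final float composite is the comp buffer from the sketch
above:

        comp.set_write_format (OpenImageIO.HALF)   # affects only the file, not the in-memory pixels
        comp.write ("slate.exr")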


> Here are few questions:
> 
> 1) Is there a way to load a UINT8 file, and load it straight into HALF linear 
> space ImageBuf without having to convert/process the UINT8 ImageBuf?

Yes,

        halfbuf = OpenImageIO.ImageBuf ("myfile.jpg")
        # Note: it didn't really read the file yet. It will do so lazily,
        # when you do an operation that requires the pixel values. But you
        # can force a read before then, and when you do so, there is the
        # option to convert the data type:
        halfbuf.read (force=True, convert=OpenImageIO.HALF)

Though, like I said earlier, I actually recommend forcing read into a FLOAT 
buffer rather than HALF.

Note that this converts to half (or float), but it doesn't linearize. You'll 
still need to do the pow or colorconvert.
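
If you go the colorconvert route, it would be something along these lines (a
sketch; the exact color space names depend on your OCIO config):

        srgbbuf = OpenImageIO.ImageBuf ("myfile.jpg")
        srgbbuf.read (force=True, convert=OpenImageIO.FLOAT)
        linbuf = OpenImageIO.ImageBuf ()
        OpenImageIO.ImageBufAlgo.colorconvert (linbuf, srgbbuf, "sRGB", "linear")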


> 2) Is there a way to convert ImageBuf format in-place? for instance I have 
> loaded a jpg sRGB into an ImageBuf instance, and I want to convert it to HALF. 
> Is there a way to do this without having to use the pixel/scanline/tile level 
> api? ( looking for an atomic operation here, since python can be slow for 
> pixel level operations. :) )

You can't convert data format in place, but you can do it by creating a half 
(or float) ImageBuf, then copying via ImageBuf.copy_pixels().
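
Something like this sketch (made-up names):

        spec = OpenImageIO.ImageSpec (srgbbuf.spec().width, srgbbuf.spec().height,
                                      srgbbuf.nchannels, OpenImageIO.FLOAT)
        floatbuf = OpenImageIO.ImageBuf (spec)
        floatbuf.copy_pixels (srgbbuf)   # converts to float as it copies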


> 3) ImageBufAlgo operations are assumed to be done in linear space, right?
> Also there is no implicit color space conversion, since it takes the buffers 
> "as is" and applies the math.

Correct.


> 4) Dithering doesn't appear to be working. When is dithering being applied? 
> on image buf write? during ImageBufAlgo operations?

Dithering only happens when you output a float buffer to a UINT8 file.
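
If you want it and aren't seeing it, make sure the request is in the spec used
for writing -- if I remember right, it's the "oiio:dither" attribute, something
like this sketch:

        floatbuf.specmod().attribute ("oiio:dither", 1)   # nonzero requests dither on output
        floatbuf.write ("out.jpg")   # jpg is UINT8, so the float pixels get dithered on conversion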


--
Larry Gritz
[email protected]


