And you do this in Photoshop, right?   ]'-)

> You're confusing Bayer interpolation with gamma-encoding images.
> They're not the same thing.... Bayer interpolation takes the monochrome
> "image" taken by the sensor with alternating RGBG color filter masks and
> tries to recreate an actual 3-colors-per-pixel image from it.
> Gamma-encoding images is simply applying a logarithmic function to the data
> before quantizing.

Sorry, but that's not true. I know the difference between Bayer
interpolation and gamma-encoding. And gamma-encoding is not "simply
applying a logarithmic function" to the data before quantizing. It's a
bit more involved than that, although that's a fair first-order
approximation.
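
To put a number on "more involved": the standard encodings aren't a bare
log or power curve. Here's a rough sketch in Python (sRGB used as the
concrete example; the constants differ for other spaces) comparing the
published sRGB transfer curve, with its linear toe, against the plain
power law people usually mean by "gamma 2.2":

    # Sketch: sRGB-style gamma encoding vs. a bare power law.
    # The sRGB constants below are the published ones; other spaces differ.

    def srgb_encode(x):
        """Encode a linear value in [0, 1] with the sRGB transfer curve."""
        if x <= 0.0031308:
            return 12.92 * x                        # linear toe for deep shadows
        return 1.055 * x ** (1.0 / 2.4) - 0.055     # power segment

    def plain_power_encode(x, gamma=2.2):
        """The bare power law (the first-order approximation)."""
        return x ** (1.0 / gamma)

    for v in (0.001, 0.01, 0.18, 0.5, 1.0):
        print(v, round(srgb_encode(v), 4), round(plain_power_encode(v), 4))

The two curves agree closely through the midtones and diverge in the deep
shadows, which is exactly where the distinction starts to matter.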

Now that you explain what you meant more clearly, I understand exactly
what you're saying. But it is meaningless when it comes to RAW workflow
for someone using tools like Photoshop, and nearly all other production
RAW conversion tools, on Mac OS X or Windows. The theoretical advantages
of separating Bayer interpolation from gamma-encoding, keeping the data
in a linear representation, etc., are of no real significance when it
comes to producing pictorial photographs, when you have tools that do it
well combined with good color management and [EMAIL PROTECTED]
quantization space.

G


On Jan 7, 2007, at 3:00 PM, Cory Papenfuss wrote:

>> I'm not entirely certain what you are suggesting, Cory. What is "16-
>> bit gamma RGB"? I've never heard of that. And besides, what the
>> sensor captured is 12-bit linear data ... you can never have more
>> gradations than were there; all transformations will have losses,
>> mathematically speaking. There may be more values but they are
>> synthesized in interpolation.
>>
>       Typically, RGB data represents gamma-corrected RGB data, but it
> doesn't have to.  It just happens to be a logical extension of 8-bit RGB
> images which *HAVE* to be gamma-corrected in order to have enough
> gradations within the dynamic range.  If one takes the 12-bit Bayer data
> from the sensor and interpolates into 3 channels (RGB), quantizing to the
> closest level of 16-bits/channel, the result is a 16-bit LINEAR RGB file.
> If one further applies the logarithmic gamma processing to each of the
> channels, it's a "typical" 16-bit RGB image file.  If one then quantizes
> it to 8-bits, it's a typical RGB file.
>
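
For anyone following along, here is a rough numpy sketch of the pipeline
Cory describes above: requantize 12-bit linear to 16-bit linear, gamma-encode,
then cut down to 8 bits. A plain 2.2 power law stands in for the real transfer
curve, and the demosaic step is assumed to have already produced three
channels:

    import numpy as np

    # Stand-in for a demosaiced image: 3 channels of 12-bit linear values.
    linear12 = np.random.randint(0, 4096, size=(4, 4, 3))

    # Requantize 12-bit linear to 16-bit linear (0..4095 -> 0..65535).
    linear16 = np.round(linear12 * (65535.0 / 4095.0)).astype(np.uint16)

    # Gamma-encode each channel (plain 2.2 power law, for illustration only).
    encoded16 = np.round(65535.0 * (linear16 / 65535.0) ** (1.0 / 2.2)).astype(np.uint16)

    # Quantize down to 8 bits per channel: a "typical" RGB file.
    encoded8 = np.round(encoded16 / 257.0).astype(np.uint8)

Each step is a remapping of whatever the sensor actually recorded.
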
>       I often do the RAW conversion of my -DS pictures into a 16-bit
> linear TIFF.  I have icc-profiled my camera, so that if I enable the
> "color-manage display" option in cinepaint, it will gamma-correct the
> image I see in real-time.  Thus, the data and all processing done on it
> (levels, curves, WB, sharpening, etc) are all done on the *LINEAR* data...
> not the log data.  Here's a link to some pages where some guy has gone
> into depth on some of this stuff:
> http://www.aim-dtp.net/aim/evaluation/gamma_error/processing_space.htm
>
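
A quick way to see why the processing space matters for filtering operations
like blur, resizing, and sharpening: average two pixel values, which is the
core of any blur or resample, in linear space and in gamma-encoded space and
compare. A rough sketch, again with a plain 2.2 power law for illustration:

    # Averaging (the heart of blur/resample) in linear vs. gamma-encoded space.
    a, b = 0.05, 0.80                      # two linear scene luminances

    linear_avg = (a + b) / 2.0             # physically correct mix: 0.425

    def encode(x): return x ** (1.0 / 2.2)
    def decode(x): return x ** 2.2
    gamma_avg = decode((encode(a) + encode(b)) / 2.0)   # roughly 0.30

    print(linear_avg, gamma_avg)           # the gamma-space result is darker

The same arithmetic lands on different values depending on which space it is
done in, which is the sort of thing the aim-dtp pages linked above go into
at length.
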
>> You have to do gamma correction in RAW conversion to have a properly
>> rendered image, and data loss in RAW conversion is unavoidable: it is
>> mathematically impossible to do the conversion without it. The
>> process of interpolation (compression of high values and expansion of
>> low values to suit the curve normal to vision) will change original
>> values at the photosites into something else, and some data will be
>> lost in that transformation. Data loss isn't always bad, it is
>> actually necessary to the process; the goal is to lose as little
>> *significant* data as possible.
>>
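
A small sketch of why some of that loss is baked in (numpy, a plain 2.2 power
law, and the 8-bit case, where the effect is easiest to see): round-trip every
12-bit level through a gamma-encoded 8-bit file and count what survives.

    import numpy as np

    levels12 = np.arange(4096) / 4095.0                    # every 12-bit input level
    encoded8 = np.round(255.0 * levels12 ** (1.0 / 2.2))   # gamma-encode, quantize to 8 bits
    decoded = (encoded8 / 255.0) ** 2.2                    # decode back to linear
    survivors = np.unique(np.round(decoded * 4095.0)).size # far fewer than 4096

    print(survivors, "of 4096 original levels are still distinguishable")
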
>       You're confusing Bayer interpolation with gamma-encoding images.
> They're not the same thing.... Bayer interpolation takes the monochrome
> "image" taken by the sensor with alternating RGBG color filter masks and
> tries to recreate an actual 3-colors-per-pixel image from it.
> Gamma-encoding images is simply applying a logarithmic function to the data
> before quantizing.
>
>> Of course, moving to as large a data space and gamut as possible will
>> maximize what you keep and present the greatest number of options for
>> further editing. All my RAW conversion is done into ProPhoto RGB now,
>> the largest possible color gamut, represented as full [EMAIL PROTECTED]
>> RGB images. I only do downsample conversion to [EMAIL PROTECTED] and sRGB
>> gamut for web display, and all printing is color managed through the
>> appropriate profiles at time of printing.
>>
>       Likely done in a gamma-corrected 16-bit colorspace, but it doesn't
> *have* to be.  With 16-bits/channel, gamma-correction is more processing
> of the original RAW data than is necessary.
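
For what it's worth, the gradation argument is easy to put rough numbers on.
Counting how many code values are left for everything two or more stops below
clipping (linear values from 0 to 0.25), again with a plain 2.2 power law
standing in for the real encoding:

    # Distinct code values available two or more stops below clipping.
    def shadow_codes(bits, gamma_encoded):
        total = 2 ** bits
        if gamma_encoded:
            # fraction of codes below linear 0.25 after a 1/2.2 power encoding
            return int(total * 0.25 ** (1.0 / 2.2))
        return int(total * 0.25)

    print(shadow_codes(8, False))    # 8-bit linear:  64 codes
    print(shadow_codes(8, True))     # 8-bit gamma:   about 136 codes
    print(shadow_codes(16, False))   # 16-bit linear: 16384 codes

Which is a rough way of saying the same thing: 8-bit files need the gamma
encoding to keep the shadows from posterizing, while 16-bit linear data has
gradations to spare either way.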


-- 
PDML Pentax-Discuss Mail List
PDML@pdml.net
http://pdml.net/mailman/listinfo/pdml_pdml.net
