On Thu, Oct 1, 2009 at 9:53 AM, pierrelafran...@sympatico.ca <pierrelafran...@sympatico.ca> wrote:

> René, you put your finger exactly on the center of our question: why do
> cameras give 12, or even 14, bits per color if Windows, displays, and
> graphics cards are limited to 8 bits per color?
>
I'm not saying RGB888 is perfect for human color perception, but to answer
your question about why cameras record more than 8 bits per channel (like
with RAW files) while the display only goes to 8 bits per channel - the
basic reason is that people take what their cameras record and put it
through transforms, scalings, and things like that. Take James' example of
his pictures in the cavern: if you take a shot and then remap the data your
camera recorded to a different output range, you lose precision, and your
output data shows banding. So if you want to be able to do levels, color
remappings, or other photo manipulation effects, you need the extra bits in
your source, because you are going to lose bits when you apply those
effects/transforms.
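Here's a minimal sketch of that precision loss (Python + NumPy; the numbers
and names are just mine for illustration, not anyone's actual workflow). It
simulates a dark shot whose useful data sits in the bottom quarter of the
range, then applies a levels stretch to fill the full 8-bit output:

    import numpy as np

    # "True" scene values, occupying only the bottom 25% of the range.
    ramp = np.linspace(0.0, 0.25, 100_000)

    def stretch(levels):
        # What the camera recorded at this bit depth (quantized).
        q = np.round(ramp * (levels - 1)) / (levels - 1)
        # The levels adjustment: stretch the bottom 25% to the full range.
        out = np.clip(q / 0.25, 0.0, 1.0)
        # Final 8-bit display values.
        return np.round(out * 255).astype(np.uint8)

    print(len(np.unique(stretch(2**8))))    # 8-bit source
    print(len(np.unique(stretch(2**12))))   # 12-bit source

From the 8-bit source only about 65 of the 256 possible output values
survive the stretch (visible banding); from the 12-bit source all 256 do.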

So, assuming your display technology is optimized to present things across
the full range humans can perceive, it should be *expected* that an optimal
recording technology has higher precision in its recording than the output
device the final product is intended for.

Thinking that the output precision should match the source precision so you
get to "see" all that data is a flawed way of looking at it. The right way
to think about it is that your output technology should be able to express
the full range of human perception, and your source technology should have
enough precision to still express the full range of your output technology
after the filters, effects, and transforms have been applied.
