On Wed, 24 Mar 1999 "Curfman, Donald (GEIS)" <[EMAIL PROTECTED]> wrote:

>       > Wait a second! How will the camera know if
>       > 123, 123, 123 is really a white fence in the
>       > shade or a grey card in full sunlight?
>
>       It doesn't consider any of the parameters separately.
>       It looks at all of the data and finds the most
>       significant pattern.
>
>       Only that pattern matters, the actual value of any one
>       piece of data is irrelevant.  Only how that piece of
>       data is related to the other data from the scene matters.


Yes, but people complain that their camera didn't render a white bird as white,
even though the bird may have covered only a few sensor elements.  The camera has
no way of knowing whether the bird is white in dim light or grey in bright light.
So this is why the camera makes 'mistakes'.
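The ambiguity can be sketched numerically: a sensor reading is roughly
proportional to surface reflectance times scene illumination, so two quite
different scenes can yield the identical value.  The figures below are
purely illustrative (chosen to land on 123), not from any real camera:

```python
# Illustrative sketch: sensor output ~ reflectance * illumination,
# mapped onto an 8-bit value. Numbers are hypothetical.
def sensor_value(reflectance, illumination, scale=255):
    # Clip the product to 1.0 to model sensor saturation.
    return round(min(reflectance * illumination, 1.0) * scale)

# A white fence (~90% reflectance) in shade...
white_in_shade = sensor_value(0.90, 0.536)
# ...versus an 18% grey card in light five times brighter.
grey_in_sun = sensor_value(0.18, 2.68)

print(white_in_shade, grey_in_sun)  # both come out as 123
```

From the reading alone the meter cannot tell the two apart; only the
relationship to the rest of the scene data can break the tie.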

>> What happens if I am taking an urban landscape that 
>> has a vertical patch of blue at the side?
>
>It depends on the content of those 30,000 images, but 
>the blue at the side shouldn't be enough to override 
>the rest of the information available.

Why not? How else would the meter know the orientation of the camera (assuming
that the other buildings are grey, or even worse, green)?
