        > Wait a second! How will the camera know if 
        > 123, 123, 123 is really a white fence in the 
        > shade or a grey card in full sunlight?  

        It doesn't consider any single reading in isolation.  It looks 
        at all of the data from the scene and finds the most 
        significant pattern.  

        Only that pattern matters; the actual value of any one piece 
        of data is irrelevant.  What counts is how each piece of data 
        relates to the rest of the data from the scene.
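
        To make that concrete, here's a toy sketch (mine, not Nikon's 
        actual algorithm) of why a reading of 123, 123, 123 is 
        ambiguous on its own but not in context.  The scene values 
        are invented:

            def relative_pattern(segments):
                """Express each segment's brightness relative to the scene mean."""
                lums = [sum(rgb) / 3 for rgb in segments]
                mean = sum(lums) / len(lums)
                return [round(l / mean, 2) for l in lums]

            # The same 123,123,123 patch in two very different scenes:
            fence_in_shade = [(123, 123, 123), (40, 45, 50), (35, 40, 44)]
            card_in_sun    = [(123, 123, 123), (210, 215, 220), (200, 205, 212)]

            print(relative_pattern(fence_in_shade))  # [1.78, 0.65, 0.57] -> bright patch, dark scene
            print(relative_pattern(card_in_sun))     # [0.68, 1.19, 1.13] -> dark patch, bright scene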


        >>  In the F4, the finder had mercury switches which 
        >>  changed the bias on the metering cells when the
        >>  camera was rotated.  In the F5 if the meter 
        >>  detects that the top half is blue (r,g,b values 
        >>  within certain parameters) it assumes this is the 
        >>  sky, either the long or the short side of the frame.

        >  Are you sure the F5 doesn't use a similar switch?  

        The F5 doesn't have a switch.  It was trained on images in 
        both horizontal and vertical formats, so it can recognize 
        camera orientation in the cases where it makes a difference.
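
        As a rough illustration of how blue-sky detection could double 
        as an orientation cue, here is a toy heuristic.  The 
        thresholds and grid layout are my guesses; the real F5 logic 
        isn't public:

            def looks_like_sky(rgb):
                """Sky-ish: blue dominates and the segment is fairly bright."""
                r, g, b = rgb
                return b > r and b > g and b > 100

            def probable_sky_edge(grid):
                """Return which edge of a segment grid reads as sky, if any.

                grid is a list of rows of (r, g, b) tuples.  A blue top
                row suggests the camera is level; a blue side column
                suggests it was rotated to vertical.
                """
                edges = {
                    "top":    grid[0],
                    "bottom": grid[-1],
                    "left":   [row[0] for row in grid],
                    "right":  [row[-1] for row in grid],
                }
                for name, segments in edges.items():
                    if sum(looks_like_sky(s) for s in segments) / len(segments) > 0.8:
                        return name
                return None

            # A 3x4 grid whose left column is blue: probably a vertical shot.
            grid = [[(90, 130, 220)] + [(120, 110, 90)] * 3 for _ in range(3)]
            print(probable_sky_edge(grid))  # left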


        > What happens if I am taking an urban landscape that 
        > has a vertical patch of blue at the side?

        It depends on the content of those 30,000 images, but 
        the blue at the side shouldn't be enough to override 
        the rest of the information available.


        >  Let's see: 16 bit color, 335 sensors... that gives 
        >  us about 1 kbyte of info per picture.  Now if we 
        >  have a database reference of 30,000 pictures, we 
        >  need about 30 megabytes of ROM to hold all this 
        >  info (assuming no compression).  I don't think so.  

        Correct.  I suppose Nikon calls it a 'database' because they 
        think it's easier for people to understand.  

        The camera actually has a neural network that was trained 
        by feeding it 30,000 different scenes, telling it the correct 
        compensation for each scene.  All that's in the camera now is 
        the trained neural net -- there's no scene database.  The net 
        accepts the data from the current scene as input, and outputs 
        an exposure value based on the patterns it recognizes in the 
        data.  
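
        That's also why the quoted back-of-the-envelope objection is 
        right about a database but doesn't apply to a trained net: 
        the net only has to store its weights.  A small sketch, with 
        layer sizes and precision that are pure guesses on my part:

            inputs  = 335         # metering segments, from the quoted estimate
            hidden  = 24          # hypothetical hidden-layer size
            outputs = 1           # one exposure value

            weights  = inputs * hidden + hidden      # first layer + biases
            weights += hidden * outputs + outputs    # output layer + bias
            print(weights)                           # 8089 parameters

            print(f"{weights * 2 / 1024:.1f} kB")    # ~15.8 kB at 16 bits each

            # versus storing the 30,000 raw scenes at the quoted ~1 kB apiece:
            print(f"{30_000 * 1024 / 2**20:.1f} MB") # ~29.3 MB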

        You don't get to choose which part of the pattern it considers
        most significant, so it won't always do what you want.  


        -Don
