> So maybe it could use something akin to a floating-point representation -
> 12 bits of precision, and a few extra bits to indicate the sensitivity of
> any particular pixel.
>
> This isn't a new idea - Pixar experimented with floating-point linear
> response frame buffers back in the '80s, when they were trying to extend
> the dynamic range. It's a reasonable idea for that purpose, especially
> with noise-free synthetic images, but I'm not sure it would out-perform
> a simple 16-bit linear sensor in the real world.

Given the eye's logarithmic sensitivity to intensity, it actually makes a
lot of sense - at least as much as gamma-corrected fixed point. IIRC some
of the Linux utilities used in the movie industry for CG work (CinePaint,
VIPS/nip) have support for some of these mini-floating-point formats. I
think they've got 16-bit floats, 24-bit floats, etc.
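To make the logarithmic argument concrete, here's a minimal pure-Python sketch (function names and the 10-bit mantissa / 16-bit full-scale choices are just illustrative assumptions, not any camera's actual format). A float with a fixed-width mantissa keeps its *relative* quantization step roughly constant across the whole range, which is what a log-sensitive eye cares about, while a fixed-point linear code's relative error blows up in the shadows:

```python
import math

def quantize_minifloat(x, mantissa_bits=10):
    """Round x to a float with the given mantissa width (sign/exponent
    assumed wide enough).  Relative step ~= 2**-mantissa_bits everywhere."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e, with 0.5 <= m < 1
    scale = 2 ** (mantissa_bits + 1)      # implicit leading bit + mantissa
    return math.ldexp(round(m * scale) / scale, e)

def quantize_linear(x, bits=16, full_scale=65535.0):
    """Round x to a fixed-point linear code; absolute step is constant."""
    step = full_scale / (2 ** bits - 1)
    return round(x / step) * step

# Walk down four stops: the mini-float's relative error stays flat,
# the linear code's relative error grows as the signal drops.
for v in (60000.3, 6000.3, 600.3, 60.3):
    rf = abs(quantize_minifloat(v) - v) / v
    rl = abs(quantize_linear(v) - v) / v
    print(f"{v:8.1f}  minifloat rel err {rf:.1e}   linear rel err {rl:.1e}")
```

This is essentially the trick the 16-bit "half" float formats mentioned above exploit: spend a few bits on an exponent so precision follows the signal level instead of being anchored to full scale.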
I really doubt that dynamic range is a problem of the digital encoding. It
really seems to be a sensor limitation, judging by the "shadow noise" of
heavily stretched images. You can encode all the way down, but digitized
noise is still noise. Now... if the ANALOG SNR could be improved (say, by
an APS-sized sensor designed for ISO 50), then there might be some
benefit. As it is, 12 linear bits seems to fit the effective dynamic range
at ISO 200 pretty well (yielding approximately 8 bits of gamma-corrected
image).

-Cory

--
*************************************************************************
* Cory Papenfuss, Ph.D., PPSEL-IA                                       *
* Electrical Engineering                                                *
* Virginia Polytechnic Institute and State University                   *
*************************************************************************

--
PDML Pentax-Discuss Mail List
PDML@pdml.net
http://pdml.net/mailman/listinfo/pdml_pdml.net