Caveman wrote:



Toralf Lund wrote:

I think the real situation is that the camera does not capture according to the sampling theorem, i.e. the image contains frequency components well above half of your sample rate, so you're not going to be able to reproduce the input accurately (according to Nyquist).


I mentioned the "ideal" case. If we want to talk about reality, you have the lens and an optical low-pass filter sitting in front of the sensor. The manufacturer is supposed to combine the MTF of the lens with that of the filter so that the signal reaching the sensor is band limited.


So what you should be able to reconstruct using an ideal sensor would be the band-limited image formed on the sensor plane, not the real image in the outside world.

And no matter which way you look at it, you cannot extend the bandwidth, which is why I say interpolation doesn't change the resolution.



But then we have the sensor itself. The photosites are not infinitely small, so you don't have ideal sampling. However, you could approximate the effect of the finite size of the photosites as yet another low-pass filter. Then you have sensor and circuitry noise, which gives you yet another approximation. Then you have quantization errors. Etc.
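To make the "photosite as low-pass filter" idea concrete: averaging the light over a photosite of finite width acts like a box filter, whose MTF is the familiar |sinc| shape. A minimal sketch (the unit-width/frequency convention here is my own assumption, chosen only to show the trend):

```python
import math

def box_mtf(f, width):
    """MTF of averaging over a photosite of the given width: |sinc(f*width)|.
    A finite photosite thus acts as one more low-pass filter on the image."""
    x = math.pi * f * width
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Wider photosites attenuate a given spatial frequency more strongly:
print(box_mtf(0.25, 1.0))  # ~0.90
print(box_mtf(0.25, 2.0))  # ~0.64
```

So the larger the photosite relative to the detail you want, the more the high frequencies are rolled off before sampling even happens.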


So in the end you get approximate data that is more or less close to an ideal sampling of an ideally low-pass filtered image.


Are you sure that's right? Isn't the interpolation more about guessing what values you would have got, if sampling at a higher rate - by assuming the signal (pixel data) develops according to a certain well-known function?


It's not wild guessing, they are approximations of the real formula.

You could use that formula and it would give you "the perfect guess", i.e. reconstruction. However, it is very computationally intensive, and since you're starting from a non-ideal set of samples, the benefits would not be that great.
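The "real formula" being referred to here is presumably the Whittaker-Shannon sinc reconstruction, which is exact for an ideally band-limited, ideally sampled signal. A rough 1-D sketch shows why it's expensive: every output point sums over *all* samples.

```python
import math

def sinc_reconstruct(samples, x):
    """Whittaker-Shannon reconstruction: estimate the signal value at a
    (possibly non-integer) position x from ALL integer-spaced samples.
    Cost is O(N) per output point, hence 'computationally intensive'."""
    total = 0.0
    for n, s in enumerate(samples):
        t = x - n
        # sinc(t) = sin(pi*t) / (pi*t), with sinc(0) = 1
        k = 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
        total += s * k
    return total

samples = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
# At the sample points themselves, the reconstruction is exact:
print(sinc_reconstruct(samples, 3))  # -1.0 (up to rounding)
```

The practical methods discussed below (bicubic, bilinear, nearest neighbour) can be seen as cheap local approximations to this global sum.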

I'm not sure I understand why we have to drag Nyquist into this. Maybe I'm too simplistic, but I really like to see interpolation as just a way of giving an estimate of what additional data might look like, without thinking about bandwidths etc. I'd much rather say "Newton" instead, if I had to mention a name...


Mathematically, interpolation is the estimation of a function based on certain known function values. More precisely, a function value f(x) is called an *interpolation* if there are a number of points x1 ... xn (where xn > x1) for which the function values are known, and x lies between x1 and xn. If x is outside that range, f(x) is an *extrapolation*.

The point is, f(x) cannot be accurately determined, because there is an infinite number of functions that fit the known values. However, it has been shown that a polynomial is often a good approximation: if you set up the (n-1)'th degree polynomial that gives the correct function values at your known points x1 ... xn / f(x1) ... f(xn), you will often get a good estimate of the function value at some other point by applying the same polynomial - but again, it's an *estimate*; you can't actually know if you're right, and there is no such thing as the "real formula". And one challenge is choosing the value of n.

In pixel interpolation, you essentially treat the data as separate sets of measurements approximated by separate sets of functions, and the trick to it all is knowing how to split up the data, or if you like, what degree polynomial to assume.
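Since "Newton" was mentioned: the unique polynomial through n points can be built with Newton's divided differences. A minimal sketch:

```python
def newton_interpolate(xs, ys, x):
    """Fit the unique polynomial of degree n-1 through the n known points
    (xs[i], ys[i]) using Newton's divided differences, then evaluate it
    at x. Inside [xs[0], xs[-1]] this is interpolation; outside, it is
    extrapolation."""
    n = len(xs)
    coef = list(ys)
    # build divided-difference coefficients in place
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # Horner-style evaluation of the Newton form
    result = coef[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Three points taken from y = x^2; the degree-2 fit recovers it exactly:
print(newton_interpolate([0, 1, 3], [0, 1, 9], 2))  # 4.0
```

Of course, as the text says, the points could equally well have come from some entirely different function - the polynomial is an estimate, nothing more.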

- the "real" formula computes level at point x,y based on the values of *all* the samples of the image

What you are talking about here is probably a polynomial of degree n-1, where n is the number of pixels or samples. It might give you good results, but it's still an approximation. And I'm not sure you can automatically assume that the result will be more correct than what you get by using lower-degree polynomials within smaller regions.



- bicubic computes level at point x,y based on the values of the samples in a 4x4 vicinity of point x,y (and discards from computation all the other samples)

That would be a 3rd degree polynomial (in 2 dimensions.)
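To illustrate the 3rd-degree case in one dimension: a common choice of cubic through four consecutive samples is the Catmull-Rom form (one kernel among several used for "bicubic"; the choice here is an assumption, not necessarily what any particular camera or program uses). Bicubic image interpolation applies such a cubic along rows, then once more along the resulting column, covering the 4x4 neighbourhood.

```python
def cubic_1d(p0, p1, p2, p3, t):
    """Catmull-Rom cubic: a 3rd-degree polynomial through 4 consecutive
    samples, evaluated at fractional offset t in [0, 1] between p1 and p2."""
    return p1 + 0.5 * t * (p2 - p0 + t * (2 * p0 - 5 * p1 + 4 * p2 - p3
                                          + t * (3 * (p1 - p2) + p3 - p0)))

# t=0 returns p1 exactly, t=1 returns p2 exactly:
print(cubic_1d(1.0, 2.0, 4.0, 3.0, 0.0))  # 2.0
print(cubic_1d(1.0, 2.0, 4.0, 3.0, 1.0))  # 4.0
```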



- bilinear computes level at point x,y based on the values of the samples in a 2x2 vicinity of point x,y

Yes, 1st degree or linear interpolation. It actually gives you good results in many cases, and the effect is rather similar to that of letting adjacent pixels blend into each other...
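The 2x2 case is simple enough to write out in full - linear blends along one axis, then along the other (a sketch; variable names are my own):

```python
def bilinear(q00, q10, q01, q11, tx, ty):
    """Bilinear interpolation from the 2x2 neighbouring samples.
    (tx, ty) are the fractional offsets of the target point in the cell."""
    top = q00 * (1 - tx) + q10 * tx        # interpolate along x, top row
    bottom = q01 * (1 - tx) + q11 * tx     # interpolate along x, bottom row
    return top * (1 - ty) + bottom * ty    # then interpolate along y

# Centre of a cell with corner values 0, 0, 4, 4 -> their average:
print(bilinear(0.0, 0.0, 4.0, 4.0, 0.5, 0.5))  # 2.0
```

The averaging behaviour is exactly the "pixels blending into each other" effect mentioned above.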



- nearest neighbour computes level at point x,y based on the value of 1 pixel (it just copies the value of the closest sample)
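And the degenerate case, for completeness (a sketch in one dimension):

```python
def nearest_neighbour(samples, x):
    """Copy the value of the closest sample; no new values are invented."""
    i = int(round(x))
    i = max(0, min(len(samples) - 1, i))  # clamp to valid indices
    return samples[i]

print(nearest_neighbour([10, 20, 30], 1.4))  # 20
```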


So now you have an idea of what the approximations are and how they are done.



