On 3/27/14 2:20 PM, Doug Houghton wrote:
Some great replies; they give me a lot to think about

Terms like "well behaved" when applied to the "function" make me wonder what stipulations might be implied by the language that you'd have to be a formal
mathematician to interpret.

i'm not so terribly worried about the existence of audio or music signals that are not sufficiently "well behaved"

  As an example, I don't even know what the
intrinsic properties of a "function" may be in this context.

the context is audio and music signals. the end receptacle of these signals is our ears and brains. i'm pretty sure that bandwidth restrictions apply, and that *really* nails down the "well-behaved": they are continuous-time, finite-power signals that are also bandlimited. whether it's bandlimited to 22.05 kHz or to 48 kHz or 96 kHz doesn't matter; that is a quantitative issue and doesn't change the validity nor the qualitative conditions of the theorem.


Since it's an infinite series I suppose it doesn't really matter, given
enough time you could prove out any rational requirement? which is why you
can throw math at it.

yup. and then, when you get practical about reconstruction, you realize that the infinite series of sinc() functions turns into a finite approximation to the same thing. one approach to a finite sum is to truncate the sinc() function to a finite length. that is the same as applying a rectangular window (which is often the worst kind), so then you try the sinc() function windowed by a good window function. now that is a slightly different low-pass filter than the ideal brick-wall filter (which has a sinc() function for its impulse response). so then you investigate how bad it is from different points of view (usually a spectral POV over some frequencies of interest).
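to make that concrete, here's a minimal sketch (in Python with NumPy, which i'm assuming here) of interpolating between integer-spaced samples with a truncated, Hann-windowed sinc(). the tap count and window choice are just illustrative, not a recommendation:

```python
import numpy as np

def windowed_sinc_interp(x, t, n_taps=33):
    """interpolate samples x (taken at integer times n) at fractional time t,
    using a truncated sinc() kernel shaped by a Hann window."""
    half = n_taps // 2
    n0 = int(np.floor(t))
    acc = 0.0
    for k in range(n0 - half, n0 + half + 1):
        if 0 <= k < len(x):
            u = t - k                        # distance from this tap
            # Hann window over the kernel's finite support |u| < half + 1
            w = 0.5 + 0.5 * np.cos(np.pi * u / (half + 1))
            acc += x[k] * np.sinc(u) * w     # np.sinc(u) = sin(pi*u)/(pi*u)
    return acc
```

at an integer t this returns the sample exactly (all the other sinc() taps are zero there); in between, it's the windowed-sinc approximation to ideal reconstruction.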

  If it was just a bunch of random numbers that started
somewhere and stopped somewhere, I doubt anyone would be writing equations
that mean anything.

???

um, you can model *any* linear and time-invariant signal reconstruction problem (or "interpolation problem") as a specific case of a string of impulses, weighted with the sample values x[n], going into a particular low-pass filter. you can write equations for that, in both the time domain (you would use these equations to implement the interpolation) and in the frequency domain: what does this LPF do to the baseband signal? what does it do to the images?

so, even for polynomial interpolation (like Lagrange or Hermite), you can model it as convolution with an impulse response, and you can compute the Fourier transform of that continuous-time impulse response and see how good or how bad the frequency response is. how well does it kill the images, and how safe is it for your original signal?

you can write equations for that, and they mean something.
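for instance (a sketch, Python/NumPy assumed): linear interpolation convolves the impulse string with a triangle two samples wide, and the Fourier transform of that triangle is sinc^2(f), with f in cycles per sample. so you can read off directly how little it touches the baseband and how incompletely it kills the images:

```python
import numpy as np

def h_linear(f):
    """magnitude response of linear interpolation (triangular
    impulse response of width 2 samples); f in cycles/sample."""
    return np.sinc(f) ** 2     # np.sinc(f) = sin(pi*f)/(pi*f)

# baseband content at f = 0.1 is barely touched (about 0.97), but the
# first image region around f = 0.9 only gets knocked down to about
# 0.012 (roughly -38 dB) -- not a great image killer.
print(h_linear(0.1), h_linear(0.9))
```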

  I'd guess we would turn to statistics at that point to
supply some context.


but you can make some good guesses instead of doing this as a complicated statistical process



As a broad answer to questions posted in a couple of the replies, my
interest lies in improving my understanding of specifically what the SNST
proves, and the requirements for it to be valid.

take a look at that earlier wikipedia version to show you. if you ideally and uniformly sample, your spectrum is repeatedly shifted (by integer multiples of the sampling frequency; those shifted copies are called "images") and added together. to recover the original signal, you must remove all of the images, yet preserve the original (that's what the brick-wall LPF does). only if there is no overlap of adjacent images is it possible to recover the original spectrum. to make that happen, the sampling frequency must exceed twice the highest frequency component of the bandlimited signal. if it does not, images will overlap, and once you add two numbers together, it's pretty hard to separate them again. the overlapped portions of the images can be thought of as frequency components that "just happened" to land at those in-band frequencies. those frequency components are called "aliases".

take a look at


https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=217945915
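and a few-line demonstration of aliasing (a sketch, Python/NumPy assumed): sample a 7 kHz sine at Fs = 10 kHz and you get exactly the same sample values as a sign-flipped 3 kHz sine, because the image at 7 - 10 = -3 kHz lands in the baseband:

```python
import numpy as np

fs = 10_000.0                                # sampling frequency, Hz
n = np.arange(32)                            # sample indices
x7 = np.sin(2 * np.pi * 7_000.0 * n / fs)    # 7 kHz sine, above Nyquist
x3 = -np.sin(2 * np.pi * 3_000.0 * n / fs)   # its 3 kHz alias, negated
print(np.max(np.abs(x7 - x3)))               # essentially zero
```

after sampling, the two are indistinguishable -- no filter can tell you which one you started with.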


the sampling theorem is actually quite simple to express rigorously. at least it is, if you accept the EE notion of the Dirac delta function and don't worry so much about it "not really being a function", which is literally what the math folks tell us.
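in symbols (a sketch, writing T for the sampling period and Fs = 1/T), the EE version is just:

```latex
x_s(t) \;=\; x(t)\sum_{n=-\infty}^{\infty}\delta(t - nT)
\qquad\Longleftrightarrow\qquad
X_s(f) \;=\; \frac{1}{T}\sum_{k=-\infty}^{\infty} X\!\left(f - k F_s\right)
```

every term with k != 0 is one of the shifted images, and the no-overlap (no-aliasing) condition is exactly that X(f) = 0 for |f| >= Fs/2.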

--

r b-j                  [email protected]

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
