On 28/07/2016 12:04 AM, Ethan Fenn wrote:
    Because I don't think there can be more than one between any two
    adjacent sampling times.


This really got the gears turning. It seems true, but is it a theorem?
If not, can anyone give a counterexample?

I don't know whether it's a classical theorem, but I think it is true.

Define the normalized sinc function as:

sinc(t) := sin( pi t ) / (pi t)

sinc(0) := 1 (the singularity at t = 0 is removable), and the function is analytic everywhere.
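
For what it's worth, NumPy's numpy.sinc uses exactly this normalized convention, so the definition is easy to sanity-check:

```python
import numpy as np

# numpy.sinc is the normalized sinc: sin(pi t) / (pi t)
assert np.isclose(np.sinc(0.0), 1.0)   # removable singularity at t = 0
assert np.isclose(np.sinc(0.5), np.sin(np.pi * 0.5) / (np.pi * 0.5))
assert np.isclose(np.sinc(1.0), 0.0)   # zeros at every nonzero integer
print("normalized sinc checks pass")
```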

A bandlimited, periodically sampled discrete-time signal {x_n} can be interpolated by a sum of time-shifted normalized sinc functions, each centered at sample time n and scaled by the amplitude x_n. This procedure (Whittaker-Shannon interpolation) produces the continuous-time analytic signal x(t) induced by {x_n}. We want to know how many local extrema (direction changes) x(t) can have between two adjacent sample times n and n+1.
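
As a concrete sketch (assuming a unit sample period; the helper name sinc_interp is mine), the interpolation is just a weighted sum of shifted sincs:

```python
import numpy as np

def sinc_interp(x, t):
    """Whittaker-Shannon interpolation of samples x (taken at n = 0, 1, ...)
    evaluated at continuous times t, assuming a unit sample period."""
    n = np.arange(len(x))
    # x(t) = sum_n x_n * sinc(t - n); numpy.sinc is the normalized sinc
    return np.sinc(t[:, None] - n[None, :]) @ x

# The interpolant passes exactly through the samples:
x = np.array([0.0, 1.0, -0.5, 0.3])
t = np.arange(4.0)
print(sinc_interp(x, t))   # recovers [0.0, 1.0, -0.5, 0.3]
```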

Sinc is bandlimited: it has no frequency content at or above half the sample rate, fs/2 (the Nyquist frequency). A sum of time-shifted sincs is also bandlimited and therefore likewise has no frequencies at or above fs/2.
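
One way to see this numerically (a rough check, not a proof; the window length and the 0.6 cutoff below are arbitrary choices of mine) is to densely sample a short sum of shifted sincs and look at its FFT magnitude:

```python
import numpy as np

# Densely sample a sum of a few shifted sincs over a long window
dt = 0.01
t = np.arange(-500.0, 500.0, dt)
x = np.sinc(t) + 0.7 * np.sinc(t - 1) - 0.3 * np.sinc(t - 2)

# Approximate the continuous Fourier transform magnitude
X = np.abs(np.fft.rfft(x)) * dt
f = np.fft.rfftfreq(len(t), d=dt)

in_band = X[f < 0.5].max()
out_band = X[f > 0.6].max()   # only truncation leakage lands out here
print(f"max in-band magnitude:     {in_band:.3f}")
print(f"max out-of-band magnitude: {out_band:.3e}")
```

The out-of-band energy is essentially zero (what little remains is leakage from truncating the infinite sinc tails to a finite window).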

Now all you need to do is prove that a bandlimited analytic signal whose spectrum lies below fs/2 has no more than one direction change per sample period. I can't think how to do that formally right now, but intuitively it seems plausible: a pure sinusoid at exactly fs/2 has its extrema spaced exactly one sample period apart, and any lower frequency spaces them further apart, so a signal with no frequencies above the Nyquist frequency would not have time-domain peaks spaced closer than the sampling period.
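
Not a proof either, but here is a quick empirical probe (the trial count, number of partials, and grid spacing are all arbitrary choices of mine): synthesize random sums of sinusoids with frequencies strictly below fs/2 and count direction changes per unit sample interval on a fine grid:

```python
import numpy as np

def max_extrema_per_unit_interval(x, t):
    """Detect local extrema of the finely sampled signal x(t) as sign changes
    of its first difference, and return the largest number of extrema that
    falls inside any one unit-length sample interval [n, n+1)."""
    d = np.sign(np.diff(x))
    extrema_times = t[1:-1][d[1:] != d[:-1]]
    counts = np.bincount(extrema_times.astype(int), minlength=int(t[-1]) + 1)
    return counts.max()

# Probe with random signals bandlimited strictly below fs/2 = 0.5
rng = np.random.default_rng(0)
t = np.arange(0.0, 64.0, 1e-3)          # fine grid spanning 64 sample periods
worst = 0
for _ in range(100):
    f = rng.uniform(0.0, 0.49, 8)       # partial frequencies below Nyquist
    ph = rng.uniform(0.0, 2 * np.pi, 8)
    a = rng.uniform(0.1, 1.0, 8)
    x = (a[:, None] * np.cos(2 * np.pi * f[:, None] * t + ph[:, None])).sum(axis=0)
    worst = max(worst, max_extrema_per_unit_interval(x, t))

print("worst case over 100 random signals:", worst)
```

Random probing like this can only fail to find a counterexample, of course; it can't rule one out.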

Ross.



_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
