I second Sampo about giving some more hints about Hilbert spaces,
shift-invariance, the Riesz representation theorem, etc.
Correct me if you said it somewhere and I didn't see it, but an
important /implicit/ assumption in your explanation is that you are
talking about "uniform bandlimited sampling".
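To make that uniform bandlimited case concrete, here is a tiny numerical sketch of Shannon's sinc interpolation formula (the sampling rate, tone frequency, and window length are arbitrary demo values, nothing taken from your series):

```python
import numpy as np

fs = 8.0                       # demo sampling rate (Hz)
f0 = 1.0                       # a tone well below Nyquist (fs/2 = 4 Hz)
n = np.arange(-200, 201)       # sample indices (finite window)
x = np.cos(2 * np.pi * f0 * n / fs)   # uniform samples of the tone

# Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)
t = np.linspace(-2.0, 2.0, 201)
x_rec = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

# The truncated sinc sum is close to the true waveform inside the window.
err = np.max(np.abs(x_rec - np.cos(2 * np.pi * f0 * t)))
print(err < 0.05)  # → True
```

Note how slowly the error shrinks as you widen the sample window: that is the infinite temporal support of sinc at work, which is exactly why it is not realizable as a causal FIR filter.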
Personally, my biggest enlightening moment regarding sampling was when
I read these two articles:
"Sampling—50 Years After Shannon"
http://bigwww.epfl.ch/publications/unser0001.pdf
and
"Sampling Moments and Reconstructing Signals of Finite Rate of
Innovation: Shannon Meets Strang–Fix"
https://infoscience.epfl.ch/record/104246/files/DragottiVB07.pdf
I wish I had discovered them much earlier during my signal processing
classes.
Talking about generalized sampling may seem abstract and beyond what
you are trying to explain. However, in my personal experience, seeing
sampling through the lens of approximation theory as 'just a projection'
onto a signal subspace made everything clearer by giving more perspective:
* The choice of basis functions and norms is wide. The sinc function
is just one of them, and not a causal, realizable one (infinite
temporal support).
* Analysis and synthesis functions don't have to be the same (cf.
biorthogonal wavelet filterbanks).
* Perfect reconstruction is possible without requiring bandlimitedness!
* The key concept is 'consistent sampling': /one seeks a signal
approximation such that it would yield exactly the same
measurements if it were reinjected into the system/.
* All that is required is a "finite rate of innovation" (in the
statistical sense).
* Finite-support kernels are easier to deal with in real life because
they can be realized (FIR) (reminder: time-limited <=> non-bandlimited).
* Using the L2 norm is convenient because we can reason about best
approximations in the least-squares sense and solve the projection
problem with linear algebra and the standard L2 inner product.
* Shift-invariance is even nicer since it enables /efficient/ signal
processing.
* Using sparser norms like the L1 norm enables sparse sampling and the
whole field of compressed sensing. But it comes at a price: we have
to use iterative projections to get there.
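Since 'consistent sampling' is the crux of the list above, here is a minimal numerical sketch of it. Everything in it is made up for illustration: boxcar analysis functions (a crude sample-and-hold sensor), linear B-spline synthesis functions, and a dense grid standing in for continuous time.

```python
import numpy as np

# Dense grid standing in for "continuous" time.
t = np.linspace(0.0, 1.0, 1000)
dt = t[1] - t[0]

# A test signal that is NOT bandlimited (it has kinks).
f = np.abs(np.sin(2 * np.pi * 3 * t)) + 0.3 * t

K = 16                                    # number of measurements
w = 1.0 / K
centers = np.arange(K) * w + w / 2

def box(t, c):                            # analysis: sample-and-hold sensor
    return (np.abs(t - c) <= w / 2).astype(float)

def tri(t, c):                            # synthesis: linear B-spline
    return np.maximum(0.0, 1.0 - np.abs(t - c) / w)

Phi = np.array([box(t, c) for c in centers])   # K x len(t)
Psi = np.array([tri(t, c) for c in centers])

# Measurements: inner products of f with the analysis functions.
c_meas = Phi @ f * dt

# Consistent reconstruction: pick coefficients d of f_hat = Psi^T d so
# that re-measuring f_hat reproduces c_meas -- a small K x K system.
A = Phi @ Psi.T * dt
d = np.linalg.solve(A, c_meas)
f_hat = Psi.T @ d

# Consistency: the approximation yields the same measurements as f.
print(np.allclose(Phi @ f_hat * dt, c_meas))  # → True
```

No bandlimitedness anywhere: f_hat won't equal f, but it is indistinguishable from f as far as this measurement device can tell, which is exactly the consistency criterion.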
All of this is beyond your original purpose, but from a pedagogical
viewpoint, I wish these two articles were systematically cited in a
"Further Reading" section at the end of any explanation regarding the
sampling theorem(s).
At least the Wikipedia page cites the first article and has a section
about non-uniform and sub-Nyquist sampling, but it's easy for a
newcomer to miss the big picture.
Here's a condensed presentation by Michael Unser for those who would
like to have a quick historical overview:
http://bigwww.epfl.ch/tutorials/unser0906.pdf
On 27/08/17 08:20, Sampo Syreeni wrote:
On 2017-08-25, Nigel Redmon wrote:
http://www.earlevel.com/main/tag/sampling-theory-series/?order=asc
Personally I'd make it much simpler at the top. Just tell them
sampling is what it is: taking an instantaneous value of a signal at
regular intervals. Then tell them that is all it takes to reconstruct
the waveform under the assumption of bandlimitation -- a high-falutin
term for "doesn't change too fast between your samples".
Even a simpleton can grasp that idea.
Then if somebody wants to go into the nitty-gritty of it, start
talking about shift-invariant spaces, eigenfunctions, harmonic
analysis, and the rest of the cool stuff.
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp