On 2014-03-27, robert bristow-johnson wrote:
> the *sampling* function is periodic (that's why we call it "uniform
> sampling"), but the function being sampled, x(t), is just any
> reasonably "well-behaved" function of t.
Ah, yes, that much is true. But in fact, if you look a bit further,
actually the uniformity isn't a requirement. It only makes the proof
easier and the translational symmetry that goes with it was the original
simplification which enabled the theory to be discovered. In reality, we
now have a proof somewhere in the compressed sensing literature which
says that the sampling instants are almost completely irrelevant as far
as the invertibility of the representation goes. All that matters is the
bandlimit. And in fact even the characterization of the Nyquist
frequency usually given is wrong: you don't have to sample at twice the
highest frequency present, but in fact only at twice the frequency of
the total support width of the spectrum, even if the support is pretty
much arbitrarily chopped up over many frequencies. And if you don't know
where the support is, then another theorem says twice the critical
frequency again suffices.
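A quick numerical sketch of that multiband claim (all frequencies, rates and durations here are invented for illustration, not taken from the thread): the spectrum occupies only 10 Hz of total support up near 100 Hz, and uniform sampling at 25 Hz, far below twice the 110 Hz band edge but above twice the support width, still recovers the signal once the band occupancy is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multiband signal: spectrum confined to [100, 110] Hz,
# so the total support width is 10 Hz.  We sample at only 25 Hz --
# far below 2 * 110 Hz, but above 2 * 10 Hz.
freqs = np.array([101.0, 103.0, 106.0, 109.0])   # known band occupancy
amps = rng.normal(size=freqs.size)
phis = rng.uniform(0, 2 * np.pi, size=freqs.size)

def signal(t):
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phis))

fs = 25.0                         # sampling rate, "sub-Nyquist" for 110 Hz
t_s = np.arange(0, 8, 1 / fs)     # 200 uniform samples
y = signal(t_s)

# The model is linear in the in-band sinusoid coefficients, so
# reconstruction is a least-squares solve.
A = np.hstack([np.cos(2 * np.pi * freqs * t_s[:, None]),
               np.sin(2 * np.pi * freqs * t_s[:, None])])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the fitted model on a dense grid and compare with the truth.
t_d = np.linspace(0, 8, 5000)
A_d = np.hstack([np.cos(2 * np.pi * freqs * t_d[:, None]),
                 np.sin(2 * np.pi * freqs * t_d[:, None])])
err = np.max(np.abs(A_d @ coef - signal(t_d)))
print(err)   # tiny: the 10 Hz support width set the rate, not the 110 Hz edge
```

The one thing that can go wrong is an unlucky sample rate that folds two occupied bands onto each other; here the aliases land at distinct baseband frequencies, so the system stays invertible.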
All that stuff follows from the weird and wonderful properties of
complex analysis. When you impose a bandlimit, you at the same time make
your signal analytic. That is a stupendously strong condition slipped in
through the back door, and makes the class of bandlimited signals
exceedingly rigid. In the strict sense, they have pretty much no
genuinely local properties, but instead their information content is
spread out over all of them, in both time and in frequency. As a result,
not only are the signals so rigid that their dimensionality as a
function space drops from a double continuum to a discrete one, the
drop happens in a manner which lets you reconstitute the signal from pretty much
any sufficient number of samples either in the frequency or in the
temporal domain, no matter where they lie. Taking a million samples
within some second now, in pretty much any arrangement, theoretically
lets you perfectly reconstruct a bandlimited signal into the end of
time. (More exactly, as long as the rate of innovation of the signal is
lower than the rate of information gathered by the sampling process, perfect
reconstructibility is guaranteed. Though the interpolation formulae
which result can be pretty horrendous.)
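That reconstruct-from-anywhere rigidity can be seen in a toy model (every parameter below is an arbitrary pick for the demo): a real trigonometric polynomial with harmonics up to K on a period has only 2K + 1 degrees of freedom, so enough samples in general position pin it down no matter how they are crammed together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "bandlimited" signal: harmonics up to K on period T, hence only
# 2K + 1 degrees of freedom.
T, K = 1.0, 5
ks = np.arange(-K, K + 1)
c = rng.normal(size=ks.size) + 1j * rng.normal(size=ks.size)
c = c + np.conj(c[::-1])            # Hermitian symmetry -> real signal

def x(t):
    return np.real(np.exp(2j * np.pi * np.outer(t, ks) / T) @ c)

# Observe 40 samples at *random* instants crammed into [0, 0.4] ...
t_s = rng.uniform(0.0, 0.4, size=40)
A = np.exp(2j * np.pi * np.outer(t_s, ks) / T)
c_hat, *_ = np.linalg.lstsq(A, x(t_s).astype(complex), rcond=None)

# ... and reconstruct on [0.6, 1.0], far from anything observed.
t_far = np.linspace(0.6, 1.0, 1000)
err = np.max(np.abs(np.real(np.exp(2j * np.pi * np.outer(t_far, ks) / T) @ c_hat)
                    - x(t_far)))
print(err)   # negligible: the clustered samples determine the whole signal
```

The conditioning of that least-squares system is exactly where the "horrendous interpolation formulae" live: it degrades fast as the observation window shrinks relative to the period.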
That's a slightly more unnerving way to put Ethan's earlier point:
technically you can't fully satisfy the bandlimiting condition. His
rationale was a bit different and didn't sound too bad, but this one's
really dehumanizing: real bandlimitation implies perfect
reconstructibility of events which have yet to happen, so that no finite
delay process can even theoretically produce a truly bandlimited signal,
by any process at all. But of course as Ethan explained, in the square
norm sense you can easily approach that situation, to the degree that
you don't have to worry about it in practice, just by intelligently
cutting off the tails of the sinc interpolation kernel in time.
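To put a number on that tail-cutting: a sketch (signal frequencies and window widths invented) of reconstructing one off-grid value from uniform samples with a time-truncated sinc kernel, where widening the kept portion of the kernel shrinks the error.

```python
import numpy as np

# Reconstruct x(t0) from uniform samples using a sinc kernel truncated
# in time; the wider the kept window, the smaller the error.
n = np.arange(-2000, 2001)                # sample grid (normalized rate 1)

def x(t):
    # bandlimited test signal, well inside the Nyquist band
    return np.cos(2 * np.pi * 0.11 * t) + 0.5 * np.sin(2 * np.pi * 0.31 * t)

t0 = 0.37                                 # off-grid instant to reconstruct

def reconstruct(half_width):
    m = n[np.abs(n - t0) <= half_width]   # keep only nearby samples
    return np.sum(x(m) * np.sinc(t0 - m))

errs = [abs(reconstruct(w) - x(t0)) for w in (8, 64, 512)]
print(errs)   # error falls as the truncated kernel is widened
```

In practice one would also window the truncated kernel rather than cut it rectangularly, which buys a much faster error decay for the same length.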
>> Furthermore, it generalizes to settings where periodicity isn't even an
>> option.
> oh, it outa be an *option*. we know how to take the Fourier transform
> of a sinusoid. (it's not square integrable, but we hand-wave our way
> through it anyway.)
Those situations have to do with abstract harmonic analysis over groups
other than the real numbers. The addition operator there doesn't have to
have an interpretation as a shift like it has with the real line. Thus,
periodicity as a concept doesn't make much sense there either.
> back, before i was banned from editing wikipedia (now i just edit it
> anonymously), [...]
Jesus, your case went as far as the arbitration committee. What the hell
did you do? Given gun control and the like in the record, was it the age
old mistake of going full libertarian? If you don't mind my asking? ;)
> [...] i spelled out the mathematical basis for the sampling and
> reconstruction theorem in this version of the page:
> https://en.wikipedia.org/w/index.php?title=Nyquist%E2%80%93Shannon_sampling_theorem&oldid=70176128
> since then two particular editors (Dicklyon and BobK) have really made
> the mathematical part less informative and useful. they just refer you
> to the Poisson summation formula as the mathematical basis.
Not good. That article is in dire need of TLC. While you can logically
make it about Poisson summation, historically I seem to remember at
least Nyquist's signalling work was independent of it. Plus the text as
it stands really has nigh zero pedagogical value, compared to what you'd
expect to find e.g. in Britannica.
> the only lacking in this proof is the same lacking that most
> electrical engineering texts make with the Dirac delta function (or
> the Dirac comb, which is the sampling function). to be strictly
> legitimate, i'm not s'pose to have "naked" Dirac impulses in a
> mathematical expression.
Naked deltas, combs, beds of nails, all of them are perfectly fine as
long as you remember that they're functionals, not functions. So for
instance, it's all well and good to e.g. multiply them absent any hint
of (test) functions, as long as their singular supports are disjoint.
That sort of thing BTW is why the thing about tempered distributions is
not just handwaving. They actually have structure and properties you
need to know if you want to get continuous time Fourier analysis in
full. And in particular if you want to be proficient in solving ODE's in
the Laplace domain. That shit don't fly in all its generality and beauty
unless you're at ease with the full calculus of naked distributions, so
that you can encode arbitrary ODE's as distributional convolution
kernels, and so on.
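One concrete face of "an ODE as a distributional convolution kernel", sketched with invented parameters: the ODE y' + a·y = x has the impulse response h(t) = exp(-a·t) for t ≥ 0 (its Green's function), so convolving any input with h solves the ODE. Here that is checked numerically against the closed-form step response.

```python
import numpy as np

# Impulse response of y' + a*y = x, i.e. its Green's function on t >= 0.
a, dt = 2.0, 1e-3                     # arbitrary rate constant and grid
t = np.arange(0.0, 5.0, dt)
h = np.exp(-a * t)                    # convolution kernel of the ODE
x = np.ones_like(t)                   # unit step input
y = np.convolve(x, h)[:t.size] * dt   # discretized convolution solves the ODE
y_exact = (1.0 - np.exp(-a * t)) / a  # textbook step response
err = np.max(np.abs(y - y_exact))
print(err)   # only discretization error remains
```

Feeding in a nascent delta instead of the step would, in the same way, return h itself, which is the distributional statement that h is what the delta forcing "evaluates" to.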
> i am simply treating Dirac impulses just like we do for the nascent
> delta functions of very tiny, but non-zero width.
That's also fully kosher once you grasp the abstract machinery. The
formal argument for why you're allowed to do that is that test functions
are dense in the space of distributions. That means that each and every
distribution can be arbitrarily well approximated in the weak topology
by a convergent sequence of C-infinity functions. Thus all
that you're actually doing with those nascents of yours is leaving the
final passage to the limit implicit.
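That implicit passage to the limit is easy to watch numerically (the Gaussian widths and the test function below are arbitrary picks): pairing a narrowing nascent delta with a smooth test function converges to evaluation at zero.

```python
import numpy as np

# Nascent delta: a unit-mass Gaussian whose width eps shrinks to zero.
# Pairing it with a smooth test function tends to evaluation at 0.
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

def pair(eps, f):
    g = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    return np.sum(g * f(x)) * dx      # Riemann-sum pairing <g_eps, f>

for eps in (0.1, 0.01, 0.001):
    print(pair(eps, np.cos))          # -> cos(0) = 1.0 as eps shrinks
```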
That's BTW one common way of seeing why distributions really are
generalized functions and not some arbitrarily exotic set of structures
like the full dual of R. As far as mathematical objects go, they're
actually pretty tame, domesticated and all-round benign, even if making
the idea exact calls for annoying amounts of machinery, in the form of
Schwartz spaces and whatnot. In that regard the Wikipedia article in
fact gets it mostly right, using another characterization starting with
continuous functions (obviously coming from the theory of classical
mixed probability distributions, just as the name of the construct
does).
> the Dirac delta is, strictly speaking, not really a "function", as the
> mathematicians would put it. strictly speaking, if you integrate a
> function that is zero "almost everywhere", the integral is zero, but
> we lazy-ass electrical engineers say that the Dirac delta is a
> "function" that is zero everywhere except the single point when t=0
> and we say the integral is 1.
Here some background in probability helps a lot. There you're already
familiar with the fact that you can represent probability distributions
in two forms: the probability density function, and the cumulative
distribution function, related to each other by differentiation and (here,
normalized so that you don't even have an integration constant in
there) antidifferentiation.
The whole original reason for retaining both those representations is
that once you start mixing discrete and continuous stuff within the same
framework, using functions alone makes that equivalence of
representations break down. You can't differentiate a cumulative
distribution function which is discontinuous, even though the discontinuity
lets you systematically represent and operate on discrete concentrations of
probability mass. Once you get how that works, and what it historically
led to, distributions in general feel extremely natural: they're
just the minimum closed system of function-like objects which preserves
the PDF-CDF equivalence, even under iterated differentiation, summation,
and even limited forms of multiplication (that actually gets you into
things like Colombeau algebras, which are a notch beyond in the
machinery department). Plus of course all of this is pretty much the
same thing, just from a different angle, that is handled by measure
theory.
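Here is that PDF-CDF equivalence surviving a mixed distribution, in numbers (the distribution is an arbitrary example): half a point mass at zero plus half a uniform on [0, 1], with the point mass written as a nascent delta. Integrating the generalized density reproduces the discontinuous CDF, jump included.

```python
import numpy as np

# Mixed distribution: P = 0.5 * (point mass at 0) + 0.5 * Uniform[0, 1].
x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
eps = 1e-3
delta = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
pdf = 0.5 * delta + 0.5 * ((x >= 0) & (x <= 1))   # generalized density
cdf = np.cumsum(pdf) * dx                          # integrate back to the CDF
jump = np.interp(0.01, x, cdf) - np.interp(-0.01, x, cdf)
print(cdf[-1], jump)   # total mass ~ 1, jump across x = 0 ~ 0.5
```

A plain function could never produce that 0.5 jump by integration; the delta term is exactly what closes the system.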
Once you grasp that, everything just clicks into place and suddenly
there's absolutely nothing magic or inconvenient about Deltas or even
more exotic distributions like the dipole (the negative of the
derivative of the Delta). They just work and make your life *much*
easier than you ever had it with plain old functions.
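The dipole can be tested the same nascent way (width and test function below are arbitrary choices): pairing -delta' with a smooth f picks out f'(0), here with f = sin so f'(0) = 1.

```python
import numpy as np

# Dipole = -(derivative of the delta).  Pairing <-delta', f> = f'(0),
# approximated with a narrow nascent Gaussian.
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
eps = 1e-2
g = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
dipole = -np.gradient(g, dx)          # -(d/dx) of the nascent delta
val = np.sum(dipole * np.sin(x)) * dx
print(val)   # -> f'(0) = 1.0 as eps -> 0
```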
--
Sampo Syreeni, aka decoy - [email protected], http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp