>Not exactly. If you take the typical sampling formula, with equidistant
samples, you need them all.

Yeah, that's what we're discussing, isn't it?

>But in theory pretty much any numerable number of samples from any compact
interval will do.

Sure, but that's not going to help us with figuring out what comes out of
an audio DAC.

>Linearly? It dies off as 1/x.

Yeah, that's what I meant. Kind of informal, but "die off" was meant to imply
"this is what sits in the denominator."
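Concretely, |sinc(t)| = |sin(pi t)/(pi t)| <= 1/(pi |t|), and at half-integer
t the bound is attained - so 1/t is exactly the envelope. A quick numeric
sketch (assuming numpy; np.sinc is the normalized sinc):

```python
import numpy as np

# np.sinc(t) = sin(pi t) / (pi t), with sinc(0) = 1.
# At half-integer t, |sin(pi t)| = 1, so |sinc(t)| equals the
# 1/(pi |t|) envelope exactly - the decay really is ~1/t.
t = np.array([0.5, 1.5, 10.5, 100.5, 1000.5])
vals = np.abs(np.sinc(t))
envelope = 1.0 / (np.pi * np.abs(t))
print(np.max(np.abs(vals - envelope)))  # floating-point noise only
```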

>Not quite so. The proper way to say it is "when probed locally by nice
>enough test functions, the reconstruction works the same".

I'm not sure we're on the same page here - the statement you were replying
to was referring to the classical L2 sampling theorem stuff.
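To be explicit, the expansion I have in mind is the standard Shannon-Whittaker
one, for x bandlimited to |f| < 1/(2T), with convergence in L_2 (and in fact
uniformly):

```latex
x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,
       \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad
\operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```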

>The sinc convolution is just fine even in this setting.

??? The sinc convolution is not implementable in any setting - it's noncausal
with infinite support, so any realizable filter is only an approximation to it.
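What you can implement is a truncated, windowed, finite-delay stand-in. A
hedged sketch (the half-width, the Hann-style taper, and the test signal are
all arbitrary illustrative choices, assuming numpy):

```python
import numpy as np

def windowed_sinc_reconstruct(samples, t, half_width=32):
    """Approximate sinc reconstruction at time t (in sample units),
    using only samples within +/- half_width of t: a realizable,
    finite-delay approximation to the ideal sinc convolution."""
    n0 = int(np.floor(t))
    n = np.arange(max(0, n0 - half_width),
                  min(len(samples), n0 + half_width + 1))
    k = t - n  # offsets of t from the contributing sample instants
    # Hann-style taper over the truncated support (choice is illustrative).
    w = 0.5 + 0.5 * np.cos(np.pi * k / (half_width + 1))
    return float(np.sum(samples[n] * np.sinc(k) * w))

# Sanity check: reconstruct a 1 kHz sinusoid (at 48 kHz) between samples.
fs, f = 48000.0, 1000.0
n = np.arange(4096)
x = np.sin(2 * np.pi * f * n / fs)
t = 2048.25  # a fractional sample position
approx = windowed_sinc_reconstruct(x, t)
exact = np.sin(2 * np.pi * f * t / fs)
print(abs(approx - exact))  # small, but nonzero: truncation error
```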

E

On Fri, Jun 19, 2015 at 4:12 PM, Sampo Syreeni <de...@iki.fi> wrote:

> On 2015-06-19, Ethan Duni wrote:
>
>  We theoretically need all samples from -inf to +inf in the regular
>> sampling theorem as well, [...]
>>
>
> Not exactly. If you take the typical sampling formula, with equidistant
> samples, you need them all. But in theory pretty much any numerable number
> of samples from any compact interval will do.
>
> I'm not 100% certain, but with polynomials in the distributional setting,
> I think you'll actually need -inf to +inf in some sense (equidistant
> sampling being sufficient but probably not quite necessary), despite the
> bandlimitation which usually makes the function rigid enough to be
> analytic, entire, and so resamplable and continuable from pretty much
> whatever you have at hand.
>
>  This happens basically because the sinc function dies off linearly [...]
>>
>
> Linearly? It dies off as 1/x. And that's part of the magic. You see:
>
> - 1/x dominates any decaying exponential, being in a sense their limit
> - exp(x) dominates any monomial, being in a sense their limit
> - log(x) dominates any root, being in a sense their limit
> - there's a fourth one, plus some integral equalities, here
>
> This stuff basically delimits in real terms the Schwartz space used to
> construct tempered distributions. It also delimits the L_p spaces. The fact
> that the 1/x growth rate is the limit of decaying exponentials and that we
> go through the weak* topology of the dual space is somehow the reason why
> we can pass to the 1/x limit of the Shannon-Whittaker interpolation
> formula, both in the simpler L_2 theory and in the more general
> distributional framework. And it's somehow clearly the reason why you can't
> have more than polynomial growth in (tempered) distributions.
>
> I don't understand this stuff fully myself, yet, but it's evidently there.
> So the limiting growth rate of the sinc function cannot be an accident. I
> think it comes from the dominating real convergence rate of any
> polynomially bounded tempered distribution, when approximated via milder
> distributions in the weak* topology.
>
>  [...] and we are dealing with signals with at most constant-ish
>> asymptotic behavior - so the contribution of a given sample to a given
>> reconstruction region is guaranteed to die off as you get farther away from
>> the region in question.
>>
>
> Not quite so. The proper way to say it is "when probed locally by nice
> enough test functions, the reconstruction works the same".
>
> That's a bitch because some functions within the space of tempered
> distributions can be plenty weird. The main counterexample I've found is
> f(x)=sin(x*e^x). That's bounded and continuous, so it induces a well
> behaved tempered distribution. Then we know that every derivative of a
> tempered distribution is also a tempered distribution.
> g(x)=f'(x)=cos(x*e^x)*D(x*e^x)=(1+x)*e^x*cos(x*e^x). That doesn't look
> polynomially growing at *all*, yet it's part of the space. (The reason is
> its fast oscillation while it grows.)
>
>  So for any finite delay, we can get a finite error bound on the
>> reconstruction. But in the case of a polynomial it seems to me that the
>> reconstruction in a given region (around t=0 say) could depend very
>> strongly on samples way off at t = +- 1,000,000,000, since the polynomial
>> is eventually going to be growing faster than the sinc is shrinking.
>>
>
> That's the problem: the local integration theory we use with the
> distributions doesn't work with your usual error metrics or notions of
> convergence. This sort of argument is meaningless there. What you need to
> do is bring in the whole set of test functions, in order to construct a
> nice functional, and then show it can be induced by a function which
> doesn't integrate in the normal sense against any L_2 function, say.
>
>  So I'm not seeing how we can get any error bounds for causal,
>> finite-delay approximations to the ideal reconstruction filter in the
>> polynomial case.
>>
>
> You'll have to go via the functional transposition operator.
>
>  We also need the property that the reconstruction can be approximated
>> with realizable filters in a useful way.
>>
>
> The sinc convolution is just fine even in this setting. It's just that we
> happened to prove its workability in a slightly more general setting.
>
> And yes, that blows my mind, too. :)
>
> --
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>
