Thanks for the reference and explanation, Robert. Now that you've jogged my
memory, I realize you covered this before, some time back. If I can find a
few minutes I will try to work out a version including the images from the
resampling. The trade-off between resampling and interpolation is not
entirely clear to me in this analysis.

So linearly interpolating two adjacent FIR polyphases is equivalent to a
single FIR with interpolated coefficients, i.e., we're using an affine
approximation of the underlying resampling kernel, and at high resampling
ratios this will be a good approximation. IIRC this has been covered on the
list before as well. However, it costs twice as much CPU as running a single
phase, so isn't the fair comparison to an FIR of double the order (and so,
double the resampling ratio)?
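
For concreteness, here is a minimal numpy sketch of that equivalence (the
prototype lowpass, phase count R, and tap count are toy values picked just
for illustration, not a recommended design):

import numpy as np

R = 8                                    # number of polyphases (oversampling factor)
taps_per_phase = 16
n = np.arange(R * taps_per_phase)
h = np.sinc(n / R - taps_per_phase / 2) * np.hamming(R * taps_per_phase)

phases = h.reshape(taps_per_phase, R).T  # phases[p] == h[p::R]

x = np.random.randn(taps_per_phase)      # current FIR state (most recent input samples)
p, frac = 3, 0.37                        # fractional position between phases p and p+1

# (a) linearly interpolate the outputs of two adjacent polyphases
y_a = (1 - frac) * np.dot(phases[p], x) + frac * np.dot(phases[p + 1], x)

# (b) a single FIR whose coefficients are the same linear interpolation of the phases
y_b = np.dot((1 - frac) * phases[p] + frac * phases[p + 1], x)

print(np.allclose(y_a, y_b))             # True: the same affine operation on x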

Ethan D

On Wed, Sep 6, 2017 at 9:57 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
> ---------------------------- Original Message ----------------------------
> Subject: Re: [music-dsp] Sampling theory "best" explanation
> From: "Ethan Duni" <ethan.d...@gmail.com>
> Date: Wed, September 6, 2017 4:49 pm
> To: "robert bristow-johnson" <r...@audioimagination.com>
> "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> --------------------------------------------------------------------------
>
> > rbj wrote:
> >> what do you mean by "non-ideal"? that it's not an ideal brick wall LPF?
> >> it's still LTI if it's some other filter **unless** you mean the possible
> >> aliasing.
> >
> > Yes, that is exactly what I am talking about. LTI systems cannot produce
> > aliasing.
> >
> > Without an ideal bandlimiting filter, resampling doesn't fulfill either
> > definition of time invariance. Not the classic one in terms of sample
> > shifts, and not the "common real time" one suggested for multirate cases.
> >
> > It's easy to demonstrate this by constructing a counterexample. Consider
> > downsampling by 2, and an input signal that contains only a single sinusoid
> > with frequency above half the (input) Nyquist rate, and at a frequency that
> > the non-ideal bandlimiting filter fails to completely suppress. To be LTI,
> > shifting the input by one sample should result in a half-sample shift in
> > the output (i.e., bandlimited interpolation). But this doesn't happen, due
> > to aliasing. This becomes obvious if you push the frequency of the input
> > sinusoid close to the (input) Nyquist frequency - instead of a half-sample
> > shift in the output, you get negation!
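
A quick numpy illustration of that counterexample (the tone frequency and the
deliberately short half-band anti-alias filter below are arbitrary choices for
the demo):

import numpy as np

fs = 48000.0
n = np.arange(4096)
x = np.cos(2 * np.pi * (0.49 * fs) / fs * n)   # tone just below the input Nyquist

# deliberately short (non-ideal) half-band anti-alias filter
taps = 15
k = np.arange(taps) - (taps - 1) / 2
h = 0.5 * np.sinc(0.5 * k) * np.hamming(taps)

def down2(sig):                                # filter, then keep every other sample
    return np.convolve(sig, h, mode='same')[::2]

y0 = down2(x)
y1 = down2(np.roll(x, 1))                      # same system, input shifted by one sample

# if this were time-invariant in "common real time", y1 would be (nearly) a
# half-sample shift of y0; instead the leaked alias comes out (nearly) negated
c = np.dot(y1[64:-64], y0[64:-64]) / np.dot(y0[64:-64], y0[64:-64])
print(round(c, 3))                             # close to -1
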
> >
> >> we draw the little arrows with different heights and we draw the impulses
> >> scaled with samples of negative value as arrows pointing down
> >
> > But that's just a graph of the discrete time sequence.
>
> well, even if the *information* necessary is the same, a graph of x[n]
> need only be little dots, one per sample.  or discrete lines (without
> arrowheads).
>
> but the arrow symbol used for an impulse stands in for something that is
> difficult to graph for a continuous-time function (not to be confused with a
> continuous function).  if the impulse heights and directions (up or down)
> are analogous to the sample value magnitudes and polarities, those graphing
> objects suffice to depict these *hypothetical* impulses in the
> continuous-time domain.
>
>
> >
> >> you could do SRC without linear interpolation (ZOH a.k.a. "drop-sample")
> >> but you would need a much larger table (if i recall correctly, 1024 times
> >> larger, so it would be 512Kx oversampling) to get the same S/N.  if you
> >> use 512x oversampling and ZOH interpolation, you'll only get about 55 dB
> >> S/N for an arbitrary conversion ratio.
> >
> > Interesting stuff, it didn't occur to me that the SNR would be that low.
> > How do you estimate SNR for a particular configuration (i.e., target
> > resampling ratio, fixed upsampling factor, etc)? Is that for ideal 512x
> > resampling, or does it include the effects of a particular filter design
> > choice?
>
> this is what Duane Wise and i were trying to show (
> https://www.researchgate.net/publication/266675823_Performance_of_Low-Order_Polynomial_Interpolators_in_the_Presence_of_Oversampled_Input
> ), and what Olli Niemitalo shows in his pink elephant paper (
> http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf ).
>
> so let's say that you're oversampling by a factor of R.  if the sample rate
> is 96 kHz and the audio is limited to 20 kHz, the oversampling ratio is 2.4.
> but now imagine it's *highly* oversampled (which we can get from polyphase
> FIR resampling) like R=32 or R=512 or R=512K.
>
> when it's upsampled (hypothetically stuffing 31 or 511 or (512K-1) zeros
> between samples and brick-wall low-pass filtering), the spectrum has energy
> in the baseband (from -Nyquist to +Nyquist of the original sample rate, Fs)
> and is then empty for the next 31 (or 511 or (512K-1)) image slots (each of
> them Fs wide), and the first non-zero image is centered at 32 or 512 or
> 512K x Fs.
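
Sketching that in numpy (R, the test tone, and the windowed-sinc stand-in for
the brick-wall filter below are arbitrary choices): after zero-stuffing and
lowpass filtering, the spectrum has energy only in the baseband, and nothing
above it until the image up at R x Fs.

import numpy as np

fs, R, N = 48000, 32, 1024
x = np.cos(2 * np.pi * 1000 * np.arange(N) / fs)   # a baseband tone

up = np.zeros(N * R)
up[::R] = x                                        # stuff R-1 zeros between samples

L = 16 * R                                         # windowed-sinc "brick wall",
k = np.arange(-L, L + 1)                           # cutoff at the original Nyquist
h = np.sinc(k / R) * np.blackman(2 * L + 1)
y = np.convolve(up, h, mode='same')

Y = np.abs(np.fft.rfft(y * np.hanning(len(y))))
b = len(y) // R                                    # FFT bins spanning one original Fs
print(20 * np.log10(Y[b:].max() / Y.max()))
# prints a large negative number of dB: nothing between the baseband and the
# first remaining image at R x Fs, apart from the filter's stopband leakage
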
>
> now if you're drop-sample or ZOH interpolating, you're convolving the train
> of weighted impulses with a rect() pulse function, and in the frequency
> domain you are multiplying by a sinc() function with a zero through every
> non-zero integer multiple of R x Fs (at 0 x Fs, the baseband, the sinc
> multiplies by virtually 1).  those zeros cut into the images and reduce them
> by a known amount.  multiplying the magnitude spectrum by sinc() is the same
> as multiplying the power spectrum by sinc^2().
>
> with linear interpolation, you're convolving that train of weighted impulses
> with a triangular pulse function instead, so in the frequency domain you're
> multiplying by a sinc^2() function and in the power spectrum you're
> multiplying by a sinc^4() function.
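
That convolution picture is easy to check numerically in the oversampled
domain; a small sketch with an arbitrary R (the sinc() and sinc^2() statements
above are just the Fourier transforms of these rect and triangle pulses):

import numpy as np

R = 8
x = np.random.randn(50)

up = np.zeros(len(x) * R)
up[::R] = x                                   # zero-stuffed stream

# drop-sample / ZOH: convolve the zero-stuffed stream with a width-R rect
zoh = np.convolve(up, np.ones(R))[:len(up)]

# linear interpolation: convolve with a width-2R triangle (peak of 1 at its center)
tri = 1 - np.abs(np.arange(-(R - 1), R)) / R
lin = np.convolve(up, tri)[R - 1 : R - 1 + len(up)]

# spot checks: ZOH repeats each sample; linear interpolation crosses between neighbors
print(np.allclose(zoh[:3 * R], np.repeat(x[:3], R)))                          # True
m, f = 5, 3                                   # a point between x[5] and x[6]
print(np.isclose(lin[m * R + f], (1 - f / R) * x[m] + (f / R) * x[m + 1]))    # True
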
>
> now those sinc^2 or sinc^4 functions really put a hole in those images,
> greatly reducing the area under those images in the power spectrum.
>
> now when we resample at an arbitrary rate, we can expect in the worst case
> that *all* of those images get folded back into the baseband.
>
> with 3rd-order B-spline (which i don't recommend) it's sinc^4 in the
> frequency domain and sinc^8 in the power spectrum.
>
> so you have to determine the area of each of those images (compared to the
> area of the baseband image) and add up all of the areas of those
> non-baseband images in the power spectrum, and that sum is the noise power
> (N) in this conversion process.  and the area of the baseband image is the
> signal power (S).  S/N is your signal-to-noise ratio and 10 log10( S/N ) is
> your S/N in dB.
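
A back-of-the-envelope version of that bookkeeping (a sketch, assuming the
worst case described above, i.e. a flat full-band signal with every image
folding back into the baseband, and ignoring any leakage from the polyphase
filter itself):

import numpy as np

def snr_db(R, sinc_power, n_images=2000, pts=1001):
    # sinc_power is the exponent on sinc() in the POWER spectrum:
    # 2 for drop-sample (ZOH), 4 for linear, 8 for 3rd-order B-spline
    d = np.linspace(-0.5, 0.5, pts)               # one image width, in units of Fs
    k = np.arange(1, n_images + 1)[:, None]       # image indices 1..n (and mirrored)
    signal = np.mean(np.sinc(d / R) ** sinc_power)            # area of the baseband image
    images = (np.sinc((k * R + d) / R) ** sinc_power
              + np.sinc((-k * R + d) / R) ** sinc_power)
    noise = np.mean(images, axis=1).sum()                     # total area of folded images
    return 10 * np.log10(signal / noise)

for R in (32, 512):
    print(R, round(snr_db(R, 2), 1), "dB drop-sample,", round(snr_db(R, 4), 1), "dB linear")

# for R = 512 this comes out around 60 dB for drop-sample and well over 100 dB
# for linear interpolation; a worst-case single tone sitting right at Nyquist
# (instead of a flat full band) lands several dB lower, in the neighborhood of
# the 55 dB figure quoted above.
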
>
> it **assumes** that your filter design choice is perfect and **all** of
> those images in between the baseband and (R Fs) are zero.  as a *separate*
> exercise, you can try to get a handle on the images between baseband and
> (R Fs) that your optimally-designed polyphase FIR filter didn't completely
> kill, and that residual noise power should be added to the other noise
> power, N, above.
>
> the point is that, with optimal upsampling, those images at non-zero
> integer multiples of (R Fs) get whacked pretty good with the sinc^4
> function, which is what the linear interpolation between subsamples gets
> you.
>
> take a look at the "Drop Sample Interpolation" section and the previous
> section to see how we get a handle on the theoretical approximations to S/N.
>
> ____
>
> hey, about mixing it up here on music-dsp, i'll take blame for some of it.
> i really do sorta pick fights with people who don't approve of my naked
> dirac delta functions.  here is why i think i can get away with it:
>
> if you have a nascent impulse (that is, a limiting approximation having
> constant area and shrinking width; it could be a rect() or a triangular or
> Gaussian pulse), we can't hear the difference between two such pulses if the
> area stays constant and the width is reduced from 2 microseconds to 1
> microsecond.  as long as no voltage limits are exceeded and everything stays
> LTI, the inherent LPF in our ears and head will make those two thin pulses
> (one twice as thin as the other) sound the same.
>
> so i don't give a rat's ass about the difference between the nascent delta
> functions and the ideal impulse that they are approximating (which is "not
> a function", but is a "distribution").  but the "sampling" or "sifting"
> property of a single dirac impulse function loses *all* of the data of the
> (hopefully bandlimited) input signal that is sampled, except for the
> the value at precisely the impulse time.  it doesn't matter what your ADC
> does, we can take the numbers that come outa the ADC and attach them to
> hypothetical impulses that are uniformly spaced in time, and pass that
> result through a brick-wall low-pass filter and get our original
> bandlimited continuous-time function back.  that's the sampling theorem.
>
> so, in interpolation, whether it's resampling for pitch shifting, for sample
> rate conversion, or for precision delay, we mathematically emulate a
> brick-wall filter acting on a hypothetically zero-stuffed discrete-time
> input (and, for an FIR, we need not include the hypothetical zero-stuffed
> samples in the FIR summation), and we resample at a different continuous
> time, potentially between adjacent discrete sample instances, in the dirac
> impulse train that is turned into a windowed-sinc() train by the practical
> brick-wall filter.
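
Here is a bare-bones sketch of that last point (tap count, taper, and the test
tone are arbitrary placeholder choices, not a recommended design): the
windowed sinc is evaluated only at the offsets of the actual input samples
around the requested continuous time, so the hypothetical zero-stuffed samples
never enter the summation.

import numpy as np

def resample_at(x, tau, half=16):
    # evaluate the windowed-sinc reconstruction of x at continuous time tau,
    # given in input-sample units (tau should be at least half samples away
    # from either end of x); only the 2*half nearest original samples are summed
    n0 = int(np.floor(tau))
    j = np.arange(n0 - half + 1, n0 + half + 1)           # nearby original samples
    t = tau - j                                           # offsets, in [-half, half)
    w = np.sinc(t) * np.cos(np.pi * t / (2 * half)) ** 2  # sinc with a raised-cosine taper
    return np.dot(x[j], w)

# quick check against a bandlimited test signal: a tone well below Nyquist
fs = 48000.0
x = np.cos(2 * np.pi * 1000.0 * np.arange(2048) / fs)
tau = 1000.37                                             # 0.37 samples past sample 1000
print(resample_at(x, tau), np.cos(2 * np.pi * 1000.0 * tau / fs))   # nearly equal
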
>
> --
>
> r b-j                  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
