>Nope. Ever heard of multistage interpolation?

I'm well aware that multistage interpolation gives cost savings relative to
single-stage interpolation, generally. That is beside the point: the costs
of interpolation still scale with the oversampling ratio and the quality
requirements, just as in single-stage interpolation. There's no magic in
multi-stage interpolation that avoids that relationship.
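
To make that scaling concrete, here's a rough back-of-the-envelope sketch.
The numbers are my own illustrative assumptions (a 2x first stage, a 0.45*fs
passband edge, and the usual fred harris tap-count rule of thumb
N ~ atten_dB / (22 * normalized transition width)), not a design procedure:

# Rough sketch only: how a two-stage interpolator's cost grows with the
# total oversampling ratio and the stopband attenuation.

def taps(atten_db, trans_width):
    # Approximate FIR length for a given stopband attenuation and
    # transition width (normalized to the rate the filter runs at).
    return atten_db / (22.0 * trans_width)

def two_stage_cost(total_ratio, atten_db, passband=0.45):
    l1 = 2                      # assumed sharp first stage (2x)
    l2 = total_ratio // l1      # relaxed second stage does the rest
    # Stage 1: transition band from 0.45 to 0.5 of the input rate,
    # i.e. 0.05/l1 of the stage-1 output rate.
    n1 = taps(atten_db, (0.5 - passband) / l1)
    # Stage 2: images sit near multiples of the stage-1 rate, so its
    # transition band is much wider: (l1 - 2*passband)/(l1*l2).
    n2 = taps(atten_db, (l1 - 2.0 * passband) / (l1 * l2))
    # Polyphase implementation: each stage costs roughly N multiplies per
    # sample entering it, so per *original* input sample:
    return n1 + l1 * n2

for ratio in (8, 32, 128):
    for atten in (60, 100):
        print("%4dx, %3d dB: ~%d mults per input sample"
              % (ratio, atten, round(two_stage_cost(ratio, atten))))

Even with the cheap second stage, the total work per input sample grows
roughly linearly with both the ratio and the attenuation, which is the
relationship I'm talking about.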

>that's just plain wrong and stupid, and that's what all advanced multirate
>books will also tell you.

You've been told repeatedly that this kind of abusive, condescending
behavior is not welcome here, and you need to cut it out immediately.

>Tell me, you don't have an extra half kilobyte of memory in a typical
>computer?

There are lots of DSP applications that don't run on personal computers,
but rather on very lightweight embedded targets. Memory tends to be at a
premium on those platforms.
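
For a sense of scale (these design points are purely illustrative
assumptions, not from any particular product), the coefficient table alone
for a polyphase oversampler can eat that "extra half kilobyte" and then
some:

# Illustrative only: polyphase coefficient-table footprint in float32.
def table_bytes(phases, taps_per_phase, bytes_per_coeff=4):
    return phases * taps_per_phase * bytes_per_coeff

for phases, taps in ((8, 16), (32, 16), (128, 32)):
    kib = table_bytes(phases, taps) / 1024.0
    print("%4d phases x %2d taps -> %.1f KiB of coefficients"
          % (phases, taps, kib))

That's 0.5 KiB for even the smallest case, before any state or scratch
buffers: nothing on a desktop, but a real line item on a microcontroller
with a few KiB of RAM.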

E

On Wed, Aug 19, 2015 at 3:55 PM, Peter S <peter.schoffhau...@gmail.com>
wrote:

> On 20/08/2015, Ethan Duni <ethan.d...@gmail.com> wrote:
> >
> > I don't dispute that linear fractional interpolation is the right choice
> > if you're going to oversample by a large ratio. The question is what is the
> > right balance overall, when considering the combined costs of
> > the oversampler and the fractional interpolator.
>
> It's hard to tell in general. It depends on various factors, including:
>
> - your desired/available CPU usage
> - your desired/available memory usage and cache size
> - the available instruction set of your CPU
> - your desired antialias filter steepness
> - your desired stopband attenuation
>
> ...and possibly other factors. Since these may vary widely, I think
> it is impossible to tell in general. What I read in multirate
> literature, and what is also my own experience, is that when using a
> relatively large oversampling ratio, it's more cost-effective to
> use linear interpolation at the higher stages (and that's Olli's
> conclusion as well).
>
> > You can leverage any finite interpolator to skip computations in an FIR
> > oversampler, not just linear. You get the most "skipping" in the case of
> > high oversampling ratio and linear interpolation, but the same trick still
> > works any time your oversampling ratio is greater than your interpolator
> > order.
>
> But to a varying degree. A FIR interpolator is still "heavy" if you
> skip samples where the coefficient is zero, compared to linear
> interpolation (but it is also higher quality).
>
> > The flipside is that the higher the oversampling ratio, the longer the FIR
> > oversampling filter needs to be in the first place.
>
> Nope. Ever heard of multistage interpolation? You may do a small FIR
> stage (say, 2x or 4x), and then a linear stage (or another,
> low-complexity FIR stage according to your desired specifications, or
> even further stages). Seems you still don't understand that you can
> oversample in multiple stages, and use a linear interpolator for the
> higher stages of oversampling... which is almost always better than
> using a single costly FIR filter to do the interpolation. You don't
> need to use a 512x FIR at >100 dB stopband attenuation, that's just
> plain wrong and stupid, and that's what all advanced multirate books
> will also tell you.
>
> Same for IIR case.
>
> >>Since memory is usually not an issue,
> >
> > There are lots of dsp applications where memory is very much the main
> > constraint.
>
> Tell me, you don't have an extra half kilobyte of memory in a typical
> computer? I hear, those have 8-32 GB of RAM nowadays, and CPU cache
> sizes are like 32-128 KiB.
>
> > The performance of your oversampler will be garbage if you do that. And so
> > there will be no point in worrying about the quality of fractional
> > interpolation after that point, since the signal you'll be interpolating
> > will be full of aliasing to begin with.
>
> Exactly. But it won't be "heavy"! So it's not the "oversampling" that
> makes the process heavy, but rather the interpolation / anti-aliasing
> filter!!
>
> > And that means it needs lots of resources, especially as the oversampling
> > ratio gets large. It's the required quality that drives the oversampler
> > costs (and filter design choices).
>
> Which is exactly what I said. If your specification is low, you can
> have a 128x oversampler that is (relatively) "low-cost". It's not the
> oversampling ratio that matters most...
>
> > If you are willing to accept low quality in order to save on CPU (or maybe
> > there's nothing in the upper frequencies that you're worried about), then
> > there's no point in resampling at all. Just use a low order fractional
> > interpolator directly on the signal.
>
> Seems you still miss the whole point of multistage interpolation. I
> recommend you read some books / papers on multirate processing.
>
> >>It should also be noted that the linear interpolation can be used for
> >>the upsampling itself as well, reducing the cost of your oversampling,
> >
> > Again, that would add up to a very low quality upsampler.
>
> You're wrong. Read Olli Niemitalo's paper again (and some multirate
> books). When the oversampling ratio is high and the signal is already
> oversampled, linear interpolation is (nearly) optimal. That implies a
> multistage upsampler, which is typically computationally a lot more
> efficient than a single-stage one, just as the multirate signal
> processing literature will tell you in detail.
>
> -P
_______________________________________________
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
