> and it doesn't require a table of coefficients, like doing higher-order
> Lagrange or Hermite would.

Well, you can compute those at runtime if you want - and you don't need a
terribly high-order Lagrange interpolator if you're already oversampled,
so it's not necessarily a problematic overhead.
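
For instance (a rough, untested sketch, using the standard closed form), a
3rd-order (4-point) Lagrange fractional delay can get its coefficients on
the fly from the fractional offset d:

/* 3rd-order (4-point) Lagrange fractional-delay coefficients, computed
 * at runtime from the fractional offset d in [0, 1).  The interpolated
 * output is y = h[0]*x[n-1] + h[1]*x[n] + h[2]*x[n+1] + h[3]*x[n+2].
 */
void lagrange3_coeffs(double d, double h[4])
{
    h[0] = -d * (d - 1.0) * (d - 2.0) / 6.0;
    h[1] =  (d + 1.0) * (d - 1.0) * (d - 2.0) / 2.0;
    h[2] = -(d + 1.0) * d * (d - 2.0) / 2.0;
    h[3] =  (d + 1.0) * d * (d - 1.0) / 6.0;
}

That's a dozen or so multiplies and adds to get the coefficients, on top
of the 4-tap dot product: not free, but not a table either.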

Meanwhile, the oversampler itself needs a table of coefficients, assuming
we're talking about FIR interpolation to avoid phase distortion. But
that's a single fixed table supporting a single oversampling ratio, so I
can see how it would add up to a memory savings compared to a bank of
tables for different fractional interpolation points, if you're after
really fine/arbitrary granularity. If we're talking about a fixed
fractional delay, I don't really see the advantage.
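
To put rough numbers on the memory side (everything here is illustrative,
not from anyone's actual code), a polyphase FIR oversampler keeps one
fixed table, something like:

/* Sketch of polyphase FIR oversampling by a fixed factor L: one fixed
 * coefficient table h[L][K] serves the whole oversampler.  L, K and the
 * filter itself are placeholders, not anything from this thread.
 */
#define L 8      /* oversampling ratio (illustrative)           */
#define K 16     /* taps per polyphase branch (illustrative)    */

static float h[L][K];    /* prototype lowpass split into L phases;
                          * fill in from whatever filter design you like */

/* compute the L oversampled outputs around input sample x[n] */
void oversample_block(const float *x, int n, float y[L])
{
    for (int p = 0; p < L; p++) {
        float acc = 0.0f;
        for (int k = 0; k < K; k++)
            acc += h[p][k] * x[n - k];
        y[p] = acc;
    }
}

With the illustrative numbers that's L*K = 128 coefficient words in one
fixed table. A bank of M fractional-delay tables at the same tap count
costs M*K, so the savings only appears once M is well past L, i.e. when
you really do want fine/arbitrary granularity.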

Obviously it will depend on the details of the application, but it seems
unbalanced on its face to use heavy oversampling and then the lightest
possible fractional interpolator. It's not clear to me that moderate
oversampling combined with a fractional interpolator of modestly high
order wouldn't be a better use of resources.

So it doesn't make a lot of sense to me to point to the low resource costs
of the first-order linear interpolator, when you're already devoting
resources to heavy oversampling in order to use it. They need to be
considered together and balanced, no? Your point about computing only the
subset of oversamples needed to drive the final fractional interpolator is
well-taken, but I think I need to see a more detailed accounting of that to
be convinced.
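
Here's my reading of that scheme, just so we're counting the same things
(a sketch, with names and numbers of my own invention):

/* "Compute only the oversamples you need": for a fractional read
 * position, compute just the two oversampled points that bracket it
 * (two polyphase branches of K taps each) and blend them linearly.
 * Uses the same illustrative h, L, K as the sketch above; assumes
 * pos >= 0.
 */
float frac_read(const float *x, double pos)
{
    int    n  = (int)pos;           /* integer sample index              */
    double mu = (pos - n) * L;      /* position on the oversampled grid  */
    int    p  = (int)mu;            /* lower bracketing phase, 0..L-1    */
    double f;

    if (p >= L)                     /* guard against edge-case rounding  */
        p = L - 1;
    f = mu - p;                     /* residual fraction for the blend   */

    float y0 = 0.0f, y1 = 0.0f;
    for (int k = 0; k < K; k++) {
        y0 += h[p][k] * x[n - k];
        if (p + 1 < L)
            y1 += h[p + 1][k] * x[n - k];   /* next phase, same n        */
        else
            y1 += h[0][k] * x[n + 1 - k];   /* wraps to phase 0 of n + 1 */
    }
    return (float)((1.0 - f) * y0 + f * y1);
}

With the illustrative K = 16 that's 32 multiplies plus the blend per
output sample, against a handful for a direct 4-point Lagrange using the
runtime coefficients above. The multiply counts alone don't settle it;
the question is how much quality the heavy oversampling buys for that
cost, which is the accounting I'd like to see.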

E

On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> On 8/19/15 1:43 PM, Peter S wrote:
>
>> On 19/08/2015, Ethan Duni <ethan.d...@gmail.com> wrote:
>>
>>> But why would you constrain yourself to use first-order linear
>>> interpolation?
>>>
>> Because it's computationally very cheap?
>>
>
> and it doesn't require a table of coefficients, like doing higher-order
> Lagrange or Hermite would.
>
>>> The oversampler itself is going to be a much higher order
>>> linear interpolator. So it seems strange to pour resources into that
>>>
>> Linear interpolation needs very little computation, compared to most
>> other types of interpolation. So I do not consider the idea of using
>> linear interpolation for higher stages of oversampling strange at all.
>> The higher the oversampling, the more optimal it is to use linear in
>> the higher stages.
>>
>>
> here, again, is where Peter and i are on the same page.
>
>>> So heavy oversampling seems strange, unless there's some hard
>>> constraint forcing you to use a first-order interpolator.
>>>
>> The hard constraint is CPU usage, which is higher in all other types
>> of interpolators.
>>
>>
> for plugins or embedded systems with a CPU-like core, computation burden
> is more of a cost issue than memory used.  but there are other embedded DSP
> situations where we are counting every word used.  8 years ago, i was
> working with a chip that offered for each processing block 8 instructions
> (there were multiple moves, 1 multiply, and 1 addition that could be done
> in a single instruction), 1 state (or 2 states, if you count the output as
> a state) and 4 scratch registers.  that's all i had.  ain't no table of
> coefficients to look up.  in that case memory is way more important than
> wasting a few instructions recomputing numbers that you might otherwise
> just look up.
>
>
>
>
>
> --
>
> r b-j                  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
_______________________________________________
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
