>rbj
>and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Robert, I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high-quality 512x
oversampling? So why are we then worried about the table space for the
fractional interpolator?
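
For concreteness, the kind of fractional interpolator I have in mind is a
4-point Catmull-Rom-flavored Hermite, where the polynomial coefficients
fall straight out of the fractional position, so there is no stored table
in the first place. Rough C sketch (names and details mine, just to show
the shape of it):

    /* 4-point, 3rd-order Hermite (Catmull-Rom tangents).  'frac' is the
     * fractional position in [0,1) between samples x0 and x1; xm1 and x2
     * are the neighboring samples.  Coefficients are computed on the fly,
     * so no coefficient table is needed. */
    static inline float hermite4(float frac, float xm1, float x0,
                                 float x1, float x2)
    {
        float c0 = x0;
        float c1 = 0.5f * (x1 - xm1);
        float c2 = xm1 - 2.5f*x0 + 2.0f*x1 - 0.5f*x2;
        float c3 = 0.5f*(x2 - xm1) + 1.5f*(x0 - x1);
        return ((c3 * frac + c2) * frac + c1) * frac + c0;
    }

That costs a handful of extra multiply-adds per output sample instead of a
table lookup, which seems to be exactly the memory-vs-cycles tradeoff in
question.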

I wonder if the salient design concern here is less about balancing
resources, and more about isolating and simplifying the portions of the
system needed to support arbitrary (as opposed to just very-high-but-fixed)
precision. I like the modularity of the high-oversampling/linear-interp
approach, since it supports arbitrary precision with a minimum of fussy
variable components or arcane coefficient calculations. It's got a lot
going for it in software engineering terms. But I'm on the fence about
whether it's the tightest use of resources (for whatever the constraints
happen to be). Typically the tightest designs are the arcane ones that
take a ton of debugging and optimization :P
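
To make sure I'm picturing the same structure, here is roughly the shape I
mean, treating the 512x polyphase FIR as a fixed black box so that only
the final linear interp depends on the requested fraction. Again just a
sketch with made-up names, not a real implementation:

    /* 'os' points at the signal already oversampled by OS_RATIO (by
     * whatever fixed prototype FIR / polyphase stage you like).  'pos'
     * is the desired read position in units of the *original* sample
     * rate.  Bounds checking omitted; assumes pos >= 0 and that os[]
     * holds enough samples. */
    #define OS_RATIO 512

    static float frac_read(const float *os, double pos)
    {
        double p = pos * OS_RATIO;          /* position at the high rate */
        long   i = (long)p;                 /* integer sample index      */
        float  f = (float)(p - (double)i);  /* fraction in [0,1)         */
        return os[i] + f * (os[i + 1] - os[i]);   /* first-order linear  */
    }

The only arcane part is the fixed prototype FIR design, which gets done
once, offline; everything that varies at runtime is those last few lines.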

E



On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> On 8/19/15 1:43 PM, Peter S wrote:
>
>> On 19/08/2015, Ethan Duni <ethan.d...@gmail.com> wrote:
>>
>>> But why would you constrain yourself to use first-order linear
>>> interpolation?
>>>
>> Because it's computationally very cheap?
>>
>
> and it doesn't require a table of coefficients, like doing higher-order
> Lagrange or Hermite would.
>
>>> The oversampler itself is going to be a much higher order
>>> linear interpolator. So it seems strange to pour resources into that
>>>
>> Linear interpolation needs very little computation, compared to most
>> other types of interpolation. So I do not consider the idea of using
>> linear interpolation for higher stages of oversampling strange at all.
>> The higher the oversampling, the more optimal it is to use linear in
>> the higher stages.
>>
>>
> here, again, is where Peter and i are on the same page.
>
>>> So heavy oversampling seems strange, unless there's some hard
>>> constraint forcing you to use a first-order interpolator.
>>>
>> The hard constraint is CPU usage, which is higher in all other types
>> of interpolators.
>>
>>
> for plugins or embedded systems with a CPU-like core, computation burden
> is more of a cost issue than memory used.  but there are other embedded DSP
> situations where we are counting every word used.  8 years ago, i was
> working with a chip that offered for each processing block 8 instructions
> (there were multiple moves, 1 multiply, and 1 addition that could be done
> in a single instruction), 1 state (or 2 states, if you count the output as
> a state) and 4 scratch registers.  that's all i had.  ain't no table of
> coefficients to look up.  in that case memory is way more important than
> wasting a few instructions recomputing numbers that you might otherwise
> just look up.
>
> --
>
> r b-j                  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
_______________________________________________
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
