>i would say way more than 2x if you're using linear in between.  if memory
>is cheap, i might oversample by perhaps as much as 512x and then use
>linear to get in between the subsamples (this will get you 120 dB S/N).
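
That 120 dB figure checks out, at least as a worst-case image level: linear
interpolation is convolution with a triangle pulse, whose magnitude response
is sinc^2. A quick numpy check (my own numbers, assuming full-band content
up to fs/2):

import numpy as np

# linear interpolation = convolution with a triangle pulse, so the
# magnitude response is sinc^2(f), with f in units of the oversampled
# rate R*fs.  content at fs/2 images to just below R*fs.
R = 512
worst_image = 1.0 - 0.5 / R                            # in units of R*fs
print(40.0 * np.log10(np.abs(np.sinc(worst_image))))   # about -120.4 dB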

But why would you constrain yourself to use first-order linear
interpolation? The oversampler itself is going to be a much higher-order
linear interpolator. So it seems strange to pour resources into that, just
so you can avoid putting them into the final fractional interpolator. Is
the justification that the oversampler is a fixed interpolator, whereas the
final stage is variable (so we don't want to muck around with anything too
complex there)? I've seen it claimed (by Julius Smith IIRC) that
oversampling by as little as 10% cuts the interpolation filter requirements
by over 50%. So heavy oversampling seems strange, unless there's some hard
constraint forcing you to use a first-order interpolator.
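
To put rough numbers on that, the standard Kaiser-window length estimate,
N ~ (A - 7.95)/(2.285*dw), shows how fast the required taps fall as the
transition band widens. A sketch (the 100 dB stopband and the headroom
values are just made-up examples):

import numpy as np

# FIR length vs. oversampling headroom, via the Kaiser estimate
# N ~ (A - 7.95) / (2.285 * dw), where A = stopband attenuation in dB
# and dw = transition width in rad/sample.
A = 100.0
for margin in (0.05, 0.10, 0.20, 0.50):
    band_edge = np.pi / (1.0 + margin)   # signal edge after oversampling by (1 + margin)
    dw = np.pi - band_edge               # transition band runs from there to Nyquist
    taps = int(np.ceil((A - 7.95) / (2.285 * dw)))
    print(f"{margin:4.0%} headroom -> ~{taps} taps")

Taps scale inversely with the headroom, so each doubling of the margin
roughly halves the filter.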

>quite familiar with it.

Yeah that was more for the list in general, to keep this discussion
(semi-)grounded.

E

On Wed, Aug 19, 2015 at 9:15 AM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> On 8/18/15 11:46 PM, Ethan Duni wrote:
>
>> > for linear interpolation, if you are delayed by 3.5 samples and you
>> keep that delay constant, the transfer function is
>> >
>> >   H(z)  =  (1/2)*(1 + z^-1)*z^-3
>> >
>> >that filter goes to -inf dB as omega gets closer to pi.
>>
>> Note that this holds for a symmetric fractional delay filter of any odd
>> order (i.e., Lagrange interpolation filter, windowed sinc, etc.). It's not
>> an artifact of the simple linear approach,
>>
>
> at precisely Nyquist, you're right.  as you approach Nyquist, linear
> interpolation is worse than cubic Hermite but better than cubic B-spline
> (better in terms of less roll-off, worse in terms of killing images).
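
Quick numerical check of that ordering, comparing the three kernels at a
half-sample offset (mu = 0.5) just below Nyquist. My own sketch, using the
standard closed-form coefficients for each kernel:

import numpy as np

# half-sample (mu = 0.5) fractional-delay kernels
kernels = {
    "linear":         np.array([1.0, 1.0]) / 2.0,
    "cubic Hermite":  np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0,
    "cubic B-spline": np.array([1.0, 23.0, 23.0, 1.0]) / 48.0,
}
w = 0.9 * np.pi                                  # a bit below Nyquist
for name, h in kernels.items():
    H = np.exp(-1j * w * np.arange(len(h))) @ h  # frequency response at w
    print(f"{name:15s} {20*np.log10(np.abs(H)):6.1f} dB")
# prints roughly: Hermite -12.7 dB, linear -16.1 dB, B-spline -17.7 dB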
>
> it's a feature of the symmetric, finite nature of the fractional
>> interpolator. Since there are good reasons for the symmetry constraint, we
>> are left to trade off oversampling and filter order/design to get the final
>> passband as flat as we need.
>>
>> My view is that if you are serious about maintaining fidelity across the
>> full bandwidth, you need to oversample by at least 2x.
>>
>
> i would say way more than 2x if you're using linear in between.  if memory
> is cheap, i might oversample by perhaps as much as 512x and then use linear
> to get in between the subsamples (this will get you 120 dB S/N).
>
> That way you can fit the transition band of your interpolation filter
>> above the signal band. In applications where you are less concerned about
>> full bandwidth fidelity, oversampling isn't required. Some argue that 48kHz
>> sample rate is already effectively oversampled for lots of natural
>> recordings, for example. If it's already at 96kHz or higher I would not
>> bother oversampling further.
>>
>
> i might **if** i want to resample by an arbitrary ratio and i am doing
> linear interpolation between the new over-sampled samples.
>
> remember, when we oversample for the purpose of resampling, if the
> prototype LPF is FIR (you know, the polyphase thingie), then you need not
> calculate all of the new over-sampled samples.  only the two you need to
> linearly interpolate between.  so oversampling by a large factor only costs
> more in terms of memory for the coefficient storage.  not in computational
> effort.
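
For anyone following along, here's roughly what that bookkeeping looks
like: a toy numpy/scipy sketch (parameters are arbitrary, and it ignores
startup transients and the prototype's bulk delay):

import numpy as np
from scipy.signal import firwin

R, taps_per_phase = 512, 16                      # oversampling ratio, taps per branch
proto = R * firwin(R * taps_per_phase, 1.0 / R)  # prototype LPF at the R*fs rate
phases = proto.reshape(taps_per_phase, R).T      # phases[p] = proto[p::R]

def frac_read(x, n, mu):
    """x interpolated at position n + mu (0 <= mu < 1).  only two of the
    R polyphase branches are ever computed."""
    p = mu * R
    p0 = int(p); frac = p - p0                   # the two bracketing subsamples
    seg0 = x[n - taps_per_phase + 1 : n + 1][::-1]
    if p0 + 1 < R:
        p1, seg1 = p0 + 1, seg0
    else:                                        # wrap: next subsample is phase 0
        p1 = 0                                   # of the *following* input sample
        seg1 = x[n - taps_per_phase + 2 : n + 2][::-1]
    y0 = phases[p0] @ seg0                       # subsample below...
    y1 = phases[p1] @ seg1                       # ...and above the target
    return (1.0 - frac) * y0 + frac * y1         # linear interp between them

Per output that's two short dot products plus the final lerp, however big
R gets; only the coefficient table grows with R.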
>
> Also this is recommended reading for this thread:
>>
>> https://ccrma.stanford.edu/~jos/Interpolation/
>>
>>
> quite familiar with it.
>
> --
>
> r b-j                  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>