Interesting story about the interpolation noise from very
highly oversampled signal approximations. I tend to think that if it
doesn't involve an actual sinc function of significant width and accuracy,
then the up-sampling is wrong unless the signal is prepared for it.
I can imagine in sample processing
On 8/26/15 9:47 PM, Ethan Duni wrote:
>15.6 dB + (12.04 dB) * log2( Fs/(2B) )
Oh I see, you're actually taking the details of the sinc^2 into account.
really, just the fact that the sinc^2 has nice deep zeros at every
integer multiple of Fs (except 0).
What I had in mind was more of a worst-case analysis where we just
call the sin() component 1 and then look at the 1/n^2 decay (which is
12 dB per octave).
>15.6 dB + (12.04 dB) * log2( Fs/(2B) )
Oh I see, you're actually taking the details of the sinc^2 into account.
What I had in mind was more of a worst-case analysis where we just call the
sin() component 1 and then look at the 1/n^2 decay (which is 12dB per
octave). Which we see in the second t
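For concreteness, here is a minimal sketch (Python/numpy; the helper
name is mine, not from the thread) that evaluates the quoted estimate
15.6 dB + 12.04 dB * log2(Fs/(2B)) at a few oversampling ratios:

    import numpy as np

    def linear_interp_snr_db(ratio):
        # ratio = Fs/(2B), the oversampling factor in the quoted formula
        return 15.6 + 12.04 * np.log2(ratio)

    for ratio in [2, 8, 64, 512]:
        print(ratio, round(linear_interp_snr_db(ratio), 1), "dB")

At 512x this comes out to roughly 124 dB, consistent with the ~120 dB
figure quoted elsewhere in the thread.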
On 8/25/15 7:08 PM, Ethan Duni wrote:
>if you can, with optimal coefficients designed with the tool of your
choice, so i am ignoring any images between B and Nyquist-B, upsample
by 512x and then do linear interpolation between adjacent samples for
continuous-time interpolation, you can show that it's something like
12 dB S/N per octave
On 25/08/2015 5:41 AM, robert bristow-johnson wrote:
maybe in an ASIC or an FPGA, but in DSP code or regular-old software, i
don't see the advantage of cubic or higher-order interpolation unless
memory is *really* tight and you gotta lotta MIPs to burn.
For discussion's sake, on Haswell you hav
>if you can, with optimal coefficients designed with the tool of your
choice, so i am ignoring any images between B and Nyquist-B, upsample by
512x and then do linear interpolation between adjacent samples for
continuous-time interpolation, you can show that it's something like 12 dB
S/N per octave
On 8/24/15 11:18 AM, Sampo Syreeni wrote:
On 2015-08-19, Ethan Duni wrote:
and it doesn't require a table of coefficients, like doing
higher-order Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling?
On 2015-08-19, Ethan Duni wrote:
and it doesn't require a table of coefficients, like doing
higher-order Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling?
In m
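As a sanity check on that premise, here is a minimal sketch (Python
with numpy/scipy; the structure is my own, not anyone's posted code)
of the scheme under discussion: a large prototype FIR does the 512x
oversampling, and linear interpolation reads between the oversampled
points:

    import numpy as np
    from scipy.signal import resample_poly

    L = 512                          # oversampling ratio from the thread
    x = np.random.randn(1024)        # stand-in input signal
    y = resample_poly(x, L, 1)       # big prototype FIR under the hood

    def sample_at(t):
        # Evaluate at fractional input index t: find the two nearest
        # oversampled points and linearly interpolate between them.
        u = t * L
        n = int(u)
        frac = u - n
        return (1 - frac) * y[n] + frac * y[n + 1]

    print(sample_at(100.37))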
On 22/08/2015, Sampo Syreeni wrote:
>
> The conjugate sine to +1, -1, +1, -1, ... is 0, 0, 0, 0... Just phase
> shift the original sine at the Nyquist frequency.
Let me ask: what do you mean by "conjugate sine"?
If you mean "complex conjugate", and assume the sine to be the real
part of a complex exponential
On 2015-08-18, Tom Duffy wrote:
In order to reconstruct that sinusoid, you'll need a filter with an
infinitely steep transition band. You've demonstrated that SR/2
aliases to 0Hz, i.e. DC. That digital stream of samples is not
reconstructable.
The conjugate sine to +1, -1, +1, -1, ... is 0, 0, 0, 0... Just phase
shift the original sine at the Nyquist frequency.
On 2015-08-18, Ethan Duni wrote:
>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1,
1, -1...
The sampling theorem requires that all frequencies be *below* the
Nyquist frequency.
Technically it doesn't. The basic form just doesn't really say anything
about the precise Nyquist frequency.
Okay, I'll risk "exceeding my daily message limit". If the
administrators think it is inappropriate, dealing with that is at
their discretion.
Here is another proof that the alias images in the spectrum are caused
by the sampling/upsampling, not the interpolation:
Let's replace linear interpolation
And besides, no one ever said that Olli's graph depicts analytical
frequency responses of continuous time interpolators. The graphs come
from a musicdsp.org code entry:
http://musicdsp.org/archive.php?classid=5#49
There's no comment whatsoever, just the code and the graphs.
If you read his 65-page paper
So let me get this straight - you have an *imaginary* graph in your
head, depicting the frequency response of a continuous time linearly
interpolated signal, and you keep arguing about this *imaginary* graph
(maybe to "feed your fragile ego" and to prove that you "won").
That is *not* what you see
So you claim that the graph depicts a sinc^2 graph, and it shows the
frequency response of a continuous time linearly interpolated signal,
and involves no resampling.
That is false. That is not how Olli created his graph. First, the
continuous time signal (which, by the way, already contains an
infinite number of alias images)
On 22/08/2015, Ethan Duni wrote:
>
> So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
> version thereof? My point was that there are no effects of resampling
> visible in the graphs.
And you're wrong - all those 88 alias images are "effects of resampling"...
> That has
>Naturally, there's going to be some jaggedness in the spectrum because
>of the noise. So, obviously, that is not sinc^2 then.
So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
version thereof? My point was that there are no effects of resampling
visible in the graphs. Th
On 22/08/2015, Ethan Duni wrote:
>
> We've been over this repeatedly, including in the very post you are
> responding to. The fact that there are many ways to produce a graph of the
> interpolation spectrum is not in dispute, nor is it germane to my point.
Earlier you disputed that there's no upsampling
>1) Olli Niemitalo's graph *is* equivalent to the spectrum of
>upsampled white noise.
We've been over this repeatedly, including in the very post you are
responding to. The fact that there are many ways to produce a graph of the
interpolation spectrum is not in dispute, nor is it germane to my point.
Since you constantly derail this topic with irrelevant talk, let me
instead prove that
1) Olli Niemitalo's graph *is* equivalent to the spectrum of
upsampled white noise.
2) Olli Niemitalo's graph does *not* depict sinc(x)/sinc^2(x).
First I'll prove 1).
Using palette modification, I extracted
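Whatever the palette trick was, claim 1) is easy to check numerically;
a minimal sketch (Python/numpy, my own code, not Olli's or Peter's)
that upsamples white noise with linear interpolation and takes its
spectrum:

    import numpy as np

    L = 88                                   # comparable to the 88x figure above
    x = np.random.randn(4096)                # white noise
    n_new = np.arange(len(x) * L) / L
    y = np.interp(n_new, np.arange(len(x)), x)   # linear interpolation
    S = 20 * np.log10(np.abs(np.fft.rfft(y)) + 1e-12)
    # The average of S follows the interpolator's sinc^2-like response,
    # with notches at multiples of the original sampling rate.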
>Which contains alias images of the original spectrum, which was my point.
There is no "original spectrum" pictured in that graph. Only the responses
of the interpolators. There is no reference to any input signal at all.
>No one claimed there was fractional delay involved.
Fractional delay is a
On 21/08/2015, Ethan Duni wrote:
> The details of how the graphs were generated don't really matter.
Then why do you keep insisting that they're generated by plotting sinc^2(x) ?
> The point
> is that the only effect shown is the spectrum of the continuous-time
> polynomial interpolator.
Which
The details of how the graphs were generated don't really matter. The point
is that the only effect shown is the spectrum of the continuous-time
polynomial interpolator. The additional spectral effects of delaying and
resampling that continuous-time signal (to get fractional delay, for
example) are not shown.
On 21/08/2015, Ethan Duni wrote:
> So you agree that the effects of resampling are not shown, and all we see
> is the spectrum of the continuous time polynomial interpolators.
I claim that they are aliases of the original spectrum.
Just as you also call them:
"It shows the aliasing left by line
Also, you even contradict yourself. You claim that:
1) Olli's graph was created by graphing sinc(x), sinc^2(x), and not via FFT.
2) The artifacts from the resampling would be barely visible, because
the oversampling rate is quite high.
So, if - according to 2) - the artifacts are not visible because the
oversampling rate is quite high
>Since that image is not meant to "illustrate the effects of
>resampling", but rather, to "illustrate the effects of interpolation",
>*obviously* it doesn't focus on the aliasing from the resampling.
So you agree that the effects of resampling are not shown, and all we see
is the spectrum of the continuous time polynomial interpolators.
A sampled signal contains an infinite number of aliases:
http://morpheus.spectralhead.com/img/sampling_aliases.png
"the spectrum is replicated infinitely often in both directions"
These are called aliases of the spectrum. You do not need to "fold
back" the aliasing via resampling for them to become aliases.
On 21/08/2015, Ethan Duni wrote:
>>It shows *exactly* the aliasing
>
> It shows the aliasing left by linear interpolation into the continuous time
> domain. It doesn't show the additional aliasing produced by then delaying
> and sampling that signal. I.e., the images that would get folded back
> onto the new baseband
>It shows *exactly* the aliasing
It shows the aliasing left by linear interpolation into the continuous time
domain. It doesn't show the additional aliasing produced by then delaying
and sampling that signal. I.e., the images that would get folded back onto
the new baseband, disturbing the sin
Let's repeat the same with a 50 Hz sine wave, sampled at 500 Hz, then
linearly interpolated and resampled at 44.1 kHz:
http://morpheus.spectralhead.com/img/sine_aliasing.png
The resulting alias frequencies are at: 450 Hz, 550 Hz, 950 Hz, 1050
Hz, 1450 Hz, 1550 Hz, 1950 Hz, 2050 Hz, 2450 Hz, 2550 Hz, and so on.
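Those numbers follow the usual image pattern k*500 +/- 50 Hz; a minimal
sketch (plain Python, my own helper) that lists them up to the new
Nyquist frequency:

    fs_old, f0, half_fs_new = 500, 50, 22050
    images = []
    k = 1
    while k * fs_old + f0 <= half_fs_new:
        images += [k * fs_old - f0, k * fs_old + f0]
        k += 1
    print(images[:10])   # [450, 550, 950, 1050, 1450, 1550, 1950, 2050, 2450, 2550]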
On 21/08/2015, Ethan Duni wrote:
>>Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
>>*exactly* upsampling
>
> That is not what is shown in that graph. The graph simply shows the
> continuous-time frequency response of the interpolation polynomials,
> graphed up to 22kHz. No resampling is depicted
>Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
>*exactly* upsampling
That is not what is shown in that graph. The graph simply shows the
continuous-time frequency response of the interpolation polynomials,
graphed up to 22kHz. No resampling is depicted, or the frequency response
Upsampling means, that the sampling rate increases. So if you have a
250 Hz signal, and create a 22000 Hz signal from it, that is - by
definition - upsampling.
That's *exactly* what upsampling means... You insert new samples
between the original ones, and interpolate between them (using
whatever interpolation method you like).
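In code that definition is one line per step; a minimal sketch (numpy)
of upsampling by an integer factor with linear interpolation between
the original samples:

    import numpy as np

    def upsample_linear(x, L):
        # New sample grid with L-1 interpolated points between originals
        n_new = np.arange((len(x) - 1) * L + 1) / L
        return np.interp(n_new, np.arange(len(x)), x)

    print(upsample_linear(np.array([0.0, 1.0, 0.0]), 4))
    # [0.   0.25 0.5  0.75 1.   0.75 0.5  0.25 0.  ]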
On 21/08/2015, Ethan Duni wrote:
>>In this graph, the signal frequency seems to be 250 Hz, so this graph
>>shows the equivalent of about 22000/250 = 88x oversampling.
>
> That graph just shows the frequency responses of various interpolation
> polynomials. It's not related to oversampling.
Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
*exactly* upsampling
>In this graph, the signal frequency seems to be 250 Hz, so this graph
>shows the equivalent of about 22000/250 = 88x oversampling.
That graph just shows the frequency responses of various interpolation
polynomials. It's not related to oversampling.
E
On Thu, Aug 20, 2015 at 5:40 PM, Peter S
wrote:
In the case of variable pitch playback with interpolation, here are
the frequency responses:
http://musicdsp.org/files/other001.gif
(graphs by Olli Niemitalo)
In this case, there's no zero at the original Nyquist freq, rather
there are zeros at the original sampling rate and its multiples.
So i
In the starting post, it was not specified that resampling was also
used - the question was:
"Is it possible to use a filter to compensate for high frequency
signal loss due to interpolation? For example linear or hermite
interpolation."
Without specifying that variable rate playback is involved,
Let me just add, that in case of having a non-oversampled linearly
interpolated fractional delay line with exactly 0.5 sample delay (most
high-frequency roll-off position), the frequency response formula is
not sinc^2, but rather, sin(2*PI*f)/(2*sin(PI*f)), as I discussed
earlier.
In that case, th
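A quick numerical check of that formula (numpy sketch, my own code;
note the expression simplifies to |cos(pi*f)|, which indeed goes to
zero, i.e. -inf dB, at f = 0.5):

    import numpy as np

    f = np.array([0.1, 0.25, 0.4, 0.49, 0.499])   # cycles/sample; Nyquist = 0.5
    H = np.abs(np.sin(2*np.pi*f) / (2*np.sin(np.pi*f)))
    print(20 * np.log10(H))   # ~ -0.4, -3.0, -10.2, -30.1, -50.1 dB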
>If all you're trying to do is mitigate the rolloff of linear interp
That's one concern, and by itself it implies that you need to oversample by
at least some margin to avoid having a zero at the top of your audio band
(along with a transition band below that).
But the larger concern is the overa
As far as the oversampling + linear interpolation approach goes, I have to
ask... why oversample so much (512x)?
Purely from a rolloff perspective, it seems you can figure out what your
returns are going to be by calculating sinc^2 at (1/upsample_ratio) for a
variety of oversampling ratios. Here's
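For example (numpy sketch, following the post's suggestion of
evaluating sinc^2 at 1/upsample_ratio; np.sinc is the normalized
sinc):

    import numpy as np

    for L in [2, 4, 16, 64, 512]:
        droop_db = 20 * np.log10(np.sinc(1.0 / L) ** 2)
        print(L, round(droop_db, 5), "dB")
    # L=2 gives the familiar -7.8 dB; by 512x the droop is negligible.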
Here's a graph of performance in mflops of varying length FFT
transforms from the fftw.org benchmark page, for Intel Pentium 4:
http://morpheus.spectralhead.com/img/fftw_benchmark_pentium4.png
Afaik Pentium 4 has 16 KB of L1 data cache. If you check the graph,
around 8-16k the performance starts to drop
Let's analyze your suggestion of using a FIR filter at f = 0.5/512 =
0.0009765625 for an interpolation filter for 512x oversampling.
Here's the frequency response of a FIR filter of length 1000:
http://morpheus.spectralhead.com/img/fir512_1000.png
Closeup of the frequency range between 0-0.01 (cu
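The experiment is easy to repeat; a minimal sketch (scipy; the taps
and cutoff are the numbers described above, everything else is my own
assumption):

    import numpy as np
    from scipy.signal import firwin, freqz

    h = firwin(1000, 0.5 / 512, fs=1.0)   # 1000 taps, cutoff in cycles/sample
    w, H = freqz(h, worN=16384, fs=1.0)
    mag_db = 20 * np.log10(np.abs(H) + 1e-12)
    print(mag_db[w < 0.01].min())         # attenuation reached within 0..0.01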
On 20/08/2015, Ethan Duni wrote:
>
> Wasn't the premise that memory
> was cheap, so we can store a big prototype FIR for high quality 512x
> oversampling? So why are we then worried about the table space for the
> fractional interpolator?
And the other reason - the coefficients for a 2000-point w
On 20/08/2015, Ethan Duni wrote:
>
> Wasn't the premise that memory
> was cheap, so we can store a big prototype FIR for high quality 512x
> oversampling? So why are we then worried about the table space for the
> fractional interpolator?
For the record, wasn't it you who said memory is often a c
On 20/08/2015, Ethan Duni wrote:
> But I'm on the fence about
> whether it's the tightest use of resources (for whatever constraints).
Then try and measure it yourself - you don't believe my words anyways.
-P
Hi,
A suggestion for those working on practical implementations, and
something to lighten up the tone of the discussion: I know people who
worked on all kinds of (semi-)pro implementations back when I wasn't
even into more than basic DSP yet.
The tradeoffs about engineering and implementing on a pl
>rbj
>and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the
On 20/08/2015, Ethan Duni wrote:
> Ugh, I suppose this is what I get for attempting to engage with Peter S
> again. Not sure what I was thinking...
Well, you asked, "why use linear interpolation at all?" We told you
the advantages - fast computation, no coefficient table needed, and
(nearly) optimal quality at high oversampling ratios.
Ugh, I suppose this is what I get for attempting to engage with Peter S
again. Not sure what I was thinking...
E
On 20/08/2015, Peter S wrote:
>
> No one said there is. Yet linear interpolation *can* reduce savings in
(*) correction: reduce costs
On 20/08/2015, Ethan Duni wrote:
>>Nope. Ever heard of multistage interpolation?
>
> I'm well aware that multistage interpolation gives cost savings relative to
> single-stage interpolation, generally. That is beside the point: the costs
> of interpolation all still scale with oversampling ratio and quality
> requirements, just like in single-stage interpolation.
>Nope. Ever heard of multistage interpolation?
I'm well aware that multistage interpolation gives cost savings relative to
single-stage interpolation, generally. That is beside the point: the costs
of interpolation all still scale with oversampling ratio and quality
requirements, just like in single-stage interpolation.
"3.2 Multistage
3.2.1 Can I interpolate in multiple stages?
Yes, so long as the interpolation ratio, L, is not a prime number.
For example, to interpolate by a factor of 15, you could interpolate
by 3 then interpolate by 5. The more factors L has, the more choices
you have. For example you cou
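A minimal sketch (plain Python, my own helper) of that factoring step:

    def stage_factors(L):
        # Split an interpolation ratio into prime factors, one per stage.
        factors, d = [], 2
        while L > 1:
            while L % d == 0:
                factors.append(d)
                L //= d
            d += 1
        return factors

    print(stage_factors(15))    # [3, 5] -> interpolate by 3, then by 5
    print(stage_factors(512))   # nine stages of 2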
On 20/08/2015, Ethan Duni wrote:
>
> I don't dispute that linear fractional interpolation is the right choice if
> you're going to oversample by a large ratio. The question is what is the
> right balance overall, when considering the combined costs of
> the oversampler and the fractional interpolator.
>To quote Olli Niemitalo:
>
>"The presented optimal interpolators make it possible to do
>transparent-quality resampling for even the most demanding
>applications with only 2x or 4x oversampling before the interpolation.
>However, in most cases simple linear interpolation combined with a
>very high oversampling ratio.
On 19/08/2015, Ethan Duni wrote:
>
> Obviously it will depend on the details of the application, it just seems
> kind of unbalanced on its face to use heavy oversampling and then the
> lightest possible fractional interpolator.
It should also be noted that the linear interpolation can be used for
On 19/08/2015, Ethan Duni wrote:
>
> Obviously it will depend on the details of the application, it just seems
> kind of unbalanced on its face to use heavy oversampling and then the
> lightest possible fractional interpolator. It's not clear to me that a
> moderate oversampling combined with a higher-order fractional interpolator
>and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Well, you can compute those at runtime if you want - and you don't need a
terribly high order Lagrange interpolator if you're already oversampled, so
it's not necessarily a problematic overhead.
Me
On 8/19/15 1:43 PM, Peter S wrote:
On 19/08/2015, Ethan Duni wrote:
But why would you constrain yourself to use first-order linear
interpolation?
Because it's computationally very cheap?
and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.
Th
On 19/08/2015, Ethan Duni wrote:
>
> But why would you constrain yourself to use first-order linear
> interpolation?
Because it's computationally very cheap?
> The oversampler itself is going to be a much higher order
> linear interpolator. So it seems strange to pour resources into that
Linear
Sometimes I feel the personal integrity about these undergrad-level
scientific quests is nowhere to be found with some people, and that's a
shame.
Working on a decent subject like these mathematical approximations in
digital signal processing should be accompanied by at least some
self-
>i would say way more than 2x if you're using linear in between. if memory
is cheap, i might oversample by perhaps as much as 512x and then use
linear to get in between the subsamples (this will get you 120 dB S/N).
But why would you constrain yourself to use first-order linear
interpolation? Th
On 8/18/15 11:46 PM, Ethan Duni wrote:
> for linear interpolation, if you are delayed by 3.5 samples and you
keep that delay constant, the transfer function is
>
> H(z) = (1/2)*(1 + z^-1)*z^-3
>
>that filter goes to -inf dB as omega gets closer to pi.
Note that this holds for a symmetric fractional delay filter of any odd order
Comparison of the two formulas from previous post: (1) in blue, sinc^2
(2) in red:
http://morpheus.spectralhead.com/img/sinc.png
(1)  sin(2*pi*x) / (2*sin(pi*x))
(Formula from Steven W. Smith, absolute value taken on graph)
(2)  (sin(pi*x)/(pi*x))^2
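The same comparison in a few lines (numpy sketch, my own code; np.sinc
is the normalized sinc, so np.sinc(x)**2 is formula (2)):

    import numpy as np

    x = np.linspace(0.01, 0.49, 200)      # avoid the 0/0 at x = 0
    f1 = np.abs(np.sin(2*np.pi*x) / (2*np.sin(np.pi*x)))   # formula (1)
    f2 = np.sinc(x) ** 2                                   # formula (2)
    print(float(np.max(np.abs(f1 - f2))))  # the two curves clearly differ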
On 18/08/2015, Nigel Redmon wrote:
> I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert's,
> hence the “No?” in my reply to him).
>
> The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8
> dB at 0.5 of the sample rate...
A half-sample delay using linear interpolation has -Inf dB gain at
Nyquist, not -7.8 dB.
On 19/08/2015, Peter S wrote:
> Another way to show that half-sample delay has -Inf gain at Nyquist:
> see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
> will have a zero at z=-1. A zero on the unit circle means -Inf gain,
> and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
> -Inf gain at Nyquist
Another way to show that half-sample delay has -Inf gain at Nyquist:
see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
will have a zero at z=-1. A zero on the unit circle means -Inf gain,
and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
-Inf gain at Nyquist
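That's a one-liner to verify (numpy sketch):

    import numpy as np

    # Zero of H(z) = 0.5 + 0.5*z^-1, i.e. roots of 0.5*z + 0.5:
    print(np.roots([0.5, 0.5]))   # [-1.], a zero exactly at Nyquist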
> for linear interpolation, if you are delayed by 3.5 samples and you
keep that delay constant, the transfer function is
>
> H(z) = (1/2)*(1 + z^-1)*z^-3
>
>that filter goes to -inf dB as omega gets closer to pi.
Note that this holds for a symmetric fractional delay filter of any odd order
(i.
On 18/08/2015, Peter S wrote:
>
> Similarly, even if frequency f=0.5 may be considered ill-specified
> (because it's critical frequency), you can still approach it to
> arbitrary precision, and the gain will approach -infinity. So
>
> f=0.4
> f=0.49
> f=0.499
> f=0.4999
> f=0.49999
> f=0.499999
On 18/08/2015, Peter S wrote:
> Even if f=0.5 is a critical frequency. f=0.4 isn't,
> and it's quite close to f=0.5.
(*) at 44.1 kHz sampling rate, that's precisely 22049.559 Hz.
On 18/08/2015, Ethan Duni wrote:
>>You cannot calculate 1/x when x=0, can you? Since that's division by zero.
>>Yet you'll know when x tends to zero from right towards left, then 1/x
>>will tend to +infinity.
>
> Not sure what that is supposed to have to do with the present subject.
You cannot ca
>Okay, I get what you mean. But that doesn't change the frequency
>response of a half-sample delay, or doesn't mean that a half-sample
>delay doesn't have a specific gain at Nyquist.
Never said that it did. In fact, I explicitly said that this issue of
sampling of Nyquist frequency sinusoids has nothing to do with that.
>You cannot calculate 1/x when x=0, can you? Since that's division by zero.
>Yet you'll know when x tends to zero from right towards left, then 1/x
>will tend to +infinity.
Not sure what that is supposed to have to do with the present subject.
If you want to put it in terms of simple arithmetic,
On 18/08/2015, Ethan Duni wrote:
>>In order to reconstruct that sinusoid, you'll need a filter with
>>an infinitely steep transition band.
>
> No, even an ideal reconstruction filter won't do it. You've got your
> +Nyquist component sitting right on top of your -Nyquist component. Hence
> the aliasing. The information has been lost in the sampling.
On 8/18/15 5:01 PM, Emily Litella wrote:
... Never mind.
too late.
:-)
--
r b-j r...@audioimagination.com
"Imagination is more important than knowledge."
On 18/08/2015, Ethan Duni wrote:
>
> That class of signals is band limited to SR/2. The aliasing is in the
> amplitude/phase offset, not the frequency.
Okay, I get what you mean. But that doesn't change the frequency
response of a half-sample delay, or doesn't mean that a half-sample
delay doesn't have a specific gain at Nyquist.
On 8/18/15 4:50 PM, Nigel Redmon wrote:
I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert's,
hence the “No?” in my reply to him).
The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB
at 0.5 of the sample rate...
i will try to spell out my point
>In order to reconstruct that sinusoid, you'll need a filter with
>an infinitely steep transition band.
No, even an ideal reconstruction filter won't do it. You've got your
+Nyquist component sitting right on top of your -Nyquist component. Hence
the aliasing. The information has been lost in the sampling.
>> well Peter, here again is where you overreach. assuming, without loss
>> of generality that the sampling period is 1, the continuous-time signals
>>
>> x(t) = 1/cos(theta) * cos(pi*t + theta)
>>
>> are all aliases for the signal described above (and incorrectly as
>> "contain[ing] no alia
OK, I looked back at Robert’s post, and see that the fact his reply was broken
up into segments (as he replied to segments of Peter’s comment) made me miss
his point. At first he was talking general (pitch shifting), but at that point
he was talking about strictly sliding into halfway between samples.
On 18/08/2015, Tom Duffy wrote:
> In order to reconstruct that sinusoid, you'll need a filter with
> an infinitely steep transition band.
I can use an arbitrarily long sinc kernel to reconstruct / interpolate
it. Therefore, for any desired precision, you can find an appropriate
sinc kernel length
On 18/08/2015, robert bristow-johnson wrote:
> On 8/18/15 4:28 PM, Peter S wrote:
>>
>> 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
>> and contains no aliasing. That's the maximal allowed frequency without
>> any aliasing.
>
> well Peter, here again is where you overreach. assuming, without loss
> of generality that the sampling period is 1, the continuous-time signals
> x(t) = 1/cos(theta) * cos(pi*t + theta) are all aliases for the signal
> described above
I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert's,
hence the “No?” in my reply to him).
The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB
at 0.5 of the sample rate...
> On Aug 18, 2015, at 1:40 AM, Peter S wrote:
>
> On 18/08/2015, Nige
>What's causing you to be unable to reconstruct the waveform?
There are an infinite number of different nyquist-frequency sinusoids that,
when sampled, will all give the same ...,1, -1, 1, -1, ... sequence of
samples. The sampling is a many-to-one mapping in that case, and so cannot
be inverted.
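A minimal numerical illustration (numpy sketch, my own code, using the
x(t) = 1/cos(theta) * cos(pi*t + theta) family quoted elsewhere in the
thread):

    import numpy as np

    n = np.arange(8)                       # sampling period 1
    for theta in [0.0, 0.3, 0.6]:
        x = np.cos(np.pi * n + theta) / np.cos(theta)
        print(np.round(x, 6))
    # Every theta prints the same 1, -1, 1, -1, ... sequence, so the
    # sampled values cannot determine which sinusoid produced them.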
On 18/08/2015, Ethan Duni wrote:
>
> But the example of the weird things that can happen when you try to sample
> a sine wave right at the nyquist rate and then process it is orthogonal to
> that point.
That's not weird, and that's *exactly* what you have in the highest
bin of an FFT.
The signal
In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
That digital stream of samples is not reconstructable.
On 8/18/2015 1:28 PM, Peter S wrote:
That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing.
On 8/18/15 4:28 PM, Peter S wrote:
1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing.
well Peter, here again is where you overreach. assuming, without loss
of generality that the sampling period is 1, the continuous-time signals
x(t) = 1/cos(theta) * cos(pi*t + theta) are all aliases for the signal
described above
On 18/08/2015, robert bristow-johnson wrote:
>
> *my* point is that as the delay slowly slides from a integer number of
> samples [...] to the integer + 1/2 sample (with gain above), this linear but
> time-variant system is going to sound like there is a LPF getting segued
> in.
Exactly. As the f
>*my* point is that as the delay slowly slides from a integer number of
samples, where the transfer function is
>
> H(z) = z^-N
>
>to the integer + 1/2 sample (with gain above), this linear but
time-variant system is going to sound like there is a LPF getting segued in.
>
>this, for me, is enough
On 18/08/2015, Ethan Duni wrote:
>>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
> -1...
>
> The sampling theorem requires that all frequencies be *below* the Nyquist
> frequency. Sampling signals at exactly the Nyquist frequency is an edge
> case that sort-of works in some limited special cases.
On 8/18/15 3:44 PM, Ethan Duni wrote:
>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1,
1, -1...
The sampling theorem requires that all frequencies be *below* the
Nyquist frequency. Sampling signals at exactly the Nyquist frequency
is an edge case that sort-of works in some limited special cases.
>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
-1...
The sampling theorem requires that all frequencies be *below* the Nyquist
frequency. Sampling signals at exactly the Nyquist frequency is an edge
case that sort-of works in some limited special cases, but there is no
e
On 8/18/2015 6:41 AM, Jerry wrote:
I would think that polynomial interpolators of order 30 or 40 would
provide no end of unpleasant surprises due to the behavior of
high-order polynomials. I'm thinking of weird spikes, etc. Have you
actually used polynomial interpolators of this order?
I re
On Aug 17, 2015, at 9:38 AM, Esteban Maestre wrote:
> No experience with compensation filters here.
> But if you can afford to use a higher order interpolation scheme, I'd go for
> that.
>
> Using Newton's Backward Difference Formula, one can construct time-varying,
> table-free, efficient Lagrange interpolators.
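As a rough illustration of the table-free idea (plain Python; this is
a generic third-order Lagrange form with coefficients computed at
runtime, my own sketch rather than Esteban's actual scheme):

    def lagrange4(x0, x1, x2, x3, d):
        # Interpolate between x1 and x2 for fractional delay 0 <= d < 1;
        # sample positions are -1, 0, 1, 2 relative to x1.
        c0 = -d * (d - 1) * (d - 2) / 6
        c1 = (d + 1) * (d - 1) * (d - 2) / 2
        c2 = -(d + 1) * d * (d - 2) / 2
        c3 = (d + 1) * d * (d - 1) / 6
        return c0 * x0 + c1 * x1 + c2 * x2 + c3 * x3

    print(lagrange4(0.0, 1.0, 0.0, -1.0, 0.0))   # d=0 returns x1 exactly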
On 18/08/2015, Nigel Redmon wrote:
>>
>> well, if it's linear interpolation and your fractional delay slowly sweeps
>> from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
>> in. something like -7.8 dB at Nyquist. no, that's not right. it's -inf
>> dB at Nyquist. pretty serious.
> On Aug 17, 2015, at 7:23 PM, robert bristow-johnson
> wrote:
>
> On 8/17/15 7:29 PM, Sampo Syreeni wrote:
>>
>>> to me, it really depends on if you're doing a slowly-varying precision
>>> delay in which the pre-emphasis might also be slowly varying.
>>
>> In slowly varying delay it ought t
On 8/17/15 7:29 PM, Sampo Syreeni wrote:
On 2015-08-17, robert bristow-johnson wrote:
As I noted in the first reply to this thread, while it’s tempting to
look at the sinc^2 rolloff of a linear interpolator, for example, and
think that compensation would be to boost the highs to undo the
rolloff, that won’t work in the general case.
Thanks for the suggestions and discussion.
In my application I'm playing back 44.1khz wavefiles with variable pitch
envelopes. I'm currently using hermite interpolation and the quality
seems fine for playback. It's only after resampling and running through
the audio engine multiple times does
And to add to what Robert said about “write code and sell it”, sometimes it’s
more comfortable to make general but helpful comments here, and stop short of
detailing the code that someone paid you a bunch of money for and might not
want to be generally known.
And before people assume that I mea
On 2015-08-17, robert bristow-johnson wrote:
As I noted in the first reply to this thread, while it’s tempting to
look at the sinc^2 rolloff of a linear interpolator, for example, and
think that compensation would be to boost the highs to undo the
rolloff, that won’t work in the general case. E