Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Sampo Syreeni

On 2015-08-16, Sham Beam wrote:

Is it possible to use a filter to compensate for high frequency signal 
loss due to interpolation? For example linear or hermite 
interpolation.


Are there any papers that detail what such a filter might look like?


Look at Vesa Välimäki's work, and his students'. They did fractional-delay 
delay lines, which had just this problem in the high end. Also, 
Julius O. Smith's work with waveguides bumped into this very same thing, 
because they're implemented as (fractional) delay lines as well. Beyond 
that, most reverb designers could tell you about this sort of thing, 
only they tend to keep their secret sauce *most* secret. ;)


The usual thing you do is to go for higher order interpolation, with the 
interpolating polynomial being designed for flatter performance over the 
utility band than the linear spline. It's already very much better at 
3rd order, and if you do something like 4th to 5th order with 2x 
oversampling, it's essentially perfect.
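To put a number on "very much better at 3rd order", here is a small sketch comparing linear interpolation against a 3rd-order 4-point kernel at the worst-case fractional position. Catmull-Rom is used as the representative 3rd-order interpolator (an assumption; any of the cubics discussed in the thread would do for the comparison):

```python
import cmath
import math

def response(coeffs, delays, w):
    """|H(e^jw)| of an FIR with taps `coeffs` at integer `delays`."""
    return abs(sum(c * cmath.exp(-1j * w * d) for c, d in zip(coeffs, delays)))

def linear_taps(t):
    # y[n] = (1 - t) * x[n] + t * x[n - 1]
    return [1 - t, t], [0, 1]

def catmull_rom_taps(t):
    # 4-point, 3rd-order (Catmull-Rom) interpolation coefficients
    c = [(-t**3 + 2*t**2 - t) / 2,
         (3*t**3 - 5*t**2 + 2) / 2,
         (-3*t**3 + 4*t**2 + t) / 2,
         (t**3 - t**2) / 2]
    return c, [-1, 0, 1, 2]

w = 0.8 * math.pi   # a high frequency: 0.4 of the sample rate
t = 0.5             # worst-case fractional position
lin = response(*linear_taps(t), w)
cub = response(*catmull_rom_taps(t), w)
print(lin, cub)     # the cubic keeps noticeably more level here
```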

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Nigel Redmon
As far as compensation: Taking linear as an example, we know that the response 
rolls off ("sinc^2"). Would you compensate by boosting the highs? Consider that 
for a linearly interpolated delay line, a delay of an integer number of 
samples, i, has no high frequency loss at all, but the error is at its maximum 
if you need a delay of i + 0.5 samples. More difficult to compensate for would 
be such a delay line where the delay time is modulated.
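A quick numerical check of that fraction dependence. This sketch models the interpolator as the two-tap FIR y[n] = (1 - frac)·x[n] + frac·x[n-1] and evaluates its magnitude at the Nyquist frequency:

```python
import cmath
import math

def linear_interp_response(frac, w):
    """Frequency response of a linear interpolator reading at
    fractional position `frac` (0..1); `w` is radians per sample."""
    # y[n] = (1 - frac) * x[n] + frac * x[n - 1]
    return (1 - frac) + frac * cmath.exp(-1j * w)

nyquist = math.pi
# Integer delay (frac = 0): no high-frequency loss at all.
print(abs(linear_interp_response(0.0, nyquist)))  # 1.0
# Halfway between samples: a complete null at Nyquist.
print(abs(linear_interp_response(0.5, nyquist)))  # ~0.0
```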

A well-published way of getting around the fractional problem is allpass 
compensation. But a lot of people seem to miss that this method doesn’t lend 
itself to modulation—it’s ideally suited for a fixed fractional delay. Here’s a 
paper that shows one possible solution, crossfading two allpass filters:

http://scandalis.com/jarrah/Documents/DelayLine.pdf
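For reference, the basic (non-crossfaded) building block looks like the following sketch: a first-order allpass fractional delay with the standard coefficient choice a = (1 - frac)/(1 + frac), which gives exactly `frac` samples of phase delay at DC and unity magnitude everywhere. The crossfaded two-allpass structure from the paper is not implemented here, and the fixed-fraction caveat above applies:

```python
class AllpassFD:
    """First-order allpass fractional delay.  Magnitude is exactly
    flat (the appeal over linear interpolation); phase delay at DC
    equals `frac`.  Assumes `frac` is fixed: modulating it causes
    transients, which is the limitation discussed above."""
    def __init__(self, frac):
        self.a = (1.0 - frac) / (1.0 + frac)
        self.x1 = 0.0  # previous input
        self.y1 = 0.0  # previous output

    def process(self, x):
        # H(z) = (a + z^-1) / (1 + a z^-1)
        y = self.a * (x - self.y1) + self.x1
        self.x1, self.y1 = x, y
        return y
```

Feeding a constant into the filter converges to that same constant, confirming the unity DC gain.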

Obviously, the most straightforward way to avoid the problem is to convert to 
a higher sample rate going into the delay line (using windowed sinc, etc.), 
then use linear, hermite, etc.


> On Aug 16, 2015, at 1:09 AM, Sham Beam wrote:
> 
> Is it possible to use a filter to compensate for high frequency signal loss 
> due to interpolation? For example linear or hermite interpolation.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread robert bristow-johnson

On 8/16/15 4:09 AM, Sham Beam wrote:

> Is it possible to use a filter to compensate for high frequency signal 
> loss due to interpolation? For example linear or hermite interpolation.



besides the well-known sinc^2 rolloff that comes with linear 
interpolation, this paper discusses interpolation using higher-order 
B-splines.


Udo Zölzer and Thomas Bolze, "Interpolation Algorithms: Theory and 
Application," http://www.aes.org/e-lib/browse.cfm?elib=6334 .  i thought 
i had a copy of it, Duane Wise and i referenced the paper in our 
2-decade-old paper about different polynomial interpolation effects and 
which were better for what.  (you can have a .pdf of that paper if you 
want.)


but Zölzer and Bolze (they might be hanging out on this list, i was 
pleasantly surprised to see JOS post here recently) *do* discuss 
pre-compensation of high-frequency rolloff due to interpolation 
polynomials that cause such rolloff.  you just design a filter (using 
MATLAB or whatever) that has magnitude response that is, in the 
frequency band of interest, approximately the reciprocal of the rolloff 
effect from the interpolation.
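As a minimal sketch of that design idea (not the Zölzer/Bolze filter itself): a tiny symmetric 3-tap FIR h = [-a, 1+2a, -a] has magnitude H(w) = 1 + 4a·sin²(w/2), a gentle high boost; pick `a` so it matches the reciprocal of the sinc² rolloff exactly at one chosen frequency (fs/4 here, an arbitrary assumption), and the match is approximate across the rest of the band:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi x) / (pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Linear interpolation's rolloff discussed in the thread goes as
# sinc^2(f/fs).  Match the 3-tap boost to 1/sinc^2 at f0 = fs/4.
f0 = 0.25                      # match frequency in units of fs (assumed)
target = 1.0 / sinc(f0) ** 2   # desired gain at f0
w0 = 2 * math.pi * f0
a = (target - 1.0) / (4 * math.sin(w0 / 2) ** 2)
h = [-a, 1 + 2 * a, -a]        # zero-phase compensation taps

def H(w):
    """Magnitude response of the 3-tap compensator."""
    return 1 + 4 * a * math.sin(w / 2) ** 2

print(h, H(w0), target)  # H(w0) equals the target by construction
```

A real design would use more taps (e.g. a least-squares fit in MATLAB, as the post suggests) and would stop boosting before Nyquist.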


Zölzer and Bolze suggested Nth-order B-spline without really justifying 
why that is better than other polynomial kernels such as Lagrange or 
Hermite.  the Nth-order B-spline (at least how it was shown in their 
paper), is what you get when you convolve N+1 unit-rectangular functions 
with each other (i.e. N+1 zero-order holds).  the frequency response of an 
Nth-order B-spline is sinc^(N+1).  this puts really deep and wide 
notches at integer multiples of the original sampling frequency (other 
than the integer 0) which is where all those images are that you want to 
kill.  linear interpolation is all of a 1st-order Lagrange, a 1st-order 
Hermite, and a 1st-order B-spline (and the ZOH or "drop-sample" 
interpolation is a 0th-order realization of those three).


an Nth-order polynomial interpolator will have, somewhere in the 
frequency response, a H(f) = (sinc(f/Fs))^(N+1) in there, but if it's 
not the simple B-spline, there will be lower order terms of sinc() that 
will add to (contaminate) the highest order sinc^(N+1) and make those 
notches less wide.  any other polynomial interpolation (at higher order 
than 1), will have at least one sinc() term with lower order than N+1.


so the cool thing about interpolating with B-splines is that it kills 
the images (which become aliases when you resample) the most, but it 
also has wicked LPFing that needs to be compensated unless your sampling 
frequency is *much* higher than twice the bandwidth (oversampled 
big-time).  but if you *are* experiencing that LPFing, as you have 
suspected, you can design a filter to undo that for much of the 
baseband.  not all of it.
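To put numbers on the notch-depth point, a sketch comparing the image attenuation of linear interpolation (sinc²) and a cubic B-spline (sinc⁴) just inside the first notch centered at the original sample rate:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi x) / (pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# f is in units of the original sample rate fs; the notch center is
# f = 1.0, and f = 0.9 is just inside it, where images still live.
f = 0.9
lin_db = 20 * math.log10(abs(sinc(f)) ** 2)   # linear interpolation
bsp3_db = 20 * math.log10(abs(sinc(f)) ** 4)  # cubic B-spline
print(lin_db, bsp3_db)  # the B-spline notch is twice as deep in dB
```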


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Robin Whittle
Hi Shannon,

If the number of reads from the delay line per sample cycle is high
enough, you might like to consider a less expensive alternative to the
most obvious solution (higher order "interpolation" based on multiple
samples before and after, with some fancy set of coefficients calculated
on the spot, or looked up from a table of sufficiently high resolution,
depending on the fraction of a sample delay involved): upsample the
input signal to twice the normal rate, and then do simple linear
interpolation.

This would not be mathematically perfect, since the high frequency
response would be slightly reduced if the delay fraction was 0.25 or
0.75, whereas it would be flat for 0, and as flat as the upsampling
algorithm for 0.5 (I recall that the upsampling algorithm produces the
odd numbered samples in the final output, with the even ones being the
input samples).  However, assuming the sampling rate is 44.1kHz, 48kHz
or higher, I think this slight variation is unlikely to be perceptible
to human ears.
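Assuming an ideal 2x upsampler, the residual loss of the upsample-then-linear scheme can be computed directly: the original Nyquist frequency lands at w = pi/2 of the new rate, and an original-rate fraction of 0.25 becomes the worst-case fraction 0.5 at the 2x rate:

```python
import cmath
import math

def lin_resp(frac, w):
    """Magnitude of a linear interpolator between adjacent samples
    at the upsampled (2x) rate; w is radians per 2x-rate sample."""
    return abs((1 - frac) + frac * cmath.exp(-1j * w))

# Original Nyquist sits at pi/2 of the 2x rate; worst fraction is 0.5.
w_orig_nyquist = math.pi / 2
worst = lin_resp(0.5, w_orig_nyquist)
print(20 * math.log10(worst))  # about -3 dB at the very top of the band
```

The loss shrinks rapidly below the band edge, which is consistent with the "unlikely to be perceptible" judgment above for 44.1 kHz material.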

  Robin


On 2015-08-16 6:09 PM, Sham Beam wrote:
> Is it possible to use a filter to compensate for high frequency signal
> loss due to interpolation? For example linear or hermite interpolation.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-16 Thread Marcus Hobbs

Is this Robin Whittle of Devilfish fame?  I bought a Devilfish from you back in 
the mid-1990s.  Best mod ever!

> On Aug 16, 2015, at 8:07 PM, Robin Whittle wrote:
> 
> > [...] you might like to consider
> > upsampling the input signal to twice the normal rate, and then doing
> > simple linear interpolation.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Theo Verelst


For a list that includes scientifically oriented people, it always 
surprises me how little actual science is involved in this talk about 
tradeoffs.


First, what is it you want to achieve by preserving high frequencies 
(which of course I'm all for)? Second, is it really only at the level of 
first-order interpolation? And if so, isn't the compensation 
interpolation much more expensive than a solution that tries to qualify, 
and preferably quantify, the errors involved?


Using least squares and error estimates is a bit too easy for our 
sampling issues, because at least the mid and high frequencies get 
interpreted by the DAC reconstruction filter, by subsequent digital 
signal processing, or, as I prefer, by the perfect-reconstruction 
interpretation of the resulting digital signal streams.


IMO, high frequencies will be best served by leaving them alone as much 
as possible, and honoring the studio and post processing that has 
checked them out and pre-averaged them for normal sound reproduction. 
However, no one here besides RBJ and a few brave souls seems to even 
care much about real subjects.


Now, I get it: everyone has a sound card and endless supplies of digital 
material, and from my student efforts I recall it is fun to understand 
the theory of interpolation curves (and (hyper-)surfaces), but 
unfortunately they correlate only very loosely with useful sampled-signal 
theory, unless you want an effort for a particular niche.


T.V.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread STEFFAN DIEDRICHSEN
I could write a few lines on the topic as well, since I made such a 
compensation filter about 17 years ago. 
So there are people that do care about the topic, but only some 
that find the time to write something up. 

;-)

Steffan 


> On 17.08.2015|KW34, at 17:50, Theo Verelst  wrote:
> 
> However, no one here besides RBJ and a few brave souls seems to even care 
> much about real subjects.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Esteban Maestre

No experience with compensation filters here.
But if you can afford to use a higher order interpolation scheme, I'd go 
for that.


Using Newton's Backward Difference Formula, one can construct 
time-varying, table-free, efficient Lagrange interpolation schemes of 
arbitrary order (up to 30th or 40th order) which stay within linear 
complexity while allowing for run-time modulation of the interpolation 
order.


https://ccrma.stanford.edu/~jos/Interpolation/Lagrange_Interpolation.html
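To make the coefficients concrete, here is a sketch using the direct Lagrange product formula (not the more efficient Newton backward-difference form described on the JOS page above; this is just the textbook definition):

```python
def lagrange_fd(order, delay):
    """Lagrange fractional-delay FIR coefficients via the direct
    product formula h[n] = prod_{k != n} (delay - k) / (n - k).
    `delay` should sit near the middle of [0, order] for best
    accuracy, as is standard for Lagrange fractional delay."""
    h = []
    for n in range(order + 1):
        c = 1.0
        for k in range(order + 1):
            if k != n:
                c *= (delay - k) / (n - k)
        h.append(c)
    return h

# 3rd-order taps for a mid-point delay; note they are symmetric.
print(lagrange_fd(3, 1.5))
```

For an integer delay the taps collapse to a single 1.0 (no filtering at all), which mirrors the no-loss-at-integer-delay observation earlier in the thread.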

Cheers,
Esteban


On 8/17/2015 12:07 PM, STEFFAN DIEDRICHSEN wrote:
> I could write a few lines over the topic as well, since I made such a 
> compensation filter about 17 years ago.


--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon
Since compensation filtering has been mentioned by a few, can I ask if someone 
could get specific on an implementation (including a description of constraints 
under which it operates)? I’d prefer keeping it simple by restricting to linear 
interpolation, where it’s most needed, and perhaps these comments will make 
clearer what I’m after:

As I noted in the first reply to this thread, while it’s tempting to look at the 
sinc^2 rolloff of a linear interpolator, for example, and think that 
compensation would be to boost the highs to undo the rolloff, that won’t work 
in the general case. Even in Olli Niemitalo’s most excellent paper on 
interpolation methods 
(http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems to 
suggest doing this with pre-emphasis, which seems to be a mistake, unless I 
misunderstood his intentions.

In Olli’s case, he specifically recommended pre-emphasis (which I believe will 
not work except for special cases of resampling at fixed fractions between real 
samples) over post, as post becomes more complicated. (It seems that you could 
do it post, taking into account the fractional part of a particular lookup and 
avoiding the use of recursive filters—personally I’d just upsample to begin 
with.)

It just occurred to me that perhaps one possible implementation is to 
cross-fade between a pre-emphasized and normal delay line, depending on the 
fractional position (0.5 gets all pre-emph, 0.0 gets all normal). This sort of 
thing didn’t seem to be what Olli was getting at, since he only gave the 
worst-case rolloff curve and didn’t discuss it further.
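A toy sketch of that crossfade idea, with everything hypothetical: it assumes you already have a plain delay-line read and a pre-emphasized one (the pre-emphasis filter itself is not shown), and blends them by the fractional position:

```python
def crossfade_read(plain, emphasized, frac):
    """Blend a plain and a pre-emphasized delay-line read by the
    fractional position: no emphasis at frac = 0 (or 1), full
    emphasis at frac = 0.5, where linear interpolation's rolloff
    is worst.  `plain` and `emphasized` are the two already-
    interpolated samples; this is a sketch of the idea floated
    above, not a known published design."""
    weight = 1.0 - 2.0 * abs(frac - 0.5)  # 0 at 0 and 1, 1 at 0.5
    return (1.0 - weight) * plain + weight * emphasized

print(crossfade_read(1.0, 2.0, 0.0), crossfade_read(1.0, 2.0, 0.5))
```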

I’m not asking because I need to do this—I’m asking for the sake of the thread, 
where people are talking about compensation, but not explaining the 
implementation they have in mind, and not necessarily explaining the conditions 
under which it works.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Ethan Duni
Yeah I am also curious. It's not obvious to me where it would make sense to
spend resources compensating for interpolation rather than just juicing up
the interpolation scheme in the first place.

E

On Mon, Aug 17, 2015 at 11:39 AM, Nigel Redmon 
wrote:

> Since compensation filtering has been mentioned by a few, can I ask if
> someone could get specific on an implementation (including a description of
> constraints under which it operates)?

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread robert bristow-johnson

On 8/17/15 12:07 PM, STEFFAN DIEDRICHSEN wrote:
> I could write a few lines over the topic as well, since I made such a 
> compensation filter about 17 years ago.
> 
>> On 17.08.2015|KW34, at 17:50, Theo Verelst wrote:
>> 
>> However, no one here besides RBJ and a few brave souls seems to even 
>> care much about real subjects.


Theo, there are a lotta heavyweights here (like Steffan).  if you want a 
3-letter acronym to toss around, try "JOS".   i think there are plenty 
on this list that care deeply about reality because they write code and 
sell it.  my soul is chicken-shit in the context.


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Peter S
On 17/08/2015, STEFFAN DIEDRICHSEN  wrote:
> I could write a few lines over the topic as well, since I made such a
> compensation filter about 17 years ago.
> So, there are people, that do care about that topic, but there are only
> some, that do find time to write up something.

I also made a compensation filter for linear interpolation. Definitely doable.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread robert bristow-johnson

On 8/17/15 2:39 PM, Nigel Redmon wrote:

> Since compensation filtering has been mentioned by a few, can I ask if someone 
> could get specific on an implementation (including a description of constraints 
> under which it operates)?
> 
> In Olli’s case, he specifically recommended pre-emphasis (which I believe will 
> not work except for special cases of resampling at fixed fractions between real 
> samples) over post, as post becomes more complicated.


to me, it really depends on if you're doing a slowly-varying precision 
delay in which the pre-emphasis might also be slowly varying.


but if the application is sample-rate conversion or similar (like pitch 
shifting) where the fractional delay is varying all over the place, i 
think a fixed compensation for sinc^2 might be a good idea.  i don't see 
how it would hurt.  especially for the over-sampled case.


i like Olli's "pink-elephant" paper, too.  and i think (since he was 
picking up on Duane's and my old and incomplete paper) it was more about 
the fast-varying fractional delay.  and i think that the Zölzer/Bolze 
paper suggested the same thing (since it was even "worse" than linear 
interp).



--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon
OK, Robert, I did consider the slow versus fast issue. But there have been few 
caveats posted in this thread, so I thought it might be misleading to some to 
not be specific about context. The worst case would be a precision delay of an 
arbitrary constant. (For example, at 44.1 kHz SR, there would be a significant 
frequency response difference between 250 ms and 250.01 ms, despite no 
perceptible difference in time. Of course, in some cases, even when using 
interpolated delays, you can quantize the delay time to a sample boundary—say 
if modulation is transient and the steady state is the main concern.)

So, yes, the context means a lot, so we should be clear. (And can you tell I’m 
doing something with delays right now?)

Personally, I’m a fan of upsampling, when needed.


> On Aug 17, 2015, at 1:55 PM, robert bristow-johnson wrote:
> 
> > to me, it really depends on if you're doing a slowly-varying precision delay 
> > in which the pre-emphasis might also be slowly varying.
> > 
> > but if the application is sample-rate conversion or similar (like pitch 
> > shifting) where the fractional delay is varying all over the place, i think a 
> > fixed compensation for sinc^2 might be a good idea.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Sampo Syreeni

On 2015-08-17, robert bristow-johnson wrote:

As I noted in the first reply to this thread, while it’s tempting to 
look at the sinc^2 rolloff of a linear interpolator, for example, and 
think that compensation would be to boost the highs to undo the 
rolloff, that won’t work in the general case. Even in Olli Niemitalo’s 
most excellent paper on interpolation methods 
(http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems 
to suggest doing this with pre-emphasis, which seems to be a mistake, 
unless I misunderstood his intentions.


Actually it's not that simple. Substandard interpolation methods do lead 
to high frequency rolloff, which can be corrected to a degree with a 
complementary filter. But the trouble is, at the same time they lead to 
aliasing and even nonlinear artifacts, whose high frequency content will 
be amplified by the compensatory filter as well. As such, that approach 
is basically sound...but at the same time only within a very narrowly 
parametrized envelope.


to me, it really depends on if you're doing a slowly-varying precision 
delay in which the pre-emphasis might also be slowly varying.


In slowly varying delay it ought to work no matter what.

but if the application is sample-rate conversion or similar (like 
pitch shifting) where the fractional delay is varying all over the 
place, i think a fixed compensation for sinc^2 might be a good idea. 
i don't see how it would hurt. especially for the over-sampled case.


It doesn't necessarily hurt, but here it isn't guaranteed to do any good 
either. And it's close to doing something bad instead.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon
And to add to what Robert said about “write code and sell it”, sometimes it’s 
more comfortable to make general but helpful comments here, and stop short of 
detailing the code that someone paid you a bunch of money for and might not 
want to be generally known.

And before people assume that I mean strictly “keep the secret sauce secret”, 
there’s also the fact that marketing might not want it known that every detail 
of their expensive plug-in is not 256x oversampled, 128-bit floating point data 
path throughout, dithered every stage. :-D

> On Aug 17, 2015, at 1:46 PM, robert bristow-johnson wrote:
> 
> > Theo, there are a lotta heavyweights here (like Steffan).  if you want a 
> > 3-letter acronym to toss around, try "JOS".   i think there are plenty on 
> > this list that care deeply about reality because they write code and sell it. 
> >  my soul is chicken-shit in the context.



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Sham Beam

Thanks for the suggestions and discussion.

In my application I'm playing back 44.1 kHz wave files with variable pitch 
envelopes. I'm currently using hermite interpolation and the quality 
seems fine for playback. It's only after resampling and running through 
the audio engine multiple times that the high frequency roll-off becomes 
a problem. I'll try adding in some oversampling.



Cheers
Shannon


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread robert bristow-johnson

On 8/17/15 7:29 PM, Sampo Syreeni wrote:

On 2015-08-17, robert bristow-johnson wrote:

As I noted in the first reply to this thread, while it’s tempting to 
look at the sinc^2 rolloff of a linear interpolator, for example, and 
think that compensation would be to boost the highs to undo the 
rolloff, that won’t work in the general case. Even in Olli 
Niemitalo’s most excellent paper on interpolation methods 
(http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf), he seems 
to suggest doing this with pre-emphasis, which seems to be a mistake, 
unless I misunderstood his intentions.


i don't think i wrote that.   (but i did write the below quotes.)


Actually it's not that simple.


correct.  there is more to interpolation than filtering.

Substandard interpolation methods do lead to high frequency rolloff, 
which can be corrected to a degree with a complementary filter. But 
the trouble is, at the same time they lead to aliasing


well, i guess that's a given in any case.  if aliasing wasn't a worry, 
we'd all just be doing drop-sample interpolation, i guess.


and even nonlinear artifacts, whose high frequency content will be 
amplified by the compensatory filter as well. As such, that approach 
is basically sound...but at the same time only within a very narrowly 
parametrized envelope.


yes.  a little bit of 1/sinc^2 until you get to some high frequency and 
then a hard LPF cutoff.  easier to do if you're oversampled to begin with.




to me, it really depends on if you're doing a slowly-varying 
precision delay in which the pre-emphasis might also be slowly varying.


In slowly varying delay it ought to work no matter what.



well, if it's linear interpolation and your fractional delay slowly 
sweeps from 0 to 1/2 sample, i think you may very well hear a LPF start 
to kick in.  something like -7.8 dB at Nyquist.  no, that's not right.  
it's -inf dB at Nyquist.  pretty serious LPF to just slide into.


but if the application is sample-rate conversion or similar (like 
pitch shifting) where the fractional delay is varying all over the 
place, i think a fixed compensation for sinc^2 might be a good idea. 
i don't see how it would hurt. especially for the over-sampled case.


It doesn't necessarily hurt, but here it isn't guaranteed to do any 
good either.


long ago, in my Eventide days, i did a pitch shifter using linear 
interpolation (and linear crossfading).  cheap and "dirty".  took only 
about 50 instructions per sample in the 56K.  guitars were pretty LPF to 
begin with, but with other broadbanded input, even my 
rock-n-roll-damaged ears could hear the rolloff.



And it's close to doing something bad instead.


well, you don't wanna compensate for -inf dB, that's bad.  but you might 
wanna try to compensate the lower baseband frequencies, at least a 
little bit.


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Nigel Redmon

> On Aug 17, 2015, at 7:23 PM, robert bristow-johnson 
>  wrote:
> 
> On 8/17/15 7:29 PM, Sampo Syreeni wrote:
>> 
>>> to me, it really depends on if you're doing a slowly-varying precision 
>>> delay in which the pre-emphasis might also be slowly varying.
>> 
>> In slowly varying delay it ought to work no matter what.
> 
> well, if it's linear interpolation and your fractional delay slowly sweeps 
> from 0 to 1/2 sample, i think you may very well hear a LPF start to kick in.  
> something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf dB at 
> Nyquist.  pretty serious LPF to just slide into.

Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling 
frequency. No?




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Nigel Redmon  wrote:
>>
>> well, if it's linear interpolation and your fractional delay slowly sweeps
>> from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
>> in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
>> dB at Nyquist.  pretty serious LPF to just slide into.
>
> Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling
> frequency. No?

-Inf at Nyquist when you're halfway between two samples.

Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...
After interpolating with fraction=0.5, it becomes a constant signal
0,0,0,0,0,0,0...
(because (-1+1)/2 = 0)
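Peter's arithmetic is easy to check numerically; a tiny sketch (numpy just for convenience):

```python
import numpy as np

x = np.array([1.0, -1.0] * 8)     # the Nyquist-frequency signal 1, -1, 1, -1, ...

def frac_delay_linear(x, d):
    # linear-interpolation fractional delay, 0 <= d < 1
    return (1 - d) * x[1:] + d * x[:-1]

print(frac_delay_linear(x, 0.0))  # d = 0: the samples pass through untouched
print(frac_delay_linear(x, 0.5))  # d = 1/2: every output is (1 + -1)/2 = 0
```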


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Jerry

On Aug 17, 2015, at 9:38 AM, Esteban Maestre  wrote:

> No experience with compensation filters here. 
> But if you can afford to use a higher order interpolation scheme, I'd go for 
> that.
> 
> Using Newton's Backward Difference Formula, one can construct time-varying, 
> table-free, efficient Lagrange interpolation schemes of arbitrary order (up 
> to 30-th or 40-th order) which stay within linear complexity while allowing 
> for run-time modulation of the interpolation order.
> 
> https://ccrma.stanford.edu/~jos/Interpolation/Lagrange_Interpolation.html
> 
> Cheers,
> Esteban

I would think that polynomial interpolators of order 30 or 40 would provide no 
end of unpleasant surprises due to the behavior of high-order polynomials. I'm 
thinking of weird spikes, etc. Have you actually used polynomial interpolators 
of this order?

Jerry
> 
> 
> On 8/17/2015 12:07 PM, STEFFAN DIEDRICHSEN wrote:
>> I could write a few lines over the topic as well, since I made such a 
>> compensation filter about 17 years ago. 
>> So, there are people, that do care about that topic, but there are only 
>> some, that do find time to write up something. 
>> 
>> ;-)
>> 
>> Steffan 
>> 
>> 
>>> On 17.08.2015|KW34, at 17:50, Theo Verelst  wrote:
>>> 
>>> However, no one here besides RBJ and a few brave souls seems to even care 
>>> much about real subjects.
>> 
>> 
>> 
> 
> -- 
> 
> Esteban Maestre
> CIRMMT/CAML - McGill Univ
> MTG - Univ Pompeu Fabra
> http://ccrma.stanford.edu/~esteban 

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Esteban Maestre



On 8/18/2015 6:41 AM, Jerry wrote:
I would think that polynomial interpolators of order 30 or 40 would 
provide no end of unpleasant surprises due to the behavior of 
high-order polynomials. I'm thinking of weird spikes, etc. Have you 
actually used polynomial interpolators of this order?


I remember going even above 40-th order with no problems.
But I also remember having problems with 80-th order interpolation.
I think it's called /Runge's phenomenon/.

Esteban



--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban
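For the curious, here is the plain product form of the Lagrange fractional-delay coefficients (the JOS page linked above derives it). This is not Esteban's Newton-backward-difference formulation, just a direct, unoptimized sketch:

```python
import numpy as np

def lagrange_fd(N, d):
    # order-N Lagrange fractional-delay FIR for a total delay of d samples:
    # h[n] = product over k != n of (d - k) / (n - k)
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        m = n != k
        h[m] *= (d - k) / (n[m] - k)
    return h

print(lagrange_fd(1, 0.5))   # order 1 is plain linear interpolation: [0.5 0.5]
print(lagrange_fd(3, 1.5))   # cubic, centered: [-0.0625 0.5625 0.5625 -0.0625]
```

The taps always sum to 1 (unity DC gain), and for best flatness d should sit near the center of the tap span, as with the cubic example above.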


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...

The sampling theorem requires that all frequencies be *below* the Nyquist
frequency. Sampling signals at exactly the Nyquist frequency is an edge
case that sort-of works in some limited special cases, but there is no
expectation that digital processing of such a signal is going to work
properly in general.

But even given that, the interpolator outputting the zero signal in that
case is exactly correct. That's what you would have gotten if you'd sampled
the same sine wave (*not* square wave - that would imply frequencies above
Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case. The
incorrect behavior arises when you try to go in the other direction (i.e.,
apply a second half-sample delay), and you still get only DC.

But, again, that doesn't really say anything about interpolation. It just
says that you sampled the signal improperly in the first place, and so
digital processing can't be relied upon to work appropriately.

E

On Tue, Aug 18, 2015 at 1:40 AM, Peter S 
wrote:

> On 18/08/2015, Nigel Redmon  wrote:
> >>
> >> well, if it's linear interpolation and your fractional delay slowly
> sweeps
> >> from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
> >> in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's
> -inf
> >> dB at Nyquist.  pretty serious LPF to just slide into.
> >
> > Right the first time, -7.8 dB at the Nyquist frequency, -inf at the
> sampling
> > frequency. No?
>
> -Inf at Nyquist when you're halfway between two samples.
>
> Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
> -1...
> After interpolating with fraction=0.5, it becomes a constant signal
> 0,0,0,0,0,0,0...
> (because (-1+1)/2 = 0)

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 3:44 PM, Ethan Duni wrote:
>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...


The sampling theorem requires that all frequencies be *below* the 
Nyquist frequency. Sampling signals at exactly the Nyquist frequency 
is an edge case that sort-of works in some limited special cases, but 
there is no expectation that digital processing of such a signal is 
going to work properly in general.


But even given that, the interpolator outputting the zero signal in 
that case is exactly correct. That's what you would have gotten if 
you'd sampled the same sine wave (*not* square wave - that would imply 
frequencies above Nyquist) with a half-sample offset from the 1, -1, 
1, -1, ... case. The incorrect behavior arises when you try to go in 
the other direction (i.e., apply a second half-sample delay), and you 
still get only DC.


But, again, that doesn't really say anything about interpolation. It 
just says that you sampled the signal improperly in the first place, 
and so digital processing can't be relied upon to work appropriately.




as surprising as it may first appear, i think Peter S and i are totally 
on the same page here.


regarding *linear* interpolation, *if* you use linear interpolation in a 
precision delay (an LTI thingie, or at least quasi-time-invariant) and 
you delay by some integer + 1/2 sample, the filter you get has 
coefficients and transfer function


   H(z) =  (1/2)*(1 + z^-1)*z^-N

(where N is the integer part of the delay).

the gain of that filter, as you approach Nyquist, approaches -inf dB.

*my* point is that as the delay slowly slides from an integer number of 
samples, where the transfer function is


   H(z) = z^-N

to the integer + 1/2 sample (with gain above), this linear but 
time-variant system is going to sound like there is a LPF getting segued in.


this, for me, is enough to decide never to use solely linear 
interpolation for a modulateable delay widget.  if i vary delay, i want 
only the delay to change.  and i would prefer if the delay was the same 
for all frequencies, which makes the APF fractional delay thingie 
problematic.
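A quick numerical check of the above (a sketch; the integer delay z^-N is dropped since it doesn't affect magnitude):

```python
import numpy as np

w = np.linspace(0.0, np.pi, 5)            # 0 ... Nyquist, rad/sample
for d in [0.0, 0.25, 0.5]:                # fractional part of the delay
    H = (1 - d) + d * np.exp(-1j * w)     # linear-interp filter, z^-N dropped
    with np.errstate(divide='ignore'):
        g = 20 * np.log10(np.abs(H))
    print("frac = %.2f : gains (dB) =" % d, np.round(g, 2))
```

At frac = 0 the response is flat; at frac = 1/2 the gain at Nyquist is exactly zero, i.e. -inf dB, which is the LPF that gets segued in as the delay sweeps.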


bestest,

r b-j



On Tue, Aug 18, 2015 at 1:40 AM, Peter S wrote:


On 18/08/2015, Nigel Redmon <earle...@earlevel.com> wrote:
>>
>> well, if it's linear interpolation and your fractional delay
slowly sweeps
>> from 0 to 1/2 sample, i think you may very well hear a LPF start
to kick
>> in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
>> dB at Nyquist.  pretty serious LPF to just slide into.
>
> Right the first time, -7.8 dB at the Nyquist frequency, -inf at
the sampling
> frequency. No?

-Inf at Nyquist when you're halfway between two samples.

Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1,
-1, 1, -1...
After interpolating with fraction=0.5, it becomes a constant signal
0,0,0,0,0,0,0...
(because (-1+1)/2 = 0)



--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni  wrote:
>>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...
>
> The sampling theorem requires that all frequencies be *below* the Nyquist
> frequency. Sampling signals at exactly the Nyquist frequency is an edge
> case that sort-of works in some limited special cases, but there is no
> expectation that digital processing of such a signal is going to work
> properly in general.

Not necessarily, at least in theory.

In practice, an anti-alias filter will filter out a signal exactly at
Nyquist freq, both when sampling it (A/D conversion) and when
reconstructing it (D/A conversion). But that doesn't mean that a
half-sample delay doesn't have -Inf dB gain at the Nyquist frequency.
It's a separate matter that the anti-alias filter of a converter will
typically filter it out anyway when reconstructing - but we weren't
talking about reconstruction, so that is irrelevant here.

A Nyquist frequency signal (1, -1, 1, -1, ...) is a perfectly valid
bandlimited signal.

> But even given that, the interpolator outputting the zero signal in that
> case is exactly correct. That's what you would have gotten if you'd sampled
> the same sine wave (*not* square wave - that would imply frequencies above
> Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case.

More precisely: a bandlimited Nyquist frequency square wave *equals* a
Nyquist frequency sine wave. Or any other harmonic waveform for that
matter (triangle, saw, etc.) In all cases, only the fundamental
partial is there (1, -1, 1, -1, ... = Nyquist frequency sine), all the
other partials are filtered out from the bandlimiting.

So the signal 1, -1, 1, -1, *is* a Nyquist frequency bandlimited
square wave, and also a sine-wave as well. They're identical. It *is*
a bandlimited square wave - that's what you get when you take a
Nyquist frequency square wave, and bandlimit it by removing all
partials above Nyquist freq (say, via DFT). You may call it a square,
a sine, saw, doesn't matter - when bandlimited, they're identical.

> The
> incorrect behavior arises when you try to go in the other direction (i.e.,
> apply a second half-sample delay), and you still get only DC.

What would be "incorrect" about it? I'm not sure what is your
assumption. Of course if you apply any kind of filtering to a zero DC
signal, you'll still have a zero DC signal. -Inf + -Inf = -Inf...  Not
sure what you're trying to achieve by "applying a second half-sample
delay"... That also has -Inf dB gain at Nyquist, so you'll still have
a zero DC signal after that. Since a half-sample delay has -Inf gain
at Nyquist, you cannot "undo" it by applying another half-sample
delay...

> But, again, that doesn't really say anything about interpolation. It just
> says that you sampled the signal improperly in the first place, and so
> digital processing can't be relied upon to work appropriately.

That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing. It is a bandlimited Nyquist frequency square wave (which
is equivalent to a Nyquist frequency sine wave). From that, you can
reconstruct a perfect alias-free sinusoid of frequency SR/2.

What's causing you to be unable to reconstruct the waveform?

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>*my* point is that as the delay slowly slides from an integer number of
>samples, where the transfer function is
>
>   H(z) = z^-N
>
>to the integer + 1/2 sample (with gain above), this linear but
>time-variant system is going to sound like there is a LPF getting segued in.
>
>this, for me, is enough to decide never to use solely linear interpolation
>for a modulateable delay widget.  if i vary delay, i want only the delay
>to change.

Yeah, absolutely. The variable suppression of high frequencies when
fractional delay changes is undesirable, and indicates that better
interpolation schemes should be used there.

But the example of the weird things that can happen when you try to sample
a sine wave right at the nyquist rate and then process it is orthogonal to
that point.

E

On Tue, Aug 18, 2015 at 1:16 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> On 8/18/15 3:44 PM, Ethan Duni wrote:
>
>> >Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
>> -1...
>>
>> The sampling theorem requires that all frequencies be *below* the Nyquist
>> frequency. Sampling signals at exactly the Nyquist frequency is an edge
>> case that sort-of works in some limited special cases, but there is no
>> expectation that digital processing of such a signal is going to work
>> properly in general.
>>
>> But even given that, the interpolator outputting the zero signal in that
>> case is exactly correct. That's what you would have gotten if you'd sampled
>> the same sine wave (*not* square wave - that would imply frequencies above
>> Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case. The
>> incorrect behavior arises when you try to go in the other direction (i.e.,
>> apply a second half-sample delay), and you still get only DC.
>>
>> But, again, that doesn't really say anything about interpolation. It just
>> says that you sampled the signal improperly in the first place, and so
>> digital processing can't be relied upon to work appropriately.
>>
>>
> as surprising as it may first appear, i think Peter S and i are totally on
> the same page here.
>
> regarding *linear* interpolation, *if* you use linear interpolation in a
> precision delay (an LTI thingie, or at least quasi-time-invariant) and you
> delay by some integer + 1/2 sample, the filter you get has coefficients and
> transfer function
>
>H(z) =  (1/2)*(1 + z^-1)*z^-N
>
> (where N is the integer part of the delay).
>
> the gain of that filter, as you approach Nyquist, approaches -inf dB.
>
> *my* point is that as the delay slowly slides from an integer number of
> samples, where the transfer function is
>
>H(z) = z^-N
>
> to the integer + 1/2 sample (with gain above), this linear but
> time-variant system is going to sound like there is a LPF getting segued in.
>
> this, for me, is enough to decide never to use solely linear interpolation
> for a modulateable delay widget.  if i vary delay, i want only the delay to
> change.  and i would prefer if the delay was the same for all frequencies,
> which makes the APF fractional delay thingie problematic.
>
> bestest,
>
> r b-j
>
>
>> On Tue, Aug 18, 2015 at 1:40 AM, Peter S wrote:
>>
>> On 18/08/2015, Nigel Redmon wrote:
>> >>
>> >> well, if it's linear interpolation and your fractional delay
>> slowly sweeps
>> >> from 0 to 1/2 sample, i think you may very well hear a LPF start
>> to kick
>> >> in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
>> >> dB at Nyquist.  pretty serious LPF to just slide into.
>> >
>> > Right the first time, -7.8 dB at the Nyquist frequency, -inf at
>> the sampling
>> > frequency. No?
>>
>> -Inf at Nyquist when you're halfway between two samples.
>>
>> Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1,
>> -1, 1, -1...
>> After interpolating with fraction=0.5, it becomes a constant signal
>> 0,0,0,0,0,0,0...
>> (because (-1+1)/2 = 0)
>>
>>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, robert bristow-johnson  wrote:
>
> *my* point is that as the delay slowly slides from an integer number of
> samples [...] to the integer + 1/2 sample (with gain above), this linear but
> time-variant system is going to sound like there is a LPF getting segued
> in.

Exactly. As the fractional delay varies between 0..1, it will sound
like a fluttering LP filter that closes and opens as the delay varies,
having the most 'muffled' (LPF'ed) sound when fraction = 1/2.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 4:28 PM, Peter S wrote:


1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing.


well Peter, here again is where you overreach.  assuming, without loss 
of generality that the sampling period is 1, the continuous-time signals



   x(t)  =  1/cos(theta) * cos(pi*t + theta)

are all aliases for the signal described above (and incorrectly described 
as "contain[ing] no aliasing").
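This family of aliases is easy to verify by sampling x(t) at integer t (a small sketch):

```python
import numpy as np

n = np.arange(8)                              # sampling period 1, so t = n
for theta in [0.0, 0.3, 1.0, 1.4]:            # any theta with cos(theta) != 0
    x = np.cos(np.pi * n + theta) / np.cos(theta)
    print("theta = %.1f :" % theta, np.round(x, 9))   # always 1, -1, 1, -1, ...
```

Every choice of theta gives a different continuous-time amplitude and phase, yet the exact same sample sequence, which is the ambiguity at issue.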


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Tom Duffy

In order to reconstruct that sinusoid, you'll need a filter with
an infinitely steep transition band.
You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
That digital stream of samples is not reconstructable.

On 8/18/2015 1:28 PM, Peter S wrote:


That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
and contains no aliasing. That's the maximal allowed frequency without
any aliasing. It is a bandlimited Nyquist frequency square wave (which
is equivalent to a Nyquist frequency sine wave). From that, you can
reconstruct a perfect alias-free sinusoid of frequency SR/2.




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni  wrote:
>
> But the example of the weird things that can happen when you try to sample
> a sine wave right at the nyquist rate and then process it is orthogonal to
> that point.

That's not weird, and that's *exactly* what you have in the highest
bin of an FFT.

The signal 1, -1, 1, -1, 1, -1 ... is the highest frequency basis
function of the DFT:
http://www.dspguide.com/graphics/F_8_5.gif

If you think that's weird, then I guess you think that the Fourier
transformation is weird.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>What's causing you to be unable to reconstruct the waveform?

There are an infinite number of different nyquist-frequency sinusoids that,
when sampled, will all give the same ...,1, -1, 1, -1, ... sequence of
samples. The sampling is a many-to-one mapping in that case, and so cannot
be inverted.

See here:
https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem#Critical_frequency

Or consider what happens if you shift a nyquist-frequency sinusoid by a
quarter period (half a sample) before sampling it. You get ..., 0, 0, 0, 0, ... - which is quite
obviously the zero signal. It is not going to reproduce a nyquist frequency
sinusoid when you run it through a DAC.

E

On Tue, Aug 18, 2015 at 1:28 PM, Peter S 
wrote:

> On 18/08/2015, Ethan Duni  wrote:
> >>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1,
> > -1...
> >
> > The sampling theorem requires that all frequencies be *below* the Nyquist
> > frequency. Sampling signals at exactly the Nyquist frequency is an edge
> > case that sort-of works in some limited special cases, but there is no
> > expectation that digital processing of such a signal is going to work
> > properly in general.
>
> Not necessarily, at least in theory.
>
> In practice, an anti-alias filter will filter out a signal exactly at
> Nyquist freq, both when sampling it (A/D conversion), and both when
> reconstructing it (D/A conversion). But that doesn't mean that a
> half-sample delay doesn't have -Inf dB gain at Nyquist frequency. It's
> another thing that the anti-alias filter of a converter will typically
> filter it out anyways when reconstructing - but we weren't talking
> about reconstruction, so that is irrelevant here.
>
> A Nyquist frequency signal (1, -1, 1, -1, ...) is a perfectly valid
> bandlimited signal.
>
> > But even given that, the interpolator outputting the zero signal in that
> > case is exactly correct. That's what you would have gotten if you'd
> sampled
> > the same sine wave (*not* square wave - that would imply frequencies
> above
> > Nyquist) with a half-sample offset from the 1, -1, 1, -1, ... case.
>
> More precisely: a bandlimited Nyquist frequency square wave *equals* a
> Nyquist frequency sine wave. Or any other harmonic waveform for that
> matter (triangle, saw, etc.) In all cases, only the fundamental
> partial is there (1, -1, 1, -1, ... = Nyquist frequency sine), all the
> other partials are filtered out from the bandlimiting.
>
> So the signal 1, -1, 1, -1, *is* a Nyquist frequency bandlimited
> square wave, and also a sine-wave as well. They're identical. It *is*
> a bandlimited square wave - that's what you get when you take a
> Nyquist frequency square wave, and bandlimit it by removing all
> partials above Nyquist freq (say, via DFT). You may call it a square,
> a sine, saw, doesn't matter - when bandlimited, they're identical.
>
> > The
> > incorrect behavior arises when you try to go in the other direction
> (i.e.,
> > apply a second half-sample delay), and you still get only DC.
>
> What would be "incorrect" about it? I'm not sure what is your
> assumption. Of course if you apply any kind of filtering to a zero DC
> signal, you'll still have a zero DC signal. -Inf + -Inf = -Inf...  Not
> sure what you're trying to achieve by "applying a second half-sample
> delay"... That also has -Inf dB gain at Nyquist, so you'll still have
> a zero DC signal after that. Since a half-sample delay has -Inf gain
> at Nyquist, you cannot "undo" it by applying another half-sample
> delay...
>
> > But, again, that doesn't really say anything about interpolation. It just
> > says that you sampled the signal improperly in the first place, and so
> > digital processing can't be relied upon to work appropriately.
>
> That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
> and contains no aliasing. That's the maximal allowed frequency without
> any aliasing. It is a bandlimited Nyquist frequency square wave (which
> is equivalent to a Nyquist frequency sine wave). From that, you can
> reconstruct a perfect alias-free sinusoid of frequency SR/2.
>
> What's causing you to be unable to reconstruct the waveform?
>
> -P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Nigel Redmon
I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s, 
hence the “No?” in my reply to him).

The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB 
at 0.5 of the sample rate...
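(Both figures in the thread check out numerically; they just answer different questions. A quick sketch:)

```python
import numpy as np

# Nigel's figure: the sinc^2 response of the linear-interpolation kernel,
# evaluated at half the sample rate
x = 0.5
sinc2 = (np.sin(np.pi * x) / (np.pi * x)) ** 2
print("sinc^2 at 0.5*fs: %.2f dB" % (20 * np.log10(sinc2)))   # -7.84 dB

# rbj's figure: the discrete half-sample linear interpolator at Nyquist
H = 0.5 * (1 + np.exp(-1j * np.pi))
print("|H| at Nyquist, frac = 1/2:", abs(H))                  # ~0, i.e. -inf dB
```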


> On Aug 18, 2015, at 1:40 AM, Peter S  wrote:
> 
> On 18/08/2015, Nigel Redmon  wrote:
>>> 
>>> well, if it's linear interpolation and your fractional delay slowly sweeps
>>> from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
>>> in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
>>> dB at Nyquist.  pretty serious LPF to just slide into.
>> 
>> Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling
>> frequency. No?
> 
> -Inf at Nyquist when you're halfway between two samples.
> 
> Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1...
> After interpolating with fraction=0.5, it becomes a constant signal
> 0,0,0,0,0,0,0...
> (because (-1+1)/2 = 0)

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, robert bristow-johnson  wrote:
> On 8/18/15 4:28 PM, Peter S wrote:
>>
>> 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
>> and contains no aliasing. That's the maximal allowed frequency without
>> any aliasing.
>
> well Peter, here again is where you overreach.  assuming, without loss
> of generality that the sampling period is 1, the continuous-time signals
>
> x(t)  =  1/cos(theta) * cos(pi*t + theta)
>
> are all aliases for the signal described above (and incorrectly as
> "contain[ing] no aliasing").

Well, strictly speaking, that is true. But I assumed the signal to be
bandlimited to 0..SR/2. In that case, you can perfectly reconstruct
it, as you have no other alias between 0..SR/2.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Tom Duffy  wrote:
> In order to reconstruct that sinusoid, you'll need a filter with
> an infinitely steep transition band.

I can use an arbitrarily long sinc kernel to reconstruct / interpolate
it. Therefore, for any desired precision, you can find an appropriate
sinc kernel length. Where's the problem?

I can also oversample the signal arbitrarily, using an arbitrarily
long sinc kernel.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Nigel Redmon
OK, I looked back at Robert’s post, and see that the fact his reply was broken 
up into segments (as he replied to segments of Peter’s comment) made me miss 
his point. At first he was talking general (pitch shifting), but at that point 
he was talking about strictly sliding into halfway between samples in the 
interpolation. Never mind.


> On Aug 18, 2015, at 1:50 PM, Nigel Redmon  wrote:
> 
> I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s, 
> hence the “No?” in my reply to him).
> 
> The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 
> dB at 0.5 of the sample rate...
> 
> 
>> On Aug 18, 2015, at 1:40 AM, Peter S  wrote:
>> 
>> On 18/08/2015, Nigel Redmon  wrote:
 
>>>> well, if it's linear interpolation and your fractional delay slowly sweeps
>>>> from 0 to 1/2 sample, i think you may very well hear a LPF start to kick
>>>> in.  something like -7.8 dB at Nyquist.  no, that's not right.  it's -inf
>>>> dB at Nyquist.  pretty serious LPF to just slide into.
>>> 
>>> Right the first time, -7.8 dB at the Nyquist frequency, -inf at the sampling
>>> frequency. No?
>> 
>> -Inf at Nyquist when you're halfway between two samples.
>> 
>> Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, 
>> -1...
>> After interpolating with fraction=0.5, it becomes a constant signal
>> 0,0,0,0,0,0,0...
>> (because (-1+1)/2 = 0)

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>> well Peter, here again is where you overreach.  assuming, without loss
>> of generality that the sampling period is 1, the continuous-time signals
>>
>> x(t)  =  1/cos(theta) * cos(pi*t + theta)
>>
>> are all aliases for the signal described above (and incorrectly as
>> "contain[ing] no aliasing").
>
>Well, strictly speaking, that is true. But I assumed the signal to be
>bandlimited to 0..SR/2. In that case, you can perfectly reconstruct
>it, as you have no other alias between 0..SR/2.

That class of signals is band limited to SR/2. The aliasing is in the
amplitude/phase offset, not the frequency.

There are an infinite number of combinations of amplitude/phase of a
nyquist-frequency sinusoid that will all result in the same sampled
sequence. So you can't invert the sampling.

You can construct a DAC that will output some well-behaved nyquist
frequency sinusoid when presented with the input ..., 1, -1, 1, -1, 1, ...,
but you can't guarantee that it will resemble an analog sinusoid that was
sampled to produce such a digital sequence. You don't have enough info to
disambiguate the phase and amplitude.

E

On Tue, Aug 18, 2015 at 1:51 PM, Peter S 
wrote:

> On 18/08/2015, robert bristow-johnson  wrote:
> > On 8/18/15 4:28 PM, Peter S wrote:
> >>
> >> 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
> >> and contains no aliasing. That's the maximal allowed frequency without
> >> any aliasing.
> >
> > well Peter, here again is where you overreach.  assuming, without loss
> > of generality that the sampling period is 1, the continuous-time signals
> >
> > x(t)  =  1/cos(theta) * cos(pi*t + theta)
> >
> > are all aliases for the signal described above (and incorrectly as
> > "contain[ing] no aliasing").
>
> Well, strictly speaking, that is true. But I assumed the signal to be
> bandlimited to 0..SR/2. In that case, you can perfectly reconstruct
> it, as you have no other alias between 0..SR/2.
>
> -P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>In order to reconstruct that sinusoid, you'll need a filter with
>an infinitely steep transition band.

No, even an ideal reconstruction filter won't do it. You've got your
+Nyquist component sitting right on top of your -Nyquist component. Hence
the aliasing. The information has been lost in the sampling, there's no way
to reconstruct without some additional side information.

E

On Tue, Aug 18, 2015 at 1:45 PM, Tom Duffy  wrote:

> In order to reconstruct that sinusoid, you'll need a filter with
> an infinitely steep transition band.
> You've demonstrated that SR/2 aliases to 0Hz, i.e. DC.
> That digital stream of samples is not reconstructable.
>
> On 8/18/2015 1:28 PM, Peter S wrote:
>
> That's false. 1, -1, 1, -1, 1, -1 ... is a proper bandlimited signal,
>> and contains no aliasing. That's the maximal allowed frequency without
>> any aliasing. It is a bandlimited Nyquist frequency square wave (which
>> is equivalent to a Nyquist frequency sine wave). From that, you can
>> reconstruct a perfect alias-free sinusoid of frequency SR/2.
>>
>
> NOTICE: This electronic mail message and its contents, including any
> attachments hereto (collectively, "this e-mail"), is hereby designated as
> "confidential and proprietary." This e-mail may be viewed and used only by
> the person to whom it has been sent and his/her employer solely for the
> express purpose for which it has been disclosed and only in accordance with
> any confidentiality or non-disclosure (or similar) agreement between TEAC
> Corporation or its affiliates and said employer, and may not be disclosed
> to any other person or entity.
>
>
>
>

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 4:50 PM, Nigel Redmon wrote:

> I’m sorry, I’m missing your point here, Peter (and perhaps I missed Robert’s,
> hence the “No?” in my reply to him).
>
> The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8 dB
> at 0.5 of the sample rate...


i will try to spell out my point.

there are probably a zillion applications of fractional-sample 
interpolation.  Vesa Valimaki's IEEE article "Splitting the Unit Delay" 
from the 90s is sorta equivalently seminal as fred harris's classic 
windowing paper about this.


but, within the zillion of applications, i can think of two classes of 
application in which all applications will fall into one or the other:


  1.  slowly-varying (or constant) delay with a fractional component.   
this would be a precision delay we might use to time-align things (like 
speakers) or for effects like flanging or to compensate for the delay of 
some other component like a filter.


  2.  rapidly-varying delay (again with a fractional component).  this 
would be sample-rate-conversion (SRC), resampling sound files, pitch 
shifting (either the splicing thing a Harmonizer might do or the 
sample-playback and looping a sampler might do), and wild-assed delay 
effects.


it's only in the second class of application that i think the sinc^2 
frequency rolloff (assuming linear interpolation) is a valid model (or 
hand-wavy approximation of a model).


in the first class of application, i think the model should be what you 
get if the delay was constant.  for linear interpolation, if you are 
delayed by 3.5 samples and you keep that delay constant, the transfer 
function is


   H(z)  =  (1/2)*(1 + z^-1)*z^-3

that filter goes to -inf dB as omega gets closer to pi.
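A few lines of Python (an illustrative sketch, not from the thread; `mag_db` is an invented name) confirm the rolloff: the z^-3 factor is allpass, so |H(e^jw)| = |cos(w/2)|, which falls to zero as w approaches pi.

```python
import cmath, math

# Magnitude of H(z) = (1/2)*(1 + z^-1)*z^-3 on the unit circle.
# The z^-3 term is pure delay (allpass), so |H| = |cos(w/2)|.
def mag_db(w):
    z = cmath.exp(1j * w)
    return 20 * math.log10(abs(0.5 * (1 + z**-1) * z**-3))

for frac in (0.5, 0.9, 0.99, 0.999):
    print(f"w = {frac}*pi  ->  {mag_db(frac * math.pi):.1f} dB")
```

This prints roughly -3 dB at w = pi/2, then slides without bound toward -inf dB as w approaches pi.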

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni  wrote:
>
> That class of signals is band limited to SR/2. The aliasing is in the
> amplitude/phase offset, not the frequency.

Okay, I get what you mean. But that doesn't change the frequency
response of a half-sample delay, or doesn't mean that a half-sample
delay doesn't have a specific gain at Nyquist.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread robert bristow-johnson

On 8/18/15 5:01 PM, Emily Litella wrote:

... Never mind.



too late.

:-)

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni  wrote:
>>In order to reconstruct that sinusoid, you'll need a filter with
>>an infinitely steep transition band.
>
> No, even an ideal reconstruction filter won't do it. You've got your
> +Nyquist component sitting right on top of your -Nyquist component. Hence
> the aliasing. The information has been lost in the sampling, there's no way
> to reconstruct without some additional side information.

You cannot calculate 1/x when x=0, can you? Since that's division by zero.
Yet you know that as x tends to zero from the right, 1/x
tends to +infinity.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>You cannot calculate 1/x when x=0, can you? Since that's division by zero.
>Yet you know that as x tends to zero from the right, 1/x
>tends to +infinity.

Not sure what that is supposed to have to do with the present subject.

If you want to put it in terms of simple arithmetic, the aliasing issue
works like this: I add two numbers together, and find that the answer is X.
I tell you X, and then ask you to determine what the two numbers were. Can
you do it?

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>Okay, I get what you mean. But that doesn't change the frequency
>response of a half-sample delay, or doesn't mean that a half-sample
>delay doesn't have a specific gain at Nyquist.

Never said that it did. In fact, I explicitly said that this issue of
sampling of Nyquist frequency sinusoids has no bearing on the frequency
response of fractional interpolators. I'd suggest dropping this whole
derail, if you are no longer hung up on this point.

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Ethan Duni  wrote:
>>You cannot calculate 1/x when x=0, can you? Since that's division by zero.
>>Yet you know that as x tends to zero from the right, 1/x
>>tends to +infinity.
>
> Not sure what that is supposed to have to do with the present subject.

You cannot calculate 1/x when x=0, because that's division by zero,
yet you can calculate the limit of 1/x as x tends towards zero.
Meaning that you can approach zero arbitrarily closely, and 1/x will
grow arbitrarily large.

Similarly, even if frequency f=0.5 may be considered ill-specified
(because it's critical frequency), you can still approach it to
arbitrary precision, and the gain will approach -infinity. So

f=0.4
f=0.49
f=0.499
f=0.4999
f=0.49999
f=0.499999
etc.

The more you approach f=0.5, the more the gain will approach
-infinity. Even if f=0.5 is a critical frequency, f=0.49999 isn't,
and it's quite close to f=0.5.

That's what I mean.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Peter S  wrote:
> Even if f=0.5 is a critical frequency, f=0.49999 isn't,
> and it's quite close to f=0.5.

(*) at 44.1 kHz sampling rate, that's precisely 22049.559 Hz.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Peter S
On 18/08/2015, Peter S  wrote:
>
> Similarly, even if frequency f=0.5 may be considered ill-specified
> (because it's critical frequency), you can still approach it to
> arbitrary precision, and the gain will approach -infinity. So
>
> f=0.4
> f=0.49
> f=0.499
> f=0.4999
> f=0.49999
> f=0.499999
> etc.
>
> The more you approach f=0.5, the more the gain will approach
> -infinity.

I made an actual test program to confirm this numerically:
http://morpheus.spectralhead.com/txt/fracdelay.tcl.txt

Here are the results, testing half-sample delay gain at varying frequencies:

f = 0.4 => -10.2 dB
f = 0.49 => -30 dB
f = 0.499 => -58.3 dB
f = 0.4999 => -98 dB
f = 0.49999 => -138 dB
f = 0.499999 => -178 dB
f = 0.4999999 => -218 dB
f = 0.49999999 => -258 dB
f = 0.499999999 => -297.9 dB
f = 0.4999999999 => -325.1 dB
f = 0.49999999999 => -Inf dB, because it reaches the limit of 64-bit precision

At 44.1 kHz sampling rate, f=0.4999999999 equals 22049.99999559 Hz,
that's the closest I can get to f=0.5 using 64 bit floating point
precision. At that frequency, measured gain is -325.1 dB (~=
5.55e-17), which is practically zero. For comparison, 24 bit PCM
signals' dynamic range is only 144 dB. To represent -325.1 dB in fixed
point, you'd need at least 54 bits precision.

Using arbitrary precision maths, you can approach f=0.5 arbitrary
closely, and the gain will tend towards -Inf decibels. (But I doubt
you'll ever want to reach 22050 Hz with less than 0.1 Hz error...)

Now I could make a fancy graph of this, but right now I won't bother.
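For reference, the closed-form gain of the two-tap average at normalized frequency f is |cos(pi*f)|; a tiny independent Python sketch (not the Tcl script above; `gain_db` is an invented name) closely matches the first rows of the table:

```python
import math

# Closed-form gain of y[n] = 0.5*(x[n] + x[n-1]) at normalized
# frequency f (cycles per sample): |H(f)| = |cos(pi*f)|.
def gain_db(f):
    return 20 * math.log10(abs(math.cos(math.pi * f)))

print(f"{gain_db(0.4):.1f} dB")   # -10.2 dB
print(f"{gain_db(0.49):.1f} dB")  # -30.1 dB
```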


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
> for linear interpolation, if you are delayed by 3.5 samples and you
> keep that delay constant, the transfer function is
>
>   H(z)  =  (1/2)*(1 + z^-1)*z^-3
>
> that filter goes to -inf dB as omega gets closer to pi.

Note that this holds for symmetric fractional delay filter of any odd order
(i.e., Lagrange interpolation filter, windowed sinc, etc). It's not an
artifact of the simple linear approach, it's a feature of the symmetric,
finite nature of the fractional interpolator. Since there are good reasons
for the symmetry constraint, we are left to trade off oversampling and
filter order/design to get the final passband as flat as we need.

My view is that if you are serious about maintaining fidelity across the
full bandwidth, you need to oversample by at least 2x. That way you can fit
the transition band of your interpolation filter above the signal band. In
applications where you are less concerned about full bandwidth fidelity,
oversampling isn't required. Some argue that 48kHz sample rate is already
effectively oversampled for lots of natural recordings, for example. If
it's already at 96kHz or higher I would not bother oversampling further.

Also this is recommended reading for this thread:

https://ccrma.stanford.edu/~jos/Interpolation/

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
Another way to show that half-sample delay has -Inf gain at Nyquist:
see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
will have a zero at z=-1. A zero on the unit circle means -Inf gain,
and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
-Inf gain at Nyquist frequency.

It would be ill-advised to dismiss Nyquist frequency because it may
alias to DC signal when sampling. The zero on the unit circle is at
Nyquist (z=-1), not at DC (z=1).

Frequency response graphs of linear interpolation, according to JOS:
https://ccrma.stanford.edu/~jos/Interpolation/Frequency_Responses_Linear_Interpolation.html


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Peter S  wrote:
> Another way to show that half-sample delay has -Inf gain at Nyquist:
> see the pole-zero plot of the equivalent LTI filter a0=0.5, a1=0.5. It
> will have a zero at z=-1. A zero on the unit circle means -Inf gain,
> and z=-1 means Nyquist frequency. Therefore, a half-sample delay has
> -Inf gain at Nyquist frequency.

It looks like this:
http://morpheus.spectralhead.com/img/halfsample_delay_zplane.png

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 18/08/2015, Nigel Redmon  wrote:
> I’m sorry, I’m missing your point here, Peter (and perhaps I missed Roberts,
> hence the “No?” in my reply to him).
>
> The frequency response of linear interpolation is (sin(pi*x)/(pi*x))^2, -7.8
> dB at 0.5 of the sample rate...

A half-sample delay using linear interpolation is equivalent to a
filter with the following difference equation:

y[n] = 0.5*x[n] + 0.5*x[n-1]

Which equals

y[n] = (x[n] + x[n-1])/2

In other words, it equals a moving average filter of length 2, since
we're taking the running average of two adjacent samples.

According to [1], the frequency response of a moving average filter of
length M is

    H(f) = sin(PI * f * M) / (M * sin(PI * f))

Therefore, for f=0.5 and M=2, this formula gives

    H(0.5) = sin(PI * 0.5 * 2) / (2 * sin(PI * 0.5)) = sin(PI) / 2

Since sin(PI) equals zero, this equation equals zero. Therefore, the
gain of a half-sample delay with linear interpolation equals -Inf
decibels at Nyquist (f=0.5).

(For other frequencies, I think you need to take the absolute value of
the above formula, otherwise you'll get negative amplitudes.)
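The closed form can be checked against a direct DTFT evaluation (a quick sketch; `h_closed` and `h_direct` are invented names):

```python
import cmath, math

# Length-M moving average: closed-form magnitude vs. direct DTFT.
def h_closed(f, M):
    return abs(math.sin(math.pi * f * M) / (M * math.sin(math.pi * f)))

def h_direct(f, M):
    z = cmath.exp(-2j * math.pi * f)          # e^{-j*2*pi*f}
    return abs(sum(z**n for n in range(M))) / M

for f in (0.1, 0.25, 0.4, 0.49):
    assert abs(h_closed(f, 2) - h_direct(f, 2)) < 1e-12

print(f"{20 * math.log10(h_direct(0.49, 2)):.1f} dB")  # about -30 dB at f=0.49
```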

Probably you got your sinc^2 formula from JOS[2]. I think that doesn't
apply here, and it also contradicts JOS[3]. If you compare the
following two graphs from [2] and [3]:

https://ccrma.stanford.edu/~jos/pasp/img983.png
https://ccrma.stanford.edu/~jos/Interpolation/img9.png

they'll give different results (assuming normalized frequency 1 =
2*PI). The first figure gives > -10 dB for f=0.5 (w=PI), the second
figure gives < -20 dB for f=0.5 (w=PI). So at least one of them cannot
be right in this case.

-P

References:
[1] Steven W. Smith, "The Scientist and Engineer's Guide to Digital
Signal Processing"
http://www.dspguide.com/ch15/3.htm

[2] Julius O. Smith, "Physical Audio Signal Processing"
https://ccrma.stanford.edu/~jos/pasp/Linear_Interpolation_Frequency_Response.html

[3] Julius O. Smith, "Bandlimited Interpolation, Fractional Delay
Filtering, and Optimal FIR Filter Design"
https://ccrma.stanford.edu/~jos/Interpolation/Frequency_Responses_Linear_Interpolation.html


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
Comparison of the two formulas from previous post: (1) in blue, sinc^2
(2) in red:
http://morpheus.spectralhead.com/img/sinc.png

    sin(pi*x*2) / (2*sin(pi*x))        (1)

(Formula from Steven W. Smith, absolute value taken on graph)

    (sin(pi*x) / (pi*x))^2  =  sinc^2(x)        (2)

(Formula from JOS, Nigel R.)

(1) and (2) (blue and red curve) are quite different.

Let's test how equation (1) compares against measured frequency
response of a LTI filter with coeffs [0.5, 0.5]:

http://morpheus.spectralhead.com/img/halfsample_delay_response.png

The maximum error between formula (1) and the measured frequency
response of the filter (a0=0.5, a1=0.5) is 3.3307e-16, or -310 dB,
which about equals the limits of the floating point precision at 64
bits. The frequency response was measured using Octave's freqz()
function, using 512 points.

Conclusion: Steven W. Smith's formula (1) seems correct.

Frequency response of the same filter in decibel scale:
http://morpheus.spectralhead.com/img/halfsample_delay_response2.png

(this graph is normalized to 0..1 rad, not 0..0.5)

The pole-zero plot was shown earlier, having a zero at z=-1, meaning
-Inf gain at Nyquist.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread robert bristow-johnson

On 8/18/15 11:46 PM, Ethan Duni wrote:
>> for linear interpolation, if you are delayed by 3.5 samples and you
>> keep that delay constant, the transfer function is
>>
>>   H(z)  =  (1/2)*(1 + z^-1)*z^-3
>>
>> that filter goes to -inf dB as omega gets closer to pi.
>
> Note that this holds for symmetric fractional delay filter of any odd
> order (i.e., Lagrange interpolation filter, windowed sinc, etc). It's
> not an artifact of the simple linear approach,


at precisely Nyquist, you're right.  as you approach Nyquist, linear 
interpolation is worser than cubic Hermite but better than cubic 
B-spline (better in terms of less roll-off, worser in terms of killing 
images).


> it's a feature of the symmetric, finite nature of the fractional
> interpolator. Since there are good reasons for the symmetry
> constraint, we are left to trade off oversampling and filter
> order/design to get the final passband as flat as we need.
>
> My view is that if you are serious about maintaining fidelity across
> the full bandwidth, you need to oversample by at least 2x.


i would say way more than 2x if you're using linear in between.  if 
memory is cheap, i might oversample by perhaps as much as 512x and then 
use linear to get in between the subsamples (this will get you 120 dB S/N).


> That way you can fit the transition band of your interpolation filter
> above the signal band. In applications where you are less concerned
> about full bandwidth fidelity, oversampling isn't required. Some argue
> that 48kHz sample rate is already effectively oversampled for lots of
> natural recordings, for example. If it's already at 96kHz or higher I
> would not bother oversampling further.


i might **if** i want to resample by an arbitrary ratio and i am doing 
linear interpolation between the new over-sampled samples.


remember, when we oversample for the purpose of resampling, if the 
prototype LPF is FIR (you know, the polyphase thingie), then you need 
not calculate all of the new over-sampled samples.  only the two you 
need to linear interpolate between.  so oversampling by a large factor 
only costs more in terms of memory for the coefficient storage.  not in 
computational effort.
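That scheme can be sketched in a few lines (illustrative Python; `R`, `TAPS`, `frac_delay`, and the Hann-windowed-sinc prototype are all assumptions for the example, not anyone's production design): precompute the R polyphase branches once, then per output evaluate only the two branches that bracket the wanted phase and linear-interpolate their outputs.

```python
import math

R, TAPS = 64, 8          # R polyphase branches, TAPS-tap FIR per branch

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

# Branch p is a windowed-sinc fractional-delay FIR for a delay of
# TAPS//2 + p/R samples; together the R branches form the polyphase
# decomposition of one R-times oversampling filter.  A bigger R costs
# coefficient *memory*, not per-sample compute.
window = [0.5 - 0.5 * math.cos(2 * math.pi * k / (TAPS - 1)) for k in range(TAPS)]
branches = []
for p in range(R):
    c = [window[k] * sinc(k - TAPS // 2 - p / R) for k in range(TAPS)]
    s = sum(c)
    branches.append([v / s for v in c])      # normalize: unity gain at DC

def frac_delay(x, n, frac):
    """x[n] delayed by TAPS//2 + frac samples (0 <= frac < 1): evaluate
    only the two adjacent branches, then linear-interpolate their outputs."""
    pos = frac * R
    p = min(int(pos), R - 1)
    g = pos - p
    out = lambda ph, sh: sum(c * x[n - k - sh] for k, c in enumerate(branches[ph]))
    lo = out(p, 0)
    hi = out(0, 1) if p == R - 1 else out(p + 1, 0)
    return lo + g * (hi - lo)
```

Per output sample this is 2*TAPS multiply-adds plus one linear blend, regardless of R.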



> Also this is recommended reading for this thread:
>
> https://ccrma.stanford.edu/~jos/Interpolation/

quite familiar with it.

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
>i would say way more than 2x if you're using linear in between.  if memory
>is cheap, i might oversample by perhaps as much as 512x and then use
>linear to get in between the subsamples (this will get you 120 dB S/N).

But why would you constrain yourself to use first-order linear
interpolation? The oversampler itself is going to be a much higher order
linear interpolator. So it seems strange to pour resources into that, just
so you can avoid putting them into the final fractional interpolator. Is
the justification that the oversampler is a fixed interpolator, whereas the
final stage is variable (so we don't want to muck around with anything too
complex there)? I've seen it claimed (by Julius Smith IIRC) that
oversampling by as little as 10% cuts the interpolation filter requirements
by over 50%. So heavy oversampling seems strange, unless there's some hard
constraint forcing you to use a first-order interpolator.

>quite familiar with it.

Yeah that was more for the list in general, to keep this discussion
(semi-)grounded.

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Theo Verelst
Sometimes I feel the personal integrity about these undergrad level 
scientific quests is nowhere to be found with some people, and that's a 
shame.


Working on a decent subject like these mathematical approximations in 
the digital signal processing should be accompanied with at least some 
self-respect in the treatment of subjects one involves oneself in, 
obviously apart from chatter and stories and so on, because otherwise 
people might feel hurt to be contributing only as it were to "feed da 
Man" or something of that nature, and that's not cool in my opinion.


T.V.


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni  wrote:
>
> But why would you constrain yourself to use first-order linear
> interpolation?

Because it's computationally very cheap?

> The oversampler itself is going to be a much higher order
> linear interpolator. So it seems strange to pour resources into that

Linear interpolation needs very little computation, compared to most
other types of interpolation. So I do not consider the idea of using
linear interpolation for higher stages of oversampling strange at all.
The higher the oversampling, the more optimal it is to use linear in
the higher stages.

> So heavy oversampling seems strange, unless there's some hard
> constraint forcing you to use a first-order interpolator.

The hard constraint is CPU usage, which is higher in all other types
of interpolators.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread robert bristow-johnson

On 8/19/15 1:43 PM, Peter S wrote:
> On 19/08/2015, Ethan Duni  wrote:
>> But why would you constrain yourself to use first-order linear
>> interpolation?
>
> Because it's computationally very cheap?


and it doesn't require a table of coefficients, like doing higher-order 
Lagrange or Hermite would.



>> The oversampler itself is going to be a much higher order
>> linear interpolator. So it seems strange to pour resources into that
>
> Linear interpolation needs very little computation, compared to most
> other types of interpolation. So I do not consider the idea of using
> linear interpolation for higher stages of oversampling strange at all.
> The higher the oversampling, the more optimal it is to use linear in
> the higher stages.



here, again, is where Peter and i are on the same page.


>> So heavy oversampling seems strange, unless there's some hard
>> constraint forcing you to use a first-order interpolator.
>
> The hard constraint is CPU usage, which is higher in all other types
> of interpolators.



for plugins or embedded systems with a CPU-like core, computation burden 
is more of a cost issue than memory used.  but there are other embedded 
DSP situations where we are counting every word used.  8 years ago, i 
was working with a chip that offered for each processing block 8 
instructions (there were multiple moves, 1 multiply, and 1 addition that 
could be done in a single instruction), 1 state (or 2 states, if you 
count the output as a state) and 4 scratch registers.  that's all i 
had.  ain't no table of coefficients to look up.  in that case memory is 
way more important than wasting a few instructions recomputing numbers 
that you might otherwise just look up.





--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
>and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Well, you can compute those at runtime if you want - and you don't need a
terribly high order Lagrange interpolator if you're already oversampled, so
it's not necessarily a problematic overhead.
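Computing Lagrange fractional-delay coefficients at runtime is indeed only a handful of operations (a sketch; `lagrange_coeffs` is an invented name):

```python
# Order-N Lagrange fractional-delay coefficients, computed at runtime
# (no stored table): h[k] = prod_{i != k} (D - i) / (k - i), k = 0..N,
# where D is the total delay in samples.
def lagrange_coeffs(D, N):
    h = []
    for k in range(N + 1):
        c = 1.0
        for i in range(N + 1):
            if i != k:
                c *= (D - i) / (k - i)
        h.append(c)
    return h

# Cubic (N=3) interpolator for a 1.5-sample delay: symmetric taps.
print(lagrange_coeffs(1.5, 3))  # [-0.0625, 0.5625, 0.5625, -0.0625]
```

The delayed output is then y[n] = sum over k of h[k]*x[n-k]; for a cubic, choosing D = 1 + frac keeps the fractional point between the two middle taps.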

Meanwhile, the oversampler itself needs a table of coefficients. Assuming
we're talking about FIR interpolation, to avoid phase distortion. But
that's a single fixed table for supporting a single oversampling ratio, so
I can see how it would add up to a memory savings compared to a bank of
tables for different fractional interpolation points, if you're looking for
really fine/arbitrary granularity. If we're talking about a fixed
fractional delay, I'm not really seeing the advantage.

Obviously it will depend on the details of the application, it just seems
kind of unbalanced on its face to use heavy oversampling and then the
lightest possible fractional interpolator. It's not clear to me that a
moderate oversampling combined with a fractional interpolator of modestly
high order wouldn't be a better use of resources.

So it doesn't make a lot of sense to me to point to the low resource costs
of the first-order linear interpolator, when you're already devoting
resources to heavy oversampling in order to use it. They need to be
considered together and balanced, no? Your point about computing only the
subset of oversamples needed to drive the final fractional interpolator is
well-taken, but I think I need to see a more detailed accounting of that to
be convinced.

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni  wrote:
>
> Obviously it will depend on the details of the application, it just seems
> kind of unbalanced on its face to use heavy oversampling and then the
> lightest possible fractional interpolator. It's not clear to me that a
> moderate oversampling combined with a fractional interpolator of modestly
> high order wouldn't be a better use of resources.

To quote Olli Niemitalo:

"The presented optimal interpolators make it possible to do
transparent-quality resampling for even the most demanding
applications with only 2x or 4x oversampling before the interpolation.
However, in most cases simple linear interpolation combined with a
very high-ratio oversampling (perhaps 512x) is the optimal tradeoff.
The computational costs depend on the platform and the oversampling
implementation."

Linear interpolation is so cheap that - depending on the situation -
it may often be cheaper to use more oversampling with linear
interpolation than less oversampling with some heavier resampling
filter.
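For reference, the first-order case really is just one multiply-add per output sample. A minimal sketch (Python; the function name is illustrative, not from the thread):

```python
def frac_read_linear(buf, pos):
    """Read buf at fractional index pos with first-order linear interpolation."""
    i = int(pos)            # integer part of the position
    frac = pos - i          # fractional part in [0, 1)
    # One multiply-add: crossfade between the two neighbouring samples.
    return buf[i] + frac * (buf[i + 1] - buf[i])

# A ramp is reproduced exactly, since linear signals are in the span:
print(frac_read_linear([0.0, 1.0, 2.0, 3.0], 1.25))  # -> 1.25
```

No coefficient table and no multiplies beyond the one shown - which is the cheapness being argued for here.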

> So it doesn't make a lot of sense to me to point to the low resource costs
> of the first-order linear interpolator, when you're already devoting
> resources to heavy oversampling in order to use it.

Apparently you're missing the whole point - it's the linear
interpolation that makes the oversampling "cheap"(er) and not (as)
"heavy". Since memory is usually not an issue, the costs of your
oversampling depends mostly on what kind of resampling filters you
use, which you can choose freely. If you use cheaper filters, it won't
be that "heavy". If I use 128x oversampling with zero order hold or
zero stuffing, that won't be "heavy", since I'm merely copying
samples. What's "heavy" is the resampling filters...

> They need to be
> considered together and balanced, no? Your point about computing only the
> subset of oversamples needed to drive the final fractional interpolator is
> well-taken, but I think I need to see a more detailed accounting of that to
> be convinced.

See:
Olli Niemitalo, "Polynomial Interpolators for High-Quality Resampling
of Oversampled Audio"
http://yehar.com/blog/wp-content/uploads/2009/08/deip.pdf

See for example section "4.1 2-point, 3rd-order optimal" on page 19 -
as the amount of oversampling increases, the "optimal" interpolator
filter converges towards linear interpolation.

Quote:
"As can be seen from the following, 2-point, 3rd-order optimal
interpolators converge
to linear interpolation as the oversampling ratio increases. This is
an indication to
use linear interpolation at very high oversampling ratios."

Alternatively, see literature on multirate processing.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 19/08/2015, Ethan Duni  wrote:
>
> Obviously it will depend on the details of the application, it just seems
> kind of unbalanced on its face to use heavy oversampling and then the
> lightest possible fractional interpolator.

It should also be noted that the linear interpolation can be used for
the upsampling itself as well, reducing the cost of your oversampling,
not just as your fractional delay. A potential method to do fractional
delay is to upsample by a large factor, then delay by an integer
number of samples, and then downsample, without the use of an actual
fractional delay.

Say, if the fraction is 0.37, then you may upsample by 512x, then delay
the upsampled signal by round(512*0.37) = 189 samples, then downsample
back. So you did a fractional delay without using actual fractional
interpolation for the delay - you delayed by an integer number of
samples. You'll also have a little error - your delay is 0.369140625
instead of the desired 0.37, since it's quantized to 512 steps, so the
error is -0.000859375. I'm not saying this is ideal, I'm just saying
this is one possible way of doing a fractional delay.
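The arithmetic above can be checked directly - a sketch using exact rationals (values taken from the example; variable names are illustrative):

```python
from fractions import Fraction

M = 512                     # oversampling ratio
eta = Fraction(37, 100)     # desired fractional delay of 0.37 samples

L = round(M * eta)          # integer delay at the upsampled rate
realized = Fraction(L, M)   # delay actually realized, in original-rate samples
error = realized - eta      # quantization error of the delay

print(L, float(realized), float(error))  # -> 189 0.369140625 -0.000859375
```

With 512 steps per sample, the worst-case delay error is bounded by 1/1024 of a sample.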

This is discussed by JOS[1]:

"In discrete time processing, the operation Eq.(4.5) can be
approximated arbitrarily closely by digital upsampling by a large
integer factor M, delaying by L samples (an integer), then finally
downsampling by M, as depicted in Fig.4.7 [96]. The integers L and M
are chosen so that eta ~= L/M, where eta is the desired fractional
delay."

[1] Julius O. Smith, "Physical Audio Signal Processing"
https://ccrma.stanford.edu/~jos/pasp/Convolution_Interpretation.html

Ref. [96] is:
R. Crochiere, L. Rabiner, and R. Shively, ``A novel implementation
of digital phase shifters,'' Bell System Technical Journal, vol. 54,
pp. 1497-1502, Oct. 1975.

Abstract:
"A novel technique is presented for implementing a variable digital
phase shifter which is capable of realizing noninteger delays. The
theory behind the technique is based on the idea of first
interpolating the signal to a high sampling rate, then using an
integer delay, and finally decimating the signal back to the original
sampling rate. Efficient methods for performing these processes are
discussed in this paper. In particular, it is shown that the digital
phase shifter can be implemented by means of a simple convolution at
the sampling rate of the original signal."

In short, there are a zillion ways of implementing both oversampling
and fractional delays, and they can be combined arbitrarily.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
>To quote Olli Niemitalo:
>
>"The presented optimal interpolators make it possible to do
>transparent-quality resampling for even the most demanding
>applications with only 2x or 4x oversampling before the interpolation.
>However, in most cases simple linear interpolation combined with a
>very high-ratio oversampling (perhaps 512x) is the optimal tradeoff.
>The computational costs depend on the platform and the oversampling
>implementation."

You should include the rest of that paragraph:

"Therefore, which interpolator is the best is not concluded here. You must
first decide what quality you need (for example around 90dB modified SNR
for a transparency of 16 bits) and then see what alternatives the table
given in the summary has to suggest for the oversampling ratios you can
afford."

Also, earlier in the same reference:

"It is outside the scope of this paper to make guesses of the most
profitable oversampling ratio."

I don't dispute that linear fractional interpolation is the right choice if
you're going to oversample by a large ratio. The question is what is the
right balance overall, when considering the combined costs of
the oversampler and the fractional interpolator. Olli's paper isn't trying
to address that, he's leaving the oversampler considerations out of scope
and just showing what your best options are for a given oversampling ratio.
The approach there is that you start with a decision of what oversampling
ratio you can afford, and then use his tables to figure out what
interpolator you're going to need to get the desired quality. Note also the
implication that the oversampler is itself the main thing driving the
resource considerations.

The sentence about how 512x oversampling is the optimal trade-off in most
cases is a bit out of place there, considering that there is nothing in the
paper that establishes that, and several instances in which Olli makes it
explicit that such conclusions are out of scope of the paper.

>Apparently you're missing the whole point - it's the linear
>interpolation that makes the oversampling "cheap"(er) and not (as)
>"heavy".

You can leverage any finite interpolator to skip computations in an FIR
oversampler, not just linear. You get the most "skipping" in the case of
high oversampling ratio and linear interpolation, but the same trick still
works any time your oversampling ratio is greater than your interpolator
order.
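What's being described is the standard polyphase view of an FIR upsampler: to produce the one oversample the fractional interpolator needs, you evaluate only the matching phase branch, i.e. every M-th coefficient. A sketch (Python; illustrative, not from the thread):

```python
def polyphase_oversample(x, h, M, n, p):
    """One output of an M-x FIR zero-stuffing upsampler: base-rate index n, phase p.

    Equivalent to stuffing M-1 zeros between input samples and filtering with h,
    but only the ~len(h)/M taps of phase branch p are computed - every other tap
    would multiply a stuffed zero.
    """
    acc = 0.0
    for k in range(p, len(h), M):      # walk every M-th coefficient: branch p
        i = n - (k - p) // M           # corresponding base-rate input index
        if 0 <= i < len(x):
            acc += h[k] * x[i]
    return acc

# Triangle (linear-interpolation) kernel for M = 2:
h = [0.5, 1.0, 0.5]
print(polyphase_oversample([1.0, 3.0], h, 2, 1, 0))  # -> 2.0 (between samples)
print(polyphase_oversample([1.0, 3.0], h, 2, 1, 1))  # -> 3.0 (on a sample)
```

The skipping works for any M greater than the interpolator order, as the paragraph says; linear interpolation just maximizes it.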

The flipside is that the higher the oversampling ratio, the longer the FIR
oversampling filter needs to be in the first place. An FIR lowpass with
cutoff at a normalized frequency of 1/512 and >100dB stop band rejection is
going to require a quite high order. Move the cutoff up to 1/4 or 1/2 and
the required filter order drops dramatically. You can use IIR instead, but
then you have to compute all of the oversamples, not just the (tiny) subset
you require to drive the interpolator - and you have the same growth in the
required filter order as the oversampling ratio increases. And you get
phase distortion, of course.
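To put rough numbers on that growth, the common rule-of-thumb estimate N ~ A / (22 * df) (fred harris), with A the stop-band attenuation in dB and df the normalized transition width, gives (a sketch, not from the thread):

```python
def fir_order_estimate(atten_db, transition_bw):
    """harris rule of thumb: taps ~ attenuation_dB / (22 * normalized transition width)."""
    return atten_db / (22.0 * transition_bw)

# 100 dB stop band, transition band comparable to the cutoff:
print(round(fir_order_estimate(100, 1 / 512)))  # -> 2327 taps for cutoff near 1/512
print(round(fir_order_estimate(100, 1 / 4)))    # -> 18 taps for cutoff near 1/4
```

Only an estimate, but it shows the roughly linear growth of single-stage filter length with oversampling ratio that the paragraph describes.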

>Since memory is usually not an issue,

There are lots of dsp applications where memory is very much the main
constraint.

>the costs of your oversampling depends mostly on what kind of resampling
filters you
>use, which you can choose freely. If you use cheaper filters, it won't
>be that "heavy". If I use 128x oversampling with zero order hold or
>zero stuffing, that won't be "heavy", since I'm merely copying
>samples.

The performance of your oversampler will be garbage if you do that. And so
there will be no point in worrying about the quality of fractional
interpolation after that point, since the signal you'll be interpolating
will be full of aliasing to begin with. If you want high quality fractional
interpolation, then the oversampling stage needs to itself be high quality.
And that means it needs lots of resources, especially as the oversampling
ratio gets large. It's the required quality that drives the oversampler
costs (and filter design choices).

If you are willing to accept low quality in order to save on CPU (or maybe
there's nothing in the upper frequencies that you're worried about), then
there's no point in resampling at all. Just use a low order fractional
interpolator directly on the signal.

>It should also be noted that the linear interpolation can be used for
>the upsampling itself as well, reducing the cost of your oversampling,

Again, that would add up to a very low quality upsampler.

E



On Wed, Aug 19, 2015 at 2:06 PM, Peter S 
wrote:

> On 19/08/2015, Ethan Duni  wrote:
> >
> > Obviously it will depend on the details of the application, it just seems
> > kind of unbalanced on its face to use heavy oversampling and then the
> > lightest possible fractional interpolator. It's not clear to me that a
> > moderate oversampling combined with a fractional interpolator of modestly
> > high order wouldn't be a better use of resources.
>
> To quote Olli Niemitalo:
>
> "The presented optimal interpolators m

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 20/08/2015, Ethan Duni  wrote:
>
> I don't dispute that linear fractional interpolation is the right choice if
> you're going to oversample by a large ratio. The question is what is the
> right balance overall, when considering the combined costs of
> the oversampler and the fractional interpolator.

It's hard to tell in general. It depends on various factors, including:

- your desired/available CPU usage
- your desired/available memory usage and cache size
- the available instruction set of your CPU
- your desired antialias filter steepness
- your desired stopband attenuation

...and possibly other factors. Since these may vary largely, I think
it is impossible to tell in general. What I read in multirate
literature, and what is also my own experience, is that - when using a
relatively large oversampling ratio - then it's more cost-effective to
use linear interpolation at the higher stages (and that's Olli's
conclusion as well).

> You can leverage any finite interpolator to skip computations in an FIR
> oversampler, not just linear. You get the most "skipping" in the case of
> high oversampling ratio and linear interpolation, but the same trick still
> works any time your oversampling ratio is greater than your interpolator
> order.

But to a varying degree. An FIR interpolator is still "heavy" if you
skip samples where the coefficient is zero, compared to linear
interpolation (but it is also higher quality).

> The flipside is that the higher the oversampling ratio, the longer the FIR
> oversampling filter needs to be in the first place.

Nope. Ever heard of multistage interpolation? You may do a small FIR
stage (say, 2x or 4x), and then a linear stage (or another,
low-complexity FIR stage according to your desired specifications, or
even further stages). Seems you still don't understand that you can
oversample in multiple stages, and use a linear interpolator for the
higher stages of oversampling... Which is almost always cheaper than
using a single costly FIR filter to do the interpolation. You don't
need to use a 512x FIR at >100 dB stopband attenuation, that's just
plain wrong and stupid, and that's what all advanced multirate books
will also tell you.

Same for IIR case.

>>Since memory is usually not an issue,
>
> There are lots of dsp applications where memory is very much the main
> constraint.

Tell me, you don't have an extra half kilobyte of memory in a typical
computer? I hear, those have 8-32 GB of RAM nowadays, and CPU cache
sizes are like 32-128 KiB.

> The performance of your oversampler will be garbage if you do that. And so
> there will be no point in worrying about the quality of fractional
> interpolation after that point, since the signal you'll be interpolating
> will be full of aliasing to begin with.

Exactly. But it won't be "heavy"! So it's not the "oversampling" that
makes the process heavy, but rather, the interpolation / anti-aliasing
filter!!

> And that means it needs lots of resources, especially as the oversampling
> ratio gets large. It's the required quality that drives the oversampler
> costs (and filter design choices).

Which is exactly what I said. If your specification is low, you can
have a 128x oversampler that is (relatively) "low-cost". It's not the
oversampling ratio that matters most.

> If you are willing to accept low quality in order to save on CPU (or maybe
> there's nothing in the upper frequencies that you're worried about), then
> there's no point in resampling at all. Just use a low order fractional
> interpolator directly on the signal.

Seems you still miss the whole point of multistage interpolation. I
recommend you read some books / papers on multirate processing.

>>It should also be noted that the linear interpolation can be used for
>>the upsampling itself as well, reducing the cost of your oversampling,
>
> Again, that would add up to a very low quality upsampler.

You're wrong. Read Olli Niemitalo's paper again (and some multirate
books). When the oversampling ratio is high and the signal is already
oversampled, linear interpolation is (nearly) optimal. That implies a
multistage upsampler, which is typically a lot more computationally
efficient than a single-stage one. Just as the multirate signal
processing literature will tell you in detail.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
"3.2 Multistage
3.2.1 Can I interpolate in multiple stages?

Yes, so long as the interpolation ratio, L, is not a prime number.
For example, to interpolate by a factor of 15, you could interpolate
by 3 then interpolate by 5. The more factors L has, the more choices
you have. For example you could interpolate by 16 in:

- one stage: 16
- two stages: 4 and 4
- three stages: 2, 2, and 4
- four stages: 2, 2, 2, and 2

3.2.2 Cool. But why bother with all that?

Just as with decimation, the computational and memory requirements
of interpolation filtering can often be reduced by using multiple
stages.

3.2.3 OK, so how do I figure out the optimum number of stages, and the
interpolation ratio at each stage?

There isn't a simple answer to this one: the answer varies depending
on many things. However, here are a couple of rules of thumb:

- Using two or three stages is usually optimal or near-optimal.
- Interpolate in order of the smallest to largest factors. For
example, when interpolating by a factor of 60 in three stages,
interpolate by 3, then by 4, then by 5. (Use the largest ratio on the
highest rate.)"

http://dspguru.com/dsp/faqs/multirate/interpolation
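Following those rules of thumb, a tiny helper that factors an interpolation ratio into prime stages, smallest first (illustrative only; the prime factors can then be grouped into the two or three stages the FAQ recommends):

```python
def stage_factors(L):
    """Factor an interpolation ratio L into prime stages, smallest first."""
    factors, d = [], 2
    while d * d <= L:
        while L % d == 0:
            factors.append(d)
            L //= d
        d += 1
    if L > 1:
        factors.append(L)
    return factors

print(stage_factors(60))  # -> [2, 2, 3, 5]  (group as e.g. 3 x 4 x 5)
print(stage_factors(16))  # -> [2, 2, 2, 2]
```

A prime ratio cannot be split, per FAQ point 3.2.1.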
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
>Nope. Ever heard of multistage interpolation?

I'm well aware that multistage interpolation gives cost savings relative to
single-stage interpolation, generally. That is beside the point: the costs
of interpolation all still scale with oversampling ratio and quality
requirements, just like in single-stage interpolation. There's no magic to
multi-stage interpolation that avoids that relationship.

>that's just plain wrong and stupid, and that's what all advanced multirate
books
>will also tell you.

You've been told repeatedly that this kind of abusive, condescending
behavior is not welcome here, and you need to cut it out immediately.

>Tell me, you don't have an extra half kilobyte of memory in a typical
>computer?

There are lots of dsp applications that don't run on personal computers,
but rather on very lightweight embedded targets. Memory tends to be at a
premium on those platforms.

E










On Wed, Aug 19, 2015 at 3:55 PM, Peter S 
wrote:

> On 20/08/2015, Ethan Duni  wrote:
> >
> > I don't dispute that linear fractional interpolation is the right choice
> if
> > you're going to oversample by a large ratio. The question is what is the
> > right balance overall, when considering the combined costs of
> > the oversampler and the fractional interpolator.
>
> It's hard to tell in general. It depends on various factors, including:
>
> - your desired/available CPU usage
> - your desired/available memory usage and cache size
> - the available instruction set of your CPU
> - your desired antialias filter steepness
> - your desired stopband attenuation
>
> ...and possibly other factors. Since these may vary largely, I think
> it is impossible to tell in general. What I read in multirate
> literature, and what is also my own experience, is that - when using a
> relatively large oversampling ratio - then it's more cost-effective to
> use linear interpolation at the higher stages (and that's Olli's
> conclusion as well).
>
> > You can leverage any finite interpolator to skip computations in an FIR
> > oversampler, not just linear. You get the most "skipping" in the case of
> > high oversampling ratio and linear interpolation, but the same trick
> still
> > works any time your oversampling ratio is greater than your interpolator
> > order.
>
> But to a varying degree. A FIR interpolator is still "heavy" if you
> skip samples where the coefficient is zero, compared to linear
> interpolation (but it is also higher quality).
>
> > The flipside is that the higher the oversampling ratio, the longer the
> FIR
> > oversampling filter needs to be in the first place.
>
> Nope. Ever heard of multistage interpolation? You may do a small FIR
> stage (say, 2x or 4x), and then a linear stage (or another,
> low-complexity FIR stage according to your desired specifications, or
> even further stages). Seems you still don't understand that you can
> oversample in multiple stages, and use a linear interpolator for the
> higher stages of oversampling... Which is almost always cheaper than
> using a single costly FIR filter to do the interpolation. You don't
> need to use a 512x FIR at >100 dB stopband attenuation, that's just
> plain wrong and stupid, and that's what all advanced multirate books
> will also tell you.
>
> Same for IIR case.
>
> >>Since memory is usually not an issue,
> >
> > There are lots of dsp applications where memory is very much the main
> > constraint.
>
> Tell me, you don't have an extra half kilobyte of memory in a typical
> computer? I hear, those have 8-32 GB of RAM nowadays, and CPU cache
> sizes are like 32-128 KiB.
>
> > The performance of your oversampler will be garbage if you do that. And
> so
> > there will be no point in worrying about the quality of fractional
> > interpolation after that point, since the signal you'll be interpolating
> > will be full of aliasing to begin with.
>
> Exactly. But it won't be "heavy"! So it's not the "oversampling" that
> makes the process heavy, but rather, the interpolation / anti-aliasing
> filter!!
>
> > And that means it needs lots of resources, especially as the oversampling
> > ratio gets large. It's the required quality that drives the oversampler
> > costs (and filter design choices).
>
> Which is exactly what I said. If your specification is low, you can
> have a 128x oversampler that is (relatively) "low-cost". It's not the
> oversampling ratio that matters most.
>
> > If you are willing to accept low quality in order to save on CPU (or
> maybe
> > there's nothing in the upper frequencies that you're worried about), then
> > there's no point in resampling at all. Just use a low order fractional
> > interpolator directly on the signal.
>
> Seems you still miss the whole point of multistage interpolation. I
> recommend you read some books / papers on multirate processing.
>
> >>It should also be noted that the linear interpolation can be used for
> >>the upsampling itself as well, reducing the cost of your oversampling,
> >
> > Again, that would a

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 20/08/2015, Ethan Duni  wrote:
>>Nope. Ever heard of multistage interpolation?
>
> I'm well aware that multistage interpolation gives cost savings relative to
> single-stage interpolation, generally. That is beside the point: the costs
> of interpolation all still scale with oversampling ratio and quality
> requirements, just like in single-stage interpolation. There's no magic to
> multi-stage interpolation that avoids that relationship.

No one said there is. Yet linear interpolation *can* reduce savings in
a significant number of cases, which was the point. *How* much
oversampling you're actually going to use depends on your desired
specification. As has been said already.

> You've been told repeatedly that this kind of abusive, condescending
> behavior is not welcome here, and you need to cut it out immediately.

Sorry, you need to learn more about interpolation first. Doing 512x
upsampling in a single FIR step with >100 dB stopband attenuation as
you suggested is just plain stupid. There's no better way of
expressing it. You need to read more multirate literature. Also, I'm
not taking stylistic lessons from strangers.

> There are lots of dsp applications that don't run on personal computers,
> but rather on very lightweight embedded targets. Memory tends to be at a
> premium on those platforms.

Just as has been said already. Seems we're back at the "tell everything 5
times" stage again.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 20/08/2015, Peter S  wrote:
>
> No one said there is. Yet linear interpolation *can* reduce savings in

(*) correction: reduce costs
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
Ugh, I suppose this is what I get for attempting to engage with Peter S
again. Not sure what I was thinking...

E
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Peter S
On 20/08/2015, Ethan Duni  wrote:
> Ugh, I suppose this is what I get for attempting to engage with Peter S
> again. Not sure what I was thinking...

Well, you asked, "why use linear interpolation at all?" We told you
the advantages - fast computation, no coefficient table needed, and
(nearly) optimal for high oversampling ratios, and you were given some
literature.

If you don't believe it - well, not my problem... it's still true. #notmyloss
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
>rbj
>and it doesn't require a table of coefficients, like doing higher-order
Lagrange or Hermite would.

Robert I think this is where you lost me. Wasn't the premise that memory
was cheap, so we can store a big prototype FIR for high quality 512x
oversampling? So why are we then worried about the table space for the
fractional interpolator?

I wonder if the salient design concern here is less about balancing
resources, and more about isolating and simplifying the portions of the
system needed to support arbitrary (as opposed to just very-high-but-fixed)
precision. I like the modularity of the high oversampling/linear interp
approach, since it supports arbitrary precision with a minimum of
fussy variable components or arcane coefficient calculations. It's got a
lot going for it in software engineering terms. But I'm on the fence about
whether it's the tightest use of resources (for whatever constraints).
Typically those are the arcane ones that take a ton of debugging and
optimization :P

E



On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> On 8/19/15 1:43 PM, Peter S wrote:
>
>> On 19/08/2015, Ethan Duni  wrote:
>>
>>> But why would you constrain yourself to use first-order linear
>>> interpolation?
>>>
>> Because it's computationally very cheap?
>>
>
> and it doesn't require a table of coefficients, like doing higher-order
> Lagrange or Hermite would.
>
> The oversampler itself is going to be a much higher order
>>> linear interpolator. So it seems strange to pour resources into that
>>>
>> Linear interpolation needs very little computation, compared to most
>> other types of interpolation. So I do not consider the idea of using
>> linear interpolation for higher stages of oversampling strange at all.
>> The higher the oversampling, the more optimal it is to use linear in
>> the higher stages.
>>
>>
> here, again, is where Peter and i are on the same page.
>
> So heavy oversampling seems strange, unless there's some hard
>>> constraint forcing you to use a first-order interpolator.
>>>
>> The hard constraint is CPU usage, which is higher in all other types
>> of interpolators.
>>
>>
> for plugins or embedded systems with a CPU-like core, computation burden
> is more of a cost issue than memory used.  but there are other embedded DSP
> situations where we are counting every word used.  8 years ago, i was
> working with a chip that offered for each processing block 8 instructions
> (there were multiple moves, 1 multiply, and 1 addition that could be done
> in a single instruction), 1 state (or 2 states, if you count the output as
> a state) and 4 scratch registers.  that's all i had.  ain't no table of
> coefficients to look up.  in that case memory is way more important than
> wasting a few instructions recomputing numbers that you might otherwise
> just look up.
>
>
>
>
>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
> ___
> music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Theo Verelst

Hi,

A suggestion for those working on practical implementations - and an
attempt to lighten up the tone of the discussion. Some of the people
here I know from work on all kinds of (semi-)pro implementations, from
back when I wasn't even into more than basic DSP yet.


The tradeoffs about engineering and implementing on a platform with 
given limitations (or for advanced people making filters: possibly even 
trading off the computation properties required for a self-designed DSP 
unit) including memory use, required clock speed, and heat build-up (not 
so important nowadays for simple filters) can be more accurately met by 
being specific about the requirements in terms of the quality and the 
quantification of the error bounds, as in this case "how much high 
frequency loss can I prevent, at which engineering (or research) cost, 
and how many extra clock cycles of my DSP/CPU".


In some cases, it can pay to make the extra effort of separating your
audio frequency range into a couple of bands: say you make one
interpolator for low frequencies (e.g. simple zero-order), one for mid
frequencies (with some attention to artifacts in the oh-so-sensitive
3 kHz range), and one for frequencies above 10 kHz, where you can then
pay more attention to the way the damping of the higher frequencies
comes across than to the exact accuracy of the short-time convolution
filter you use. Such a limited multi-band approach costs a few filters
and a little thinking about how those bands will later add back up
properly to a decent signal, but it can make the audio quality higher
without requiring extreme resources.
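One way to guarantee that the bands "add back up properly" is a complementary split: subtract the lowpass band from the input, so that low + high reconstructs the signal exactly by construction. A minimal sketch (Python; the one-pole filter is chosen only for brevity, not from the post):

```python
def onepole_lowpass(x, a=0.1):
    """One-pole smoother: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    y, state = [], 0.0
    for s in x:
        state += a * (s - state)
        y.append(state)
    return y

x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
low = onepole_lowpass(x)
high = [s - l for s, l in zip(x, low)]      # complementary high band
recon = [l + h for l, h in zip(low, high)]  # low + high
assert recon == x                           # exact reconstruction by construction
```

A real design would use steeper crossovers and matched delays; the point is only that the complementary structure sums back to the input exactly, so each band can then be interpolated with a different-quality method.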


T.V.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
On 20/08/2015, Ethan Duni  wrote:
> But I'm on the fence about
> whether it's the tightest use of resources (for whatever constraints).

Then try and measure it yourself - you don't believe my words anyways.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
On 20/08/2015, Ethan Duni  wrote:
>
> Wasn't the premise that memory
> was cheap, so we can store a big prototype FIR for high quality 512x
> oversampling? So why are we then worried about the table space for the
> fractional interpolator?

For the record, wasn't it you who said memory is often a constraint?
Quote from you:
"There are lots of dsp applications where memory is very much the main
constraint."

So apparently your premise is that memory can be expensive and a
constraint, and now you ask "why are we worried about using extra
memory".

At least make up your mind, whether you consider memory cheap or expensive...

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
On 20/08/2015, Ethan Duni  wrote:
>
> Wasn't the premise that memory
> was cheap, so we can store a big prototype FIR for high quality 512x
> oversampling? So why are we then worried about the table space for the
> fractional interpolator?

And the other reason: the coefficients for a 2000-point windowed-sinc
FIR will not fit into the CPU registers. So the CPU will keep the
coefficients in the L1 cache, or, on a cache miss, in the L2 cache (or
even in RAM in the worst case). Your CPU will then spend a lot of time
waiting for the coefficients to arrive from the cache/memory, instead
of keeping them in fast and efficient registers. But that's just
another reason why your 2000-point 512x windowed-sinc filter is going
to be really slow.

"Memory constraint" is usually applied on 5 levels:

1) size of your registers (fastest memory)
2) size of your L1 cache (still quite fast)
3) size of your L2 cache (slower)
4) size of your RAM (about 10x slower than registers)
5) size of your hard disk (very slow)

The fastest algorithm is one that fits into the registers, because the
registers are the fastest memory. It's not that you do not have memory
available; it's that both register spilling and cache misses will
cause performance loss, because the CPU will be waiting for the data
to travel between the registers<->L1 cache<->L2 cache<->MMU<->RAM.

Here's how your CPU gets data from RAM[1]:

1. Get the pointer to the data being loaded. (Said pointer is
probably in a register.)
2. Send that pointer off to the MMU.
3. The MMU translates the virtual address in the pointer to a
physical address.
4. Send the physical address to the memory controller.
5. The memory controller figures out what bank of RAM the data is in
and asks the RAM.
6. The RAM figures out which particular chunk the data is in, and asks that chunk.
7. Step 6 may repeat a couple more times before narrowing it
down to a single array of cells.
8. Load the data from the array.
9. Send it back to the memory controller.
10. Send it back to the CPU.
11. Use it!

Naturally, this is quite a slow process. RAM is, in fact, very slow
compared to storing data in registers.

Source:
[1] 
https://www.mikeash.com/pyblog/friday-qa-2013-10-11-why-registers-are-fast-and-ram-is-slow.html
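The latency gap between a cache-resident and a RAM-resident working set can be observed directly from user code. A rough, machine-dependent sketch; the array sizes and access count are arbitrary choices, and the absolute numbers vary widely between CPUs:

```python
import time
import numpy as np

def time_per_access(n_elements, n_accesses=1_000_000, seed=0):
    """Average wall-clock time per random read from an array of n_elements doubles."""
    rng = np.random.default_rng(seed)
    data = rng.random(n_elements)                    # 8 bytes per element
    idx = rng.integers(0, n_elements, n_accesses)    # random access pattern
    t0 = time.perf_counter()
    data[idx].sum()                                  # the gather forces the reads
    return (time.perf_counter() - t0) / n_accesses

small = time_per_access(1 << 10)   # ~8 KB: fits in L1 on most CPUs
large = time_per_access(1 << 23)   # ~64 MB: spills far beyond the caches
print(f"small working set: {small * 1e9:.2f} ns/access")
print(f"large working set: {large * 1e9:.2f} ns/access")
```

The random index pattern matters: sequential access would let the hardware prefetcher hide most of the RAM latency.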

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
Let's analyze your suggestion of using a FIR filter with cutoff f =
0.5/512 = 0.0009765625 as the interpolation filter for 512x oversampling.

Here's the frequency response of a FIR filter of length 1000:
http://morpheus.spectralhead.com/img/fir512_1000.png

Closeup of the frequency range between 0-0.01 (cutoff marked with black line):
http://morpheus.spectralhead.com/img/fir512_1000_closeup.png

Apparently that's a pretty crappy anti-alias filter; the transition
band is very wide.

So let's try a FIR filter of length 5000:
http://morpheus.spectralhead.com/img/fir512_5000_closeup.png

Better, but still quite a lot of aliasing above the cutoff freq.

FIR filter of length 20,000:
http://morpheus.spectralhead.com/img/fir512_2_closeup.png

Now this starts to look like a proper-ish anti-alias filter.

The problem is - its length is 20,000 samples, so assuming 32-bit
float representation, the coefficients alone need about 80 KB of
memory... meaning that there's a high chance that it won't even fit
into the L1 cache, causing a lot of cache misses, so this filter will
be extra slow, since your CPU will be constantly waiting for the
coefficients to arrive from the L2 cache and/or RAM. Also consider how
much CPU power you need to do convolution with a 20,000 sample long
kernel at 512x oversampled rate... I bet you're not trying to do this
in realtime, are you?

So, that's not exactly the brightest way to do 512x oversampling,
unless you prefer to waste a lot of resources and spend a week on
upsampling. In that case, it is ideal.
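The transition-band behavior described above is easy to reproduce with a quick windowed-sinc design. This sketch uses a generic Blackman-windowed sinc (an assumption; not necessarily the design behind the linked plots), with the tap counts taken from the discussion:

```python
import numpy as np

def windowed_sinc(num_taps, cutoff):
    """Blackman-windowed sinc lowpass; cutoff is a fraction of the sample rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.blackman(num_taps)
    return h / h.sum()                      # normalize for unity gain at DC

def worst_stopband_db(h, cutoff, n_fft=1 << 18):
    """Worst gain at or above twice the cutoff, in dB."""
    H = np.abs(np.fft.rfft(h, n_fft))
    f = np.arange(len(H)) / n_fft
    return 20 * np.log10(np.maximum(H[f >= 2 * cutoff], 1e-12)).max()

cutoff = 0.5 / 512                          # anti-image cutoff for 512x oversampling
for taps in (1000, 5000, 20000):
    db = worst_stopband_db(windowed_sinc(taps, cutoff), cutoff)
    print(f"{taps:6d} taps: worst stopband gain {db:7.1f} dB")
```

With such a low cutoff, the window's transition width (roughly inversely proportional to the tap count) dominates: at 1000 taps the "stopband" is barely attenuated at all, while around 20,000 taps the design reaches the window's full attenuation.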

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
Here's a graph of performance in mflops of varying length FFT
transforms from the fftw.org benchmark page, for Intel Pentium 4:

http://morpheus.spectralhead.com/img/fftw_benchmark_pentium4.png

Afaik Pentium 4 has 16 KB of L1 data cache. If you check the graph,
around 8-16k the performance starts to drop drastically. I believe the
main reason for this is that the data doesn't fit into the L1 data
cache any more, which is 16 KB. You'll see similar graphs for most
other CPU types as well, there's a dropoff near the L1 cache size.

So using more memory is only free(ish) until a certain point - if your
data doesn't fit into the L1 cache any more, it will cause cache
misses and give you a performance penalty, because the CPU needs to
fetch the data from the L2 cache, which is several times slower. In
the graph, you can see ~3-4x performance difference between transforms
that fit into the L1 cache, and transforms that don't. For this
reason, very large filters have a notable performance penalty. The
coefficients for a FIR filter of length 20,000 will certainly not fit
into a 16 KB L1 data cache.

Here's the memory topology of AMD Bulldozer server microarchitecture:
https://upload.wikimedia.org/wikipedia/commons/9/95/Hwloc.png

Each core has a 16 KB L1 data cache. The further away you go from the
CPU core, the slower the memory access gets. L2 cache is 2 MB, and
there's a shared 8 MB L3 cache across cores. There's a 64 KB
instruction cache per two cores.

Similar cache architectures are common among computer processors
(sometimes without L3 cache). There's a document that discusses this
in depth:

Ulrich Drepper, "What Every Programmer Should Know About Memory"
http://morpheus.spectralhead.com/pdf/cpumemory.pdf

This document gives the following memory access times for Intel Pentium M:

Register: <= 1 cycle
L1 data cache: ~3 cycles
L2 cache: ~14 cycles
RAM: ~240 cycles

So this means, on a Pentium M, accessing data in the L1 cache is ~3x
slower, accessing data in the L2 cache is ~14x slower, and
accessing data in RAM is ~240x slower than accessing data in a
register. (Earlier I wrongly said RAM is about 10x slower; that's
closer to the L2 cache speed.) So if the data doesn't fit into the
L1 cache and needs to be fetched from the L2 cache, that's nearly 5x
slower on the Pentium M. A notable part of the L2 cache penalty is
caused by the physical limits of the universe - data travels in wires
at the speed of light, which is about 1 foot (30 cm) per nanosecond.
The larger the cache, the longer the wires, hence the longer the data
access delay.

For further details and detailed performance analysis, see the above paper.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Chris Santoro
As far as the oversampling + linear interpolation approach goes, I have to
ask... why oversample so much (512x)?

Purely from a rolloff perspective, it seems you can figure out what your
returns are going to be by calculating sinc^2 at (1/upsample_ratio) for a
variety of oversampling ratios. Here's the python code to run the numbers...

#-
import numpy as np

# normalized frequency points
X = [1.0/512.0, 1.0/256.0, 1.0/128.0, 1.0/64.0, 1.0/32.0, 1.0/16.0,
     1.0/8.0, 1.0/4.0]
# find attenuation at frequency points due to linear interpolation,
# worst case (halfway in between samples)
S = np.sinc(X)
S = 20*np.log10(S*S)

print(S)
#---
#---

and here's what it spits out for various attenuation values at what would
be nyquist in the baseband:

2X:   -7.8 dB
4X:   -1.8 dB
8X:   -0.44 dB
16X: -0.11 dB
32X: -0.027 dB
64X: -0.0069 dB
128X:   -0.0017 dB
256X:   -0.00043 dB
512X:   -0.00010 dB

If all you're trying to do is mitigate the rolloff of linear interp, it
looks like there's diminishing returns beyond 16X or 32X, where you're
talking about a tenth of a dB or less at nyquist, which most people can't
even hear in that range. Your anti-aliasing properties are going to be
determined by your choice of upsampling/windowed-sinc/anti-imaging filter
and how long you want to let that be. Or am I missing something? It just
doesn't seem worth it to go that high.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Ethan Duni
>If all you're trying to do is mitigate the rolloff of linear interp

That's one concern, and by itself it implies that you need to oversample by
at least some margin to avoid having a zero at the top of your audio band
(along with a transition band below that).

But the larger concern is the overall accuracy of the interpolator. At low
oversampling ratios, the sinc^2 rolloff of the linear interpolator response
isn't effective at squashing the signal images, so you end up with aliasing
corrupting your results. Hence the need for higher order interpolation at
lower oversampling ratios, as described in Ollie's paper. If you want to
get high SNR out of linear interpolation, you need to crank up the
oversampling considerably - far beyond what is needed just to avoid the
attenuation of high frequencies of the in-band component, in order to
sufficiently squash the images.
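The oversampling ratio needed for a given image suppression can be estimated directly from the linear interpolator's sinc^2 response. A sketch, assuming the worst-case tone sits right at the oversampled band edge (so its strongest image sits just below the sample rate):

```python
import numpy as np

def image_suppression_db(ratio):
    """Worst-case gain of the first spectral image for linear interpolation
    at a given oversampling ratio. The interpolator attenuates a tone's
    images by its sinc^2(f/fs) response; the worst-case tone is at the
    band edge f = fs/(2*ratio), whose first image lies at fs - f."""
    f = 1.0 / (2.0 * ratio)              # band-edge tone, in units of fs
    image = 1.0 - f                      # location of its first image
    return 20.0 * np.log10(np.sinc(image) ** 2)

for ratio in (2, 8, 32, 128, 512):
    print(f"{ratio:4d}x: {image_suppression_db(ratio):8.1f} dB")
```

The suppression improves only gradually with the ratio, which is why reaching high SNR purely with linear interpolation takes oversampling factors in the hundreds.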

E

On Thu, Aug 20, 2015 at 12:18 PM, Chris Santoro 
wrote:

> As far as the oversampling + linear interpolation approach goes, I have to
> ask... why oversample so much (512x)?
>
> Purely from a rolloff perspective, it seems you can figure out what your
> returns are going to be by calculating sinc^2 at (1/upsample_ratio) for a
> variety of oversampling ratios. Here's the python code to run the numbers...
>
> #-
> import numpy as np
>
> #normalized frequency points
> X = [1.0/512.0, 1.0/256.0, 1.0/128.0, 1.0/64.0, 1.0/32.0, 1.0/16.0,
> 1.0/8.0, 1.0/4.0]
> #find attenuation at frequency points due to linear interpolation worst
> case (halfway in between)
> S = np.sinc(X)
> S = 20*np.log10(S*S)
>
> print S
> #---
>
> and here's what it spits out for various attenuation values at what would
> be nyquist in the baseband:
>
> 2X:   -7.8 dB
> 4X:   -1.8 dB
> 8X:   -0.44 dB
> 16X: -0.11 dB
> 32X: -0.027 dB
> 64X: -0.0069 dB
> 128X:   -0.0017 dB
> 256X:   -0.00043 dB
> 512X:   -0.00010 dB
>
> If all you're trying to do is mitigate the rolloff of linear interp, it
> looks like there's diminishing returns beyond 16X or 32X, where you're
> talking about a tenth of a dB or less at nyquist, which most people can't
> even hear in that range. Your anti-aliasing properties are going to be
> determined by your choice of upsampling/windowed-sync/anti-imaging filter
> and how long you want to let that be. Or am I missing something? It just
> doesn't seem worth it go to that high.
>
> ___
> music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
Let me just add that in the case of a non-oversampled, linearly
interpolated fractional delay line with exactly 0.5 sample delay (the
position with the most high-frequency roll-off), the frequency response
formula is not sinc^2, but rather sin(2*PI*f)/(2*sin(PI*f)), as I
discussed earlier.

In that case, the results are slightly different:

# Tcl code -
set pi 3.141592653589793238
set freqs {1/4. 1/8. 1/16. 1/32. 1/64. 1/128. 1/256. 1/512. 1/1024.}
set amt 2
foreach freq $freqs {
set amp [expr sin(2*$pi*$freq)/(2*sin($pi*$freq))]
set db [expr 20.0 * log($amp)/log(10)]
puts "[format %-8s ${amt}X:][format %f $db] dB"
set amt [expr $amt*2]
}
# End of code --

Results:
2X: -3.01 dB
4X: -0.688 dB
8X: -0.169 dB
16X:-0.0419 dB
32X:-0.0105 dB
64X:-0.00262 dB
128X:   -0.00065 dB
256X:   -0.000164 dB
512X:   -0.000041 dB
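Note that sin(2*PI*f)/(2*sin(PI*f)) reduces to cos(PI*f) by the double-angle identity, which makes the half-sample-delay response easy to check. A Python equivalent of the Tcl above:

```python
import numpy as np

freqs = 1.0 / 2.0 ** np.arange(2, 11)        # 1/4, 1/8, ..., 1/1024
resp = np.sin(2 * np.pi * freqs) / (2 * np.sin(np.pi * freqs))
# double-angle identity: sin(2x) = 2*sin(x)*cos(x), so the response is cos(pi*f)
assert np.allclose(resp, np.cos(np.pi * freqs))
for f, db in zip(freqs, 20 * np.log10(resp)):
    print(f"{int(round(0.5 / f))}X:\t{db:.6f} dB")
```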
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
In the starting post, it was not specified that resampling was also
used - the question was:

"Is it possible to use a filter to compensate for high frequency
signal loss due to interpolation? For example linear or hermite
interpolation."

Without specifying that variable rate playback is involved, that could
be understood in various ways - for example, at first I thought the
interpolation was for the purpose of a (static or modulated)
fractional delay line. A third possible situation is using
linear/hermite interpolation as an upsampling filter in a 2^N
oversampler.

It was only specified 18 posts later, that the interpolation is used
for variable pitch playback.

These three situations all are different, and different formulas apply...

And the combination of oversampling and linear/hermite interpolation
can also be meant in multiple ways.
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Peter S
In the case of variable pitch playback with interpolation, here are
the frequency responses:

http://musicdsp.org/files/other001.gif
(graphs by Olli Niemitalo)

In this case, there's no zero at the original Nyquist freq; rather,
there are zeros at the original sampling rate and its multiples.

So it's useful to specify what you mean by "high frequency signal loss
due to interpolation", because that term is ambiguous and can mean
various things.

In this graph, the signal frequency seems to be 250 Hz, so the graph
shows the equivalent of about 22000/250 = 88x oversampling. At that
oversampling rate, the gain of the alias images of linear interpolation
is about -84 dB. The slow rolloff of the aliasing is what may
necessitate high oversampling ratios for high SNR. (This was not
mentioned in the question in this thread, but is relevant.)

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Ethan Duni
>In this graph, the signal frequency seems to be 250 Hz, so this graph
>shows the equivalent of about 22000/250 = 88x oversampling.

That graph just shows the frequency responses of various interpolation
polynomials. It's not related to oversampling.

E

On Thu, Aug 20, 2015 at 5:40 PM, Peter S 
wrote:

> In the case of variable pitch playback with interpolation, here are
> the frequency responses:
>
> http://musicdsp.org/files/other001.gif
> (graphs by Olli Niemitalo)
>
> In this case, there's no zero at the original Nyquist freq, rather
> there are zeros at the original sampling rate and its multiplies.
>
> So it's useful to specify what you mean by "high frequency signal loss
> due to interpolation", beacause that term is ambiguous and can mean
> various things.
>
> In this graph, the signal frequency seems to be 250 Hz, so this graph
> shows the equivalent of about 22000/250 = 88x oversampling. At that
> oversampling rate, gain of alias images of linear interpolation is -84
> dB. High amounts of oversampling for high SNR ratios may be
> necessitated by the slow rolloff of aliasing. (This was not mentioned
> in the question in this thread, but is relevant.)
>
> -P
> ___
> music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni  wrote:
>>In this graph, the signal frequency seems to be 250 Hz, so this graph
>>shows the equivalent of about 22000/250 = 88x oversampling.
>
> That graph just shows the frequency responses of various interpolation
> polynomials. It's not related to oversampling.

Creating a 22000 Hz signal from a 250 Hz signal by interpolation is
*exactly* upsampling - the sampling rate changes by a factor of 88x.
It's not bandlimited interpolation (using a windowed sinc
interpolator), hence there is a lot of aliasing above Nyquist.
Regardless, it's still oversampling - the resulting signal is
sampled at an 88x higher rate than the original. It's equivalent
to creating a 3,880,800 Hz signal from a 44100 Hz signal.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Upsampling means, that the sampling rate increases. So if you have a
250 Hz signal, and create a 22000 Hz signal from it, that is - by
definition - upsampling.

That's *exactly* what upsampling means... You insert new samples
between the original ones, and interpolate between them (using
whatever interpolation filter of your preference).

And that is often used synonymously with 'oversampling', and that's
what happens in an "oversampled" D/A converter. (Though 'oversampling'
has a different meaning in A/D context.)

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
>*exactly* upsampling

That is not what is shown in that graph. The graph simply shows the
continuous-time frequency response of the interpolation polynomials,
graphed up to 22kHz. No resampling is depicted, or the frequency responses
would show the aliasing associated with that. It's just showing the sinc^2
response of the linear interpolator, and similar for the other polynomials.
This is what you'd get if you used those interpolation polynomials to
convert a 250Hz sampled signal into a continuous time signal, not a
discrete time signal of whatever sampling rate.

E

On Fri, Aug 21, 2015 at 2:09 AM, Peter S 
wrote:

> On 21/08/2015, Ethan Duni  wrote:
> >>In this graph, the signal frequency seems to be 250 Hz, so this graph
> >>shows the equivalent of about 22000/250 = 88x oversampling.
> >
> > That graph just shows the frequency responses of various interpolation
> > polynomials. It's not related to oversampling.
>
> Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
> *exactly* upsampling - the sampling rate changes by a factor of 88x.
> It's not bandlimited interpolation (using a windowed sinc
> interpolator), hence there is a lot of aliasing above Nyquist.
> Irregardless, it's still oversampling - the resulting signal is
> sampled with a 88x higher frequency than the original. It's equivalent
> to creating a 3,880,800 Hz signal from a 44100 Hz signal.
>
> -P
> ___
> music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni  wrote:
>>Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
>>*exactly* upsampling
>
> That is not what is shown in that graph. The graph simply shows the
> continuous-time frequency response of the interpolation polynomials,
> graphed up to 22kHz. No resampling is depicted, or the frequency responses
> would show the aliasing associated with that.

It shows *exactly* the aliasing
http://morpheus.spectralhead.com/img/interpolation_aliasing.png

There are about 88 alias images visible on the graph.
The linear interpolation curve is not "smooth", so it contains aliasing.

> It's just showing the sinc^2
> response of the linear interpolator, and similar for the other polynomials.

If the signal you interpolate is white noise, and the spectrum of the
signal is a flat spectrum rectangle like the one displayed, then after
resampling, you get *exactly* the spectrum you see on the graph,
showing 88 alias images.

Proof:
I created 60 seconds of white noise sampled at 500 Hz, then resampled
it to 44.1 kHz using linear interpolation. After the upsampling, it
sounds like this:

http://morpheus.spectralhead.com/wav/noise_resampled.wav

Its spectrum looks like this:
http://morpheus.spectralhead.com/img/noise_resampled.png

Looks familiar? Oh, it's the *exact* same graph! (Minus some
difference above 20 kHz, due to my soundcard's anti-alias filter.) It
is an FFT graph of the upsampled white noise, and it shows *exactly*
the aliasing. Good morning!
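The same experiment can be reproduced without a soundcard in a few lines. `np.interp` performs exactly the linear interpolation in question; the duration and the band edges below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
fs_in, fs_out = 500, 44100
noise = rng.standard_normal(fs_in * 10)                  # 10 s of noise at 500 Hz

# resample to 44.1 kHz with linear interpolation
t_out = np.arange(int(len(noise) * fs_out / fs_in)) / fs_out
t_in = np.arange(len(noise)) / fs_in
x = np.interp(t_out, t_in, noise)

# average spectrum level per band, in dB (Hann window against leakage)
spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f = np.fft.rfftfreq(len(x), 1 / fs_out)

def band_db(lo, hi):
    return 20 * np.log10(spec[(f >= lo) & (f < hi)].mean())

print("baseband (0-250 Hz):  ", round(band_db(0, 250), 1), "dB")
print("first image (250-750):", round(band_db(250, 750), 1), "dB")
print("images near 10 kHz:   ", round(band_db(9750, 10250), 1), "dB")
```

The image bands follow the sinc^2 envelope, with nulls at multiples of the original 500 Hz rate.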

> This is what you'd get if you used those interpolation polynomials to
> convert a 250Hz sampled signal into a continuous time signal, not a
> discrete time signal of whatever sampling rate.

Nope. You get the same graph if you sample that continuous time signal
at a 44.1 kHz sampling rate (with some further aliasing from the
sampling). Just as I've shown.

Besides, I think the graph was created via numerical means using FFT,
because it has noise at the low amplitudes (marked on the image).
Therefore, it doesn't show a continuous-time sinc^2 graph, because
that wouldn't be noisy.

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Let's repeat the same with a 50 Hz sine wave, sampled at 500 Hz, then
linearly interpolated and resampled at 44.1 kHz:

http://morpheus.spectralhead.com/img/sine_aliasing.png

The resulting alias frequencies are at: 450 Hz, 550 Hz, 950 Hz, 1050
Hz, 1450 Hz, 1550 Hz, 1950 Hz, 2050 Hz, 2450 Hz, 2550 Hz, ...

I think it should be obvious that these are all alias frequencies of
50 Hz, since if you sample any of these sinusoids at 500 Hz rate, they
will all alias to 50 Hz. Hence, they are - by definition - aliases of
the 50 Hz sinusoid.

Welcome to sampling theory 101.
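Those alias frequencies are just k*fs +/- f0, and the fold-back claim is easy to verify:

```python
# aliases of a 50 Hz tone under a 500 Hz sample rate: k*500 +/- 50
fs, f0 = 500, 50
aliases = sorted(k * fs + s * f0 for k in range(1, 6) for s in (1, -1))
print(aliases)
# -> [450, 550, 950, 1050, 1450, 1550, 1950, 2050, 2450, 2550]

# each one folds back to 50 Hz when sampled at 500 Hz
assert all(min(a % fs, fs - a % fs) == f0 for a in aliases)
```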

-P
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>It shows *exactly* the aliasing

It shows the aliasing left by linear interpolation into the continuous time
domain. It doesn't show the additional aliasing produced by then delaying
and sampling that signal. I.e., the images that would get folded back onto
the new baseband, disturbing the sinc^2 curve. This is how we end up with a
zero at Nyquist when we do half-sample delay, for example. And also how we
end up with a perfectly flat response if we do the trivial resampling
(original rate, no delay).

Those differences would be quite small for resampling to 44.1kHz with no
delay, since the oversampling ratio is considerable, so you'd have to look
carefully to see them. This is a big hint that they are not portrayed:
Ollie knows what he is doing, so if he wanted to illustrate the effects of
the resampling, he would have constructed a scenario where they are easily
visible. And probably mentioned a second sample rate, explicitly shown both
the sinc^2 and its aliased counterpart, etc. The effect would be shown in a
visible, explicit manner, if that was what the graph was supposed to show.
But all of those things depend on parameters like oversampling ratio and
delay, so it would be a much more complicated picture. What we're shown
here is just the effects of polynomial interpolation to get to the
continuous time domain. The additional effects of delaying and then
sampling that signal back into the discrete time domain are not visible.

It seems that you have assumed that some resampling must be happening
because the graph only goes up to 22kHz. But that's just the range of the
graph, you don't need to do any resampling of anything to graph sinc^2 over
any particular range of frequencies.

>Oh, it's the *exact* same graph! (Minus some
>difference above 20 kHz, due to my soundcard's anti-alias filter.)
>You get the same graph if you sample that continuous time signal
>at a 44.1 kHz sampling rate (with some further aliasing from the
>sampling).

But that's not quite the exact same graph. And why are you putting a sound
card in the loop? This is all just digital processing in question here. You
don't even need to process any signals, there are analytic expressions for
all of the quantities involved. That's how Ollie generated graphs of them
without reference to any particular signals.

Again, the differences in question are small due to the high oversampling
ratio, so it's going to be quite difficult to see them in macroscopic
graphs like this. If you want to see the differences, just make a plot of
both sinc^2 and its aliased versions (for whatever oversampling ratios
and/or delays), and look at the differences. It won't be interesting for
high oversampling ratios and zero delay - which is exactly why that
scenario is a poor choice for illustrating the effects in question.

The fact that sampling a continuous time signal at a very high rate results
in a spectrum that closely resembles the continuous time spectrum (over the
sampled bandwidth) is beside the point. It just means that you're operating
in a regime where the effects are very hard to spot. It doesn't follow from
that resemblance that resampling must be occurring to get a plot of the
spectrum of the continuous time signal.

E

On Fri, Aug 21, 2015 at 10:51 AM, Peter S 
wrote:

> On 21/08/2015, Ethan Duni  wrote:
> >>Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is
> >>*exactly* upsampling
> >
> > That is not what is shown in that graph. The graph simply shows the
> > continuous-time frequency response of the interpolation polynomials,
> > graphed up to 22kHz. No resampling is depicted, or the frequency
> responses
> > would show the aliasing associated with that.
>
> It shows *exactly* the aliasing
> http://morpheus.spectralhead.com/img/interpolation_aliasing.png
>
> There are about 88 alias images visible on the graph.
> The linear interpolation curve is not "smooth", so it contains aliasing.
>
> > It's just showing the sinc^2
> > response of the linear interpolator, and similar for the other
> polynomials.
>
> If the signal you interpolate is white noise, and the spectrum of the
> signal is a flat spectrum rectangle like the one displayed, then after
> resampling, you get *exactly* the spectrum you see on the graph,
> showing 88 alias images.
>
> Proof:
> I created 60 seconds of white noise sampled at 500 Hz, then resampled
> it to 44.1 kHz using linear interpolation. After the upsampling, it
> sounds like this:
>
> http://morpheus.spectralhead.com/wav/noise_resampled.wav
>
> Its spectrum looks like this:
> http://morpheus.spectralhead.com/img/noise_resampled.png
>
> Looks familiar? Oh, it's the *exact* same graph! (Minus some
> difference above 20 kHz, due to my soundcard's anti-alias filter.) It
> is an FFT graph of the upsampled white noise, and it shows *exactly*
> the aliasing. Good morning!
>
> > This is what you'd get if you used those interpolation polynomials to
> > convert a 250Hz sampled signal into a continuous

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni  wrote:
>>It shows *exactly* the aliasing
>
> It shows the aliasing left by linear interpolation into the continuous time
> domain. It doesn't show the additional aliasing produced by then delaying
> and sampling that signal. I.e., the images that would get folded back onto
> the new baseband, disturbing the sinc^2 curve.

This image doesn't involve any fractional delay.

> Those differences would be quite small for resampling to 44.1kHz with no
> delay, since the oversampling ratio is considerable, so you'd have to look
> carefully to see them.

I think they're actually on the image:
http://morpheus.spectralhead.com/img/resampling_aliasing.png

They're hard to notice, because the other aliasing masks it.

> This is a big hint that they are not portrayed:
> Ollie knows what he is doing, so if he wanted to illustrate the effects of
> the resampling, he would have constructed a scenario where they are easily
> visible.

Since that image is not meant to "illustrate the effects of
resampling", but rather, to "illustrate the effects of interpolation",
*obviously* it doesn't focus on the aliasing from the resampling.

Therefore, it is not a "hint" at all, and your argument is invalid.

> And probably mentioned a second sample rate, explicitly shown both
> the sinc^2 and its aliased counterpart, etc. The effect would be shown in a
> visible, explicit manner, if that was what the graph was supposed to show.

The fact that this graph is not supposed to demonstrate the aliasing
from the resampling, does not mean that

1) it's not there on the graph (it's just barely visible)

2) the images of the continuous time interpolated signal are not
aliasing. That's also called aliasing!!!

> But all of those things depend on parameters like oversampling ratio and
> delay, so it would be a much more complicated picture.

Yes, and that's all entirely irrelevant here... Because the images in
the continuous time signal before the resampling are also called
aliasing!!! They're all aliases of the original spectrum, and they all
alias back to the original spectrum when sampled at the original
sampling rate! They're called aliasing even before you resample them!

> What we're shown
> here is just the effects of polynomial interpolation to get to the
> continuous time domain.

False. I've shown the FFT frequency spectra of actual upsampled signals.

> The additional effects of delaying and then
> sampling that signal back into the discrete time domain are not visible.

There was no delaying involved at all.

The effects of "sampling that signal back" are not visible, because
there's 88x oversampling, just as I pointed out. If you want, you can
repeat the same with less oversampling, and present us your results.

> It seems that you have assumed that some resampling must be happening
> because the graph only goes up to 22kHz. But that's just the range of the
> graph, you don't need to do any resampling of anything to graph sinc^2 over
> any particular range of frequencies.

I never said you need do to resampling of the continuous time signal
to graph sinc^2.

I said: the images in the frequency spectrum of the continuous time
signal are aliases of the original spectrum, and they alias back to
the original spectrum when the continuous time signal is sampled at
the original rate!

> But that's not quite the exact same graph.

It's essentially the exact same graph.

> And why are you putting a sound card in the loop?

That was the most convenient way to record the signal.

> This is all just digital processing in question here. You
> don't even need to process any signals, there are analytic expressions for
> all of the quantities involved.

That's just one way of drawing fancy graphs.
FFT is another way of drawing fancy graphs.
Why would I restrict myself to one method?

> That's how Olli generated graphs of them
> without reference to any particular signals.

How do you know? Prove it! I'm convinced he generated it via numerical
means and FFT.

> Again, the differences in question are small due to the high oversampling
> ratio, so it's going to be quite difficult to see them in macroscopic
> graphs like this.

Let me point out again that all those spectral images in the
continuous time signal before the resampling *are* aliasing, as
they're aliases of the original spectrum, and are *very* visible on
the graph!

> If you want to see the differences, just make a plot of
> both sinc^2 and its aliased versions (for whatever oversampling ratios
> and/or delays), and look at the differences. It won't be interesting for
> high oversampling ratios and zero delay - which is exactly why that
> scenario is a poor choice for illustrating the effects in question.

And you're entirely missing the point of what it is supposed to illustrate.

> The fact that sampling a continuous time signal at a very high rate results
> in a spectrum that closely resembles the continuous time spectrum (over the
> sampled bandwidth) is b

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
A sampled signal contains an infinite number of aliases:
http://morpheus.spectralhead.com/img/sampling_aliases.png

"the spectrum is replicated infinitely often in both directions"

These are called aliases of the spectrum. You do not need to "fold
back" the aliasing via resampling for them to become aliases...
They're aliases already - when sampled at the original rate, they
would all alias back to the original signal.

This is because exp(i*x) is periodic, and after 2*PI radians you get
back to the same frequency... hence, frequencies that are 2*PI apart
from each other are all "aliases"...

If you fail to understand that, I think you fail to understand even
the basics of sampling theory.
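The periodicity argument above can be checked numerically. A minimal sketch (the 50 Hz / 550 Hz / 500 Hz figures are illustrative): two sines whose frequencies differ by exactly the sampling rate produce identical sample sequences.

```python
import numpy as np

fs = 500.0                              # sampling rate in Hz
t = np.arange(1000) / fs                # 2 seconds of sample instants

x_50 = np.sin(2 * np.pi * 50.0 * t)     # 50 Hz sine
x_550 = np.sin(2 * np.pi * 550.0 * t)   # 550 Hz = 50 + fs: an alias

# The sampled sequences are identical to machine precision, because
# exp(2*pi*i*f*n/fs) is unchanged when f shifts by a multiple of fs.
print(np.max(np.abs(x_50 - x_550)))     # near machine precision
```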
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>Since that image is not meant to "illustrate the effects of
>resampling", but rather, to "illustrate the effects of interpolation",
>*obviously* it doesn't focus on the aliasing from the resampling.

So you agree that the effects of resampling are not shown, and all we see
is the spectrum of the continuous time polynomial interpolators.

I'm going to accept that concession of my point and move on. If I were you,
I'd quit haranguing people over irrelevancies and straw men, and generally
trying to pretend to superiority. Nobody is buying it, and it just
highlights your insecurity.

E

On Fri, Aug 21, 2015 at 1:24 PM, Peter S 
wrote:

> On 21/08/2015, Ethan Duni  wrote:
> >>It shows *exactly* the aliasing
> >
> > It shows the aliasing left by linear interpolation into the continuous
> time
> > domain. It doesn't show the additional aliasing produced by then delaying
> > and sampling that signal. I.e., the images that would get folded back
> onto
> > the new baseband, disturbing the sinc^2 curve.
>
> This image doesn't involve any fractional delay.
>
> > Those differences would be quite small for resampling to 44.1kHz with no
> > delay, since the oversampling ratio is considerable, so you'd have to
> look
> > carefully to see them.
>
> I think they're actually on the image:
> http://morpheus.spectralhead.com/img/resampling_aliasing.png
>
> They're hard to notice, because the other aliasing masks it.
>
> > This is a big hint that they are not portrayed:
> > Olli knows what he is doing, so if he wanted to illustrate the effects
> of
> > the resampling, he would have constructed a scenario where they are
> easily
> > visible.
>
> Since that image is not meant to "illustrate the effects of
> resampling", but rather, to "illustrate the effects of interpolation",
> *obviously* it doesn't focus on the aliasing from the resampling.
>
> Therefore, it is not a "hint" at all, and your argument is invalid.
>
> > And probably mentioned a second sample rate, explicitly shown both
> > the sinc^2 and its aliased counterpart, etc. The effect would be shown
> in a
> > visible, explicit manner, if that was what the graph was supposed to
> show.
>
> The fact that this graph is not supposed to demonstrate the aliasing
> from the resampling, does not mean that
>
> 1) it's not there on the graph (it's just barely visible)
>
> 2) the images of the continuous time interpolated signal are not
> aliasing. That's also called aliasing!!!
>
> > But all of those things depend on parameters like oversampling ratio and
> > delay, so it would be a much more complicated picture.
>
> Yes, and that's all entirely irrelevant here... Because the images in
> the continuous time signal before the resampling are also called
> aliasing!!! They're all aliases of the original spectrum, and they all
> alias back to the original spectrum when sampled at the original
> sampling rate! They're called aliasing even before you resample them!
>
> > What we're shown
> > here is just the effects of polynomial interpolation to get to the
> > continuous time domain.
>
> False. I've shown the FFT frequency spectra of actual upsampled signals.
>
> > The additional effects of delaying and then
> > sampling that signal back into the discrete time domain are not visible.
>
> There was no delaying involved at all.
>
> The effects of "sampling that signal back" are not visible, because
> there's 88x oversampling, just as I pointed out. If you want, you can
> repeat the same with less oversampling, and present us your results.
>
> > It seems that you have assumed that some resampling must be happening
> > because the graph only goes up to 22kHz. But that's just the range of the
> > graph, you don't need to do any resampling of anything to graph sinc^2
> over
> > any particular range of frequencies.
>
> I never said you need do to resampling of the continuous time signal
> to graph sinc^2.
>
> I said: the images in the frequency spectrum of the continuous time
> signal are aliases of the original spectrum, and they alias back to
> the original spectrum when the continuous time signal is sampled at
> the original rate!
>
> > But that's not quite the exact same graph.
>
> It's essentially the exact same graph.
>
> > And why are you putting a sound card in the loop?
>
> That was the most convenient way to record the signal.
>
> > This is all just digital processing in question here. You
> > don't even need to process any signals, there are analytic expressions
> for
> > all of the quantities involved.
>
> That's just one way of drawing fancy graphs.
> FFT is another way of drawing fancy graphs.
> Why would I restrict myself to one method?
>
> > That's how Ollie generated graphs of them
> > without reference to any particular signals.
>
> How do you know? Prove it! I'm convinced he generated it via numerical
> means and FFT.
>
> > Again, the differences in question are small due to the high oversampling
> > ratio, so it's going to be quite difficult to see them in macrosco

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Also, you even contradict yourself. You claim that:

1) Olli's graph was created by graphing sinc(x), sinc^2(x), and not via FFT.

2) The artifacts from the resampling would be barely visible, because
the oversampling rate is quite high.

So, if - according to 2) - the artifacts are not visible because the
oversampling is high and the graph doesn't focus on that, then how do
you know that 1) is true? You claim that the resampling artifacts
wouldn't be visible anyways.

If that's true, then how would you prove that FFT was not used for
creating Olli's graph?

Also, even you yourself acknowledge that

"It shows the aliasing left by linear interpolation into the
continuous time domain."

So, we agree that the graph shows aliasing, right?

I do not know where you get your idea of "additional aliasing" - it's
the very same aliasing, except the resampling folds it back...


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni  wrote:
> So you agree that the effects of resampling are not shown, and all we see
> is the spectrum of the continuous time polynomial interpolators.

I claim that they are aliases of the original spectrum.

Just as you also call them:

"It shows the aliasing left by linear interpolation into the
continuous time domain."

I never claimed anything about the folding back of alias frequencies
from the resampling at 44.1k rate.

> If I were you,
> I'd quit haranguing people over irrelevancies and straw men,

It's rather you who argue about irrelevant things and straw man
arguments - for that matter, I never claimed that the folded back
aliases from the resampling at 44.1k are visible on Olli's graph. It's
you who is forcing this irrelevant argument.

So maybe you should listen to your own advice, first.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
The details of how the graphs were generated don't really matter. The point
is that the only effect shown is the spectrum of the continuous-time
polynomial interpolator. The additional spectral effects of delaying and
resampling that continuous-time signal (to get fractional delay, for
example) are not shown. There is no "resampling" to be seen in the graphs.

>I claim that they are aliases of the original spectrum.

What we see in the graph is simply the spectra of the continuous-time
interpolators. Since the spectra extend beyond the original nyquist rate,
there will indeed be images of the original signal weighted by the
interpolator spectrum present in the continuous-time interpolated signal.
Whether those are ultimately expressed as aliases depends on what you then
do with that continuous time signal. If you resample to the original rate
(in order to implement a fractional delay, say), then those weighted images
will be folded back to the same place they came from. In that case, there
is no aliasing, you just end up with a modified frequency response of your
fractional interpolator. This is where the zero at Nyquist comes from when
we do a half-sample delay - the linear phase term corresponding to a
half-sample delay causes the signal images to become out of phase with each
other as you approach Nyquist, so they cancel out and you get a zero.
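The cancellation described here can be read directly off the two-tap fractional-delay filter y[n] = (1-d)*x[n] + d*x[n+1]. A minimal sketch (not from the original post; `d` is the fractional delay in samples):

```python
import numpy as np

def linear_interp_response(d, f):
    """Frequency response of the 2-tap linear interpolator
    y[n] = (1-d)*x[n] + d*x[n+1], with f in cycles/sample."""
    return (1 - d) + d * np.exp(2j * np.pi * f)

f = np.linspace(0, 0.5, 6)              # DC up to Nyquist
for d in (0.0, 0.25, 0.5):
    H = np.abs(linear_interp_response(d, f))
    print(d, np.round(H, 3))

# At Nyquist (f = 0.5) the magnitude is |1 - 2d|: the two contributing
# taps arrive exactly out of phase for d = 0.5 and cancel to zero.
```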

It is only if the interpolated continuous-time signal is resampled at a
different rate, or just used directly, that those signal images end up
expressed as aliases.

The rest of your accusations are your usual misreadings and straw men. I
won't be legitimating them by responding, and I hope you will accept that
and give up on these childish tactics. It would be better for everyone if
you could make a point of engaging in good faith and trying to stick to the
subject rather than attacking the intellects of others.

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 21/08/2015, Ethan Duni  wrote:
> The details of how the graphs were generated don't really matter.

Then why do you keep insisting that they're generated by plotting sinc^2(x)?

> The point
> is that the only effect shown is the spectrum of the continuous-time
> polynomial interpolator.

Which contains alias images of the original spectrum, which was my point.

> The additional spectral effects of delaying and
> resampling that continuous-time signal (to get fractional delay, for
> example) are not shown.

No one claimed there was fractional delay involved.

> There is no "resampling" to be seen in the graphs.

I recreated the exact same graph via resampling a signal, proving that
is one method of generating that graph.

>>I claim that they are aliases of the original spectrum.
>
> What we see in the graph is simply the spectra of the continuous-time
> interpolators.

Then how do you explain that taking noise sampled at 500 Hz, and
resampling it to 44.1 kHz gives an identical FFT graph?

How do you explain that a 50 Hz sine wave, resampled to 44.1 kHz,
contains alias frequencies at 450 Hz, 550 Hz, 950 Hz, 1050 Hz, 1450
Hz, 1550 Hz, etc. ? What are those, if not "aliases" ?
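The image frequencies listed here, and their sinc^2 weighting, can be reproduced with a few lines of NumPy. A sketch (the one-second duration is an arbitrary choice):

```python
import numpy as np

fs0, fs1 = 500, 44100
t0 = np.arange(fs0 + 1) / fs0           # 1 s at the 500 Hz source rate
t1 = np.arange(fs1) / fs1               # 1 s at the 44.1 kHz target rate
x1 = np.interp(t1, t0, np.sin(2 * np.pi * 50 * t0))  # linear upsampling

spec = np.abs(np.fft.rfft(x1))          # 1 Hz bin spacing
main = spec[50]                         # magnitude of the 50 Hz line
for f_img in (450, 550, 950, 1050, 1450, 1550):
    # each image at k*500 +/- 50 Hz is attenuated by sinc^2(f/fs0),
    # relative to the sinc^2(50/fs0) weighting of the 50 Hz line
    predicted = np.sinc(f_img / fs0) ** 2 / np.sinc(50 / fs0) ** 2
    print(f_img, spec[f_img] / main, round(predicted, 5))  # columns agree
```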

> Whether those are ultimately expressed as aliases depends on what you then
> do with that continuous time signal.

They're already "aliases"... You may filter them out, or do whatever
you want with them - that doesn't change the fact that they're aliases
of the original spectrum...

> If you resample to the original rate
> (in order to implement a fractional delay, say), then those weighted images
> will be folded back to the same place they came from.

That's exactly why they're called aliases.

> In that case, there
> is no aliasing, you just end up with a modified frequency response of your
> fractional interpolator.

Which is not the case on Olli's graph.

> It is only if the interpolated continuous-time signal is resampled at a
> different rate, or just used directly, that those signal images end up
> expressed as aliases.

Which was presented on Olli's graph, and that's what we're talking about.

> The rest of your accusations are your usual misreadings and straw men. I
> won't be legitimating them by responding, and I hope you will accept that
> and give up on these childish tactics. It would be better for everyone if
> you could make a point of engaging in good faith and trying to stick to the
> subject rather than attacking the intellects of others.

I spent (wasted?) a considerable amount of time creating various
demonstrations and FFT graphs showing my point. And you accuse me of
"childish tactics". You are lame.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>Which contains alias images of the original spectrum, which was my point.

There is no "original spectrum" pictured in that graph. Only the responses
of the interpolators. There is no reference to any input signal at all.

>No one claimed there was fractional delay involved.

Fractional delay is a primary topic of this thread, and a major motivation
for interest in polynomial interpolation in dsp in general.

>Then how do you explain that taking noise sampled at 500 Hz, and
>resampling it to 44.1 kHz gives an identical FFT graph?

We've been over this already. It's because you're resampling the signal at
such a large rate that the effects of the sampling are not visible. And
you've chosen a signal with a flat spectrum, so there are no features of
the signal spectrum visible - only the interpolator response. This goes
exactly to the point that no resampling effects are present in the graphs.
All we see are the interpolator spectra.

The fact that there are various ways to generate a graph of an interpolator
spectrum is entirely beside the point.

>> If you resample to the original rate
>> (in order to implement a fractional delay, say), then those weighted
images
>> will be folded back to the same place they came from.
>That's exactly why they're called aliases.

No, if you fold the images back to the same spots they originated, they are
not aliases. All of the frequencies are mapped back to their original
locations, none end up at other frequencies. Aliases are when signal images
end up in new locations corresponding to different frequency bands.

This distinction is crucial to understanding the operation of fractional
delay interpolators: it's why they don't produce aliasing at their output.
We just get a fractional delay filter with an imperfect spectrum. It's only
the frequency response of the interpolator that gets aliased (introducing
the zero at Nyquist for half-sample delay, for example), not the underlying
signal content. That's why it's important to graph the frequency response
of the interpolators directly, without worrying about signal spectra - to
figure out what happens in the final digital interpolator, you take that
continuous time interpolator spectrum, add a linear phase term for whatever
delay you want, and then alias it according to your new sampling rate to
get the final response of the digital interpolation filter. Signal aliasing
only results if that involves a change in sampling rate.
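The recipe in the last paragraph can be verified numerically for the linear interpolator: folding the sinc^2-weighted images back onto the baseband, each with its linear phase term, reproduces the response of the ordinary two-tap fractional-delay filter. A sketch with a truncated image sum (the truncation length is an arbitrary choice):

```python
import numpy as np

def folded_response(f, d, n_images=2000):
    """Sum the sinc^2-weighted spectral images of the continuous-time
    linear interpolator, delayed by d samples, folded to baseband.
    f is in cycles/sample at the original rate."""
    k = np.arange(-n_images, n_images + 1)[:, None]
    return np.sum(np.sinc(f + k) ** 2 * np.exp(2j * np.pi * (f + k) * d),
                  axis=0)

def two_tap_response(f, d):
    """Direct DTFT of y[n] = (1-d)*x[n] + d*x[n+1]."""
    return (1 - d) + d * np.exp(2j * np.pi * f)

f = np.linspace(0, 0.5, 11)
d = 0.5
# Truncation error only; the folded image sum and the two-tap filter
# are the same frequency response, including the zero at Nyquist.
print(np.max(np.abs(folded_response(f, d) - two_tap_response(f, d))))
```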

>Which is not the case on Olli's graph.

Right, Olli's graph shows only the intermediate stage, the spectrum of the
polynomial interpolator in continuous time. This is an analytical
convenience, we never actually produce any such signal. It's used as an
input to figure out what the final response of a digital interpolator based
on one of these polynomials will be. You can of course sample that at a
very high rate and so neglect the aliasing of the interpolator response,
but what is the point of that? You wouldn't use any of these interpolators
if what you're trying to do is upsample a 500Hz sampled signal to 44.1kHz,
the graphs show that they're crap for that.

>I spent (wasted?) a considerate amount of time creating various
>demonstrations and FFT graphs showing my point.

Your time would be better spent figuring out a point that is relevant to
what I'm saying in the first place. It is indeed a waste of your time to
invent equivalent ways to generate graphs, since that is not the point.

E




Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
Since you constantly derail this topic with irrelevant talk, let me
instead prove that

1) Olli Niemitalo's graph *is* equivalent to the spectrum of
upsampled white noise.
2) Olli Niemitalo's graph does *not* depict sinc(x)/sinc^2(x).

First I'll prove 1).

Using palette modification, I extracted the linear interpolation curve
from Olli's figure:
http://morpheus.spectralhead.com/img/other001b.gif

Then I sampled white noise at 500 Hz, and resampled it to 44.1 kHz
using linear interpolation. I got this spectrum:

http://morpheus.spectralhead.com/img/resampled_noise_spectrum.gif

To do a proper A/B comparison between the two spectra, I tried to
align and match them as much as possible, and created an animated GIF
file that blinks between the two graphs at a 500 ms rate:

http://morpheus.spectralhead.com/img/olli_vs_resampled_noise.gif

Although the alignment is not 100% exact, to my eyes, they look like
totally equivalent graphs.

This proves that upsampled white noise has the same spectrum as the
curve shown on Olli's graph for linear interpolation.

Second, I'll prove 2).

Have you actually looked at Olli Niemitalo's graph closely?
Here is proof that it is NOT a graph of sinc(x)/sinc^2(x):

http://morpheus.spectralhead.com/img/other001-analysis.gif

It is NOT sinc(x)/sinc^2(x), and you're blind as a bat if you do not see that.

Since I proved both 1) and 2), it is totally irrelevant what you say,
because none of what you could ever say would disprove this.

Sinc(x) does not have a jagged/noisy look, therefore it is 100%
certain it is not what you see on Olli's graph. Point proven, end of
discussion.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>1) Olli Niemitalo's graph *is* equivalent to the spectrum of
>upsampled white noise.

We've been over this repeatedly, including in the very post you are
responding to. The fact that there are many ways to produce a graph of the
interpolation spectrum is not in dispute, nor is it germane to my point.
I'm not sure what you're trying to accomplish by harping on this point,
while ignoring everything I say. Certainly, it is not convincing me that
you have some worthwhile response to my points, or even that you are
understanding them in the first place. It seems like you are trying to
avoid my point entirely, in favor of some imaginary dispute of your own
invention, which you think you can "win."

>Have you actually looked at Olli Niemitalo's graph closely?
>Here is proof that it is NOT a graph of sinc(x)/sinc^2(x):
>
>http://morpheus.spectralhead.com/img/other001-analysis.gif
>
>It is NOT sinc(x)/sinc^2(x), and you're blind as a bat if you do not see
that.

I have no idea what you think you are proving by scrutinizing graph
artifacts like that, but it's a preposterous approach to signal analysis on
its face.

It's also in extremely poor taste to use "retard" as a term of abuse.
People with mental disabilities have it hard enough already, without others
treating their status as an insult to be thrown around. I'd appreciate it
if you would compose yourself and refrain from these kinds of ugly
outbursts.

Meanwhile, it seems that you are suggesting that the spectrum of white
noise linearly interpolated up to a high oversampling rate is not sinc^2.
Is your whole point here that generating such a plot by FFTing the
interpolation of a finite segment of white noise will produce finite-data
artifacts in the resulting graph? Because that's not relevant to the
subject, and only goes to show that it's better to just graph the sinc^2
curve directly and so avoid all of the excess computation and finite-data
effects. Are you claiming that those wiggles in the graph represent
aliasing of the spectrum from resampling at 44.1kHz? If so, that is
unlikely.

You do agree that the spectrum of a continuous-time linear interpolator is
given by sinc^2, right?

E



Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Peter S
On 22/08/2015, Ethan Duni  wrote:
>
> We've been over this repeatedly, including in the very post you are
> responding to. The fact that there are many ways to produce a graph of the
> interpolation spectrum is not in dispute, nor is it germane to my point.

Earlier you disputed that there was any upsampling involved.
Apparently you change your mind quite often...

> It seems like you are trying to
> avoid my point entirely, in favor of some imaginary dispute of your own
> invention, which you think you can "win."

I claimed something, and you disputed it. I proved that what I
claimed, is true. Therefore, all your further arguments are invalid...
(and are boring)

> I have no idea what you think you are proving by scrutinizing graph
> artifacts like that

I am proving that what you see on the graph is not sinc(x) /
sinc^2(x), but rather some noisy curve, like the spectrum of upsampled
noise. Therefore, my original argument is correct.

> It's also in extremely poor taste to use "retard" as a term of abuse.

Well, if you do not see that the graph pictured on Olli's figure is
not sinc(x), then you're retarded.

> Meanwhile, it seems that you are suggesting that the spectrum of white
> noise linearly interpolated up to a high oversampling rate is not sinc^2.

Naturally, there's going to be some jaggedness in the spectrum because
of the noise. So, obviously, that is not sinc^2 then.

> Are you claiming that those wiggles in the graph represent
> aliasing of the spectrum from resampling at 44.1kHz? If so, that is
> unlikely.

Nope, the "wiggles" in the graph are from the noise.

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>Naturally, there's going to be some jaggedness in the spectrum because
>of the noise. So, obviously, that is not sinc^2 then.

So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
version thereof? My point was that there are no effects of resampling
visible in the graphs. That has nothing to do with exactly how the graphs
were generated, nor does insisting that the graphs are slightly noisy
address the point.

Indeed, you've already conceded that the resampling effects are not visible
in the graphs several posts back. It seems like you're just casting about
for some other issue that you can tell yourself you "won," and then call me
names, to feed your fragile ego. Honestly, it's a pretty sad spectacle and
I'm embarrassed for you. It really would be better for everyone - including
you - if you could interact in a good-faith, mature manner. Please make an
effort to start doing so, or you're pretty soon going to find that nobody
here will interact with you any more.

By the way, there's no reason for any jaggedness to appear in the plots,
given the lengths of data you were talking about. You might want to look
into spectral density estimation methods to trade off frequency resolution
and bin accuracy. It's pretty standard statistical signal processing 101
stuff. Producing a very smooth graph from a long enough segment of data is
straightforward, if you use appropriate techniques (not just one big FFT of
the whole thing, that won't ever get rid of the noisiness no matter how
much data you throw at it).
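The variance point is easy to demonstrate: a single long periodogram of white noise has bins with roughly 100% relative standard deviation regardless of record length, while averaging K short periodograms (Bartlett's method, the simplest of the estimation techniques alluded to here) reduces it by roughly sqrt(K). A sketch with arbitrary segment sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256 * 256)      # long white-noise record

# One big FFT: bins stay ~100% noisy no matter how long x is.
big = np.abs(np.fft.rfft(x)) ** 2 / len(x)

# Averaging 256 short periodograms: variance drops by roughly 1/256.
segs = x.reshape(256, 256)
avg = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2 / 256, axis=0)

for name, p in (("one big FFT", big), ("256 averaged", avg)):
    p = p[1:-1]                         # drop the DC and Nyquist bins
    print(name, np.std(p) / np.mean(p)) # ~1.0 raw, ~0.06 averaged
```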

E


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
On 22/08/2015, Ethan Duni  wrote:
>
> So your whole point is that it's not *exactly* sinc^2, but a slightly noisy
> version thereof? My point was that there are no effects of resampling
> visible in the graphs.

And you're wrong - all those 88 alias images are "effects of resampling"...

> That has nothing to do with exactly how the graphs
> were generated, nor does insisting that the graphs are slightly noisy
> address the point.

Well, it was *you* who insisted that it displays a graphed sinc^2
curve, and not a resampled signal... And you were wrong.

> Indeed, you've already conceded that the resampling effects are not visible
> in the graphs several posts back.

Aren't all those 88 alias images "effects of resampling"?
What are those, if not "effects of resampling"?

You claimed "no upsampling is involved", yet when I upsample noise, I
get exactly that graph. So it seems you were wrong.
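(Editor's note: the count of 88 images quoted above is consistent with the
ratio of the two sample rates, assuming the original noise was sampled at
500 Hz as discussed later in the thread; a quick check:)

```python
# Each image of the original spectrum occupies one input-rate-wide slice
# of the output spectrum, so the number of images that fit below the
# output sample rate is simply the ratio of the two rates.
fs_in = 500      # assumed original sample rate of the noise (Hz)
fs_out = 44100   # target sample rate (Hz)
print(fs_out / fs_in)  # → 88.2, i.e. roughly 88 full alias images
```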

> It seems like you're just casting about
> for some other issue that you can tell yourself you "won," and then call me
> names, to feed your fragile ego.

Well, if you do not see that the curve is NOT a graphed sinc^2, but
rather a noisy curve, seemingly from resampled noise, then you have
some underlying problem.

> Honestly, it's a pretty sad spectacle and I'm embarrassed for you.

I'm embarrassed for you.

> It really would be better for everyone - including
> you - if you could interact in a good-faith, mature manner. Please make an
> effort to start doing so, or you're pretty soon going to find that nobody
> here will interact with you any more.

Yet, for some reason, you have kept interacting with me for the past
22 mails you wrote. Maybe to "feed your fragile ego" and prove that
you "won"... (?)

> By the way, there's no reason for any jaggedness to appear in the plots,
> given the lengths of data you were talking about.

There *is* reason for jaggedness to appear in the plots. If you don't
believe it, try it yourself: take some white noise sampled at 500 Hz
and resample it to 44.1 kHz. The shorter the segment, the more jagged
the spectrum will look.
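(Editor's note: the experiment described here can be sketched as follows.
This is an illustration, not Olli's actual procedure; the segment length
and random seed are arbitrary. Linearly interpolating a short stretch of
500 Hz white noise up to 44.1 kHz yields a jagged spectrum whose envelope
follows sinc^2, with deep notches at multiples of 500 Hz:)

```python
import numpy as np

fs_in, fs_out = 500, 44100           # original and target sample rates (Hz)
n = 250                              # a short segment: 0.5 s of noise
rng = np.random.default_rng(0)
x = rng.standard_normal(n)           # white noise sampled at 500 Hz

# Linear interpolation onto the 44.1 kHz time grid.
t_in = np.arange(n) / fs_in
t_out = np.arange(int(n * fs_out / fs_in)) / fs_out
y = np.interp(t_out, t_in, x)

# Power spectrum of the upsampled signal: a noisy curve under a sinc^2
# envelope, with fs_out/fs_in = 88.2 images of the original spectrum and
# notches at multiples of fs_in (FFT resolution here is 2 Hz per bin).
p = np.abs(np.fft.rfft(y)) ** 2
notch = p[245:256].mean()            # bins around 500 Hz (a sinc^2 null)
passband = p[10:101].mean()          # low-frequency reference region
print(notch < 0.1 * passband)        # → True: deep notch at fs_in
```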

Besides, we do not know how much data Olli processed, so you cannot
say "there's no reason for jaggedness in his graph" when you do not
know how he derived it. So your argument is invalid again.

> Producing a very smooth graph from a long enough segment of data is
> straightforward if you use appropriate techniques (not just one big FFT of
> the whole thing, which won't ever get rid of the noisiness no matter how
> much data you throw at it).

Exactly. And that's what I used (spectral averaging over a long
segment), yet it is STILL noisy if the white-noise segment is not
very long.
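(Editor's note: the averaging both posters refer to can be sketched as
below; the segment length, window, and overlap are arbitrary choices, not
taken from the thread. Averaging windowed-segment periodograms Welch-style
shrinks the bin-to-bin jaggedness roughly by the square root of the number
of segments, which one big FFT of the whole signal never does:)

```python
import numpy as np

def avg_spectrum(y, seg_len=1024):
    """Welch-style estimate: window overlapping segments, FFT each,
    and average the squared magnitudes across segments."""
    win = np.hanning(seg_len)
    hop = seg_len // 2               # 50% overlap
    segs = [y[i:i + seg_len] * win
            for i in range(0, len(y) - seg_len + 1, hop)]
    return np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)

rng = np.random.default_rng(1)
y = rng.standard_normal(44100)       # 1 s of white noise at 44.1 kHz

one_big_fft = np.abs(np.fft.rfft(y)) ** 2   # jagged: per-bin std/mean ~ 1
averaged = avg_spectrum(y)                  # much smoother estimate

# Relative jaggedness (std/mean across bins) drops after averaging.
print(one_big_fft.std() / one_big_fft.mean(),
      averaged.std() / averaged.mean())
```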

So your argument is wrong again...

-P


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-22 Thread Peter S
So you claim that the figure depicts a sinc^2 curve, that it shows the
frequency response of a continuous-time linearly interpolated signal,
and that it involves no resampling.

That is false. That is not how Olli created his graph. First, the
continuous-time signal (which, by the way, already contains an
infinite number of aliases of the original spectrum) exists only in
your imagination; I'm almost 100% certain Olli made his graph by
resampling noise. The telltale signs of this are:

- the curves on the graph are jagged/noisy, typical of an averaged
white-noise spectrum
- if you look closely, the same jaggedness repeats at 2*PI frequency
intervals, showing that they are aliases of the original spectrum,
which was noisy.

Therefore, Olli's graph does *not* depict a continuous-time signal,
but rather a noisy signal that was resampled to 44.1 kHz. Therefore,
what you see on the graph is the artifacts of the resampling.

Therefore, all your arguments are invalid.

-P