Re: [music-dsp] Computational complexity of common DSP algorithms

2020-03-19 Thread Ethan Duni
On Thu, Mar 19, 2020 at 8:11 AM Dario Sanfilippo wrote: > > I believe that the time complexity of FFT is O(nlog(n)); would you perhaps > have a list or reference to a paper that shows the time complexity of > common DSP systems such as a 1-pole filter? > The complexity depends on the topology. T
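[Editor's note: for contrast with the O(n log n) FFT, a one-pole filter does a fixed amount of work per sample, so processing n samples is O(n). A minimal sketch (coefficient value is just a placeholder):]

import numpy as np

def one_pole_lowpass(x, a=0.99):
    """One-pole filter y[n] = (1 - a)*x[n] + a*y[n-1].
    Constant work per sample, so O(n) total for n samples."""
    y = np.zeros(len(x))
    state = 0.0
    for n in range(len(x)):
        state = (1.0 - a) * x[n] + a * state  # one multiply-add pair per sample
        y[n] = state
    return y

x = np.random.randn(48000)
y = one_pole_lowpass(x)  # cost is linear in the input length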

[music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-03-19 Thread Ethan Duni
On Tue, Mar 10, 2020 at 1:05 PM Richard Dobson wrote: > > Our ICMC paper can be found here, along with a few beguiling sound > examples: > > http://dream.cs.bath.ac.uk/SDFT/ So this is pretty cool stuff. I can't say I've digested the whole idea yet, but I had a couple of obvious questions. In

Re: [music-dsp] FIR blog post & interactive demo

2020-03-18 Thread Ethan Duni
my github with > audio if you want to hear whether or not there is quantization noise from > this FFT EQ or not (from changing the coefficients, etc). > > > cheers, > Eric Z > https://www.github.com/kardashevian > > On Fri, Mar 13, 2020 at 6:18 PM Ethan Duni wrote: > &g

Re: [music-dsp] FIR blog post & interactive demo

2020-03-13 Thread Ethan Duni
On Thu, Mar 12, 2020 at 9:35 PM robert bristow-johnson < r...@audioimagination.com> wrote: > i am not always persuaded that the analysis window is preserved in the > frequency-domain modification operation. It definitely is *not* preserved under modification, generally. The Perfect Reconstruct

Re: [music-dsp] FIR blog post & interactive demo

2020-03-12 Thread Ethan Duni
Hi Robert On Wed, Mar 11, 2020 at 4:19 PM robert bristow-johnson < r...@audioimagination.com> wrote: > > i don't think it's too generic for "STFT processing". step #4 is pretty > generic. > I think the part that chafes my intuition is more that the windows in steps #2 and #6 should "match" in s

Re: [music-dsp] FIR blog post & interactive demo

2020-03-11 Thread Ethan Duni
On Tue, Mar 10, 2020 at 8:36 AM Spencer Russell wrote: > > The point I'm making here is that overlap-add fast FIR is a special case > of STFT-domain multiplication and resynthesis. I'm defining the standard > STFT pipeline here as: > > 1. slice your signal into frames > 2. pointwise-multiply an a

Re: [music-dsp] FIR blog post & interactive demo

2020-03-10 Thread Ethan Duni
> On Mar 10, 2020, at 3:38 AM, Richard Dobson wrote: > > You can have windows when hop size is 1 sample (as used in the sliding phase > vocoder (SPV) proposed by Andy Moorer exactly 20 years ago, and the focus of > a research project I was part of around 2007). So long as the window is based

Re: [music-dsp] FIR blog post & interactive demo

2020-03-09 Thread Ethan Duni
It is certainly possible to combine STFT with fast convolution in various ways. But doing so imposes significant overhead costs and constrains the overall design in strong ways. For example, this approach: > On Mar 9, 2020, at 7:16 AM, Spencer Russell wrote: > >  > if you have an KxN STFT (

Re: [music-dsp] FIR blog post & interactive demo

2020-03-08 Thread Ethan Duni
On Sun, Mar 8, 2020 at 8:02 PM Spencer Russell wrote: > In fact, the the standard STFT analysis/synthesis pipeline is the same > thing as overlap-add "fast convolution" if you: > > 1. Use a rectangular window with a length equal to your hop size > 2. zero-pad each input frame by the length of you
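[Editor's note: a minimal overlap-add "fast FIR" sketch of the construction being discussed, under the stated conditions (rectangular frames of hop length, each zero-padded past the filter length); frame size, filter, and test signal are placeholders:]

import numpy as np

def ola_fast_fir(x, h, hop=256):
    """FIR filtering via FFT, overlap-add style: rectangular frames of length
    `hop`, zero-padded so each frame's circular convolution is actually linear."""
    m = len(h)
    nfft = int(2 ** np.ceil(np.log2(hop + m - 1)))   # FFT size >= hop + m - 1
    H = np.fft.rfft(h, nfft)                         # zero-padded filter spectrum
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), hop):
        frame = x[start:start + hop]
        Y = np.fft.rfft(frame, nfft) * H             # pointwise multiply in frequency
        seg = np.fft.irfft(Y, nfft)[:len(frame) + m - 1]
        y[start:start + len(seg)] += seg             # overlap-add the tails
    return y

# Sanity check against direct time-domain convolution
x = np.random.randn(4096)
h = np.random.randn(65)
assert np.allclose(ola_fast_fir(x, h), np.convolve(x, h))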

Re: [music-dsp] FIR blog post & interactive demo

2020-03-08 Thread Ethan Duni
> > If the system is suitably designed (e.g. correct window and overlap), > you can filter using an FFT and get identical results to a time domain > FIR filter (up-to rounding/precision limits, of course). The > appropriate window and overlap process will cause all circular > convolution artefact

Re: [music-dsp] FIR blog post & interactive demo

2020-03-08 Thread Ethan Duni
hallmark of the MDCT or > DCT type IV which is ubiquitous in audio codecs. > >> On Sun, Mar 8, 2020, 7:41 PM Ethan Duni wrote: >> FFT filterbanks are time variant due to framing effects and the circular >> convolution property. They exhibit “perfect reconstruction” if you

Re: [music-dsp] FIR blog post & interactive demo

2020-03-08 Thread Ethan Duni
FFT filterbanks are time variant due to framing effects and the circular convolution property. They exhibit “perfect reconstruction” if you design the windows correctly, but this only applies if the FFT coefficients are not altered between analysis and synthesis. If you alter the FFT coefficient

Re: [music-dsp] FIR blog post & interactive demo

2020-03-08 Thread Ethan Duni
It is physically impossible to build a causal, zero-phase system with non-trivial frequency response. Ethan > On Mar 7, 2020, at 7:42 PM, Zhiguang Eric Zhang wrote: > >  > Not to threadjack from Alan Wolfe, but the FFT EQ was responsive written in C > and running on a previous gen MacBook P

Re: [music-dsp] high & low pass correlated dither noise question

2019-06-27 Thread Ethan Duni
So as Nigel and Robert have already explained, in general you need to separately handle the spectral shaping and pdf shaping. This dither algorithm works by limiting to the particular case of triangular pdf with a single pole at z=+/-1. For that case, the state of the spectral shaping filter can be
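[Editor's note: a sketch of the trick as I read it (an assumption about the construction meant here): one fresh uniform draw per sample, with the previous draw reused as the shaping-filter state, giving triangular-pdf dither with first-order highpass or lowpass spectral shaping.]

import numpy as np

def shaped_tpdf_dither(n, highpass=True, rng=None):
    """Triangular-pdf dither with first-order spectral shaping, using only one
    new uniform random number per sample: d[n] = u[n] -/+ u[n-1]."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(-0.5, 0.5, n + 1)    # one fresh draw per output sample
    sign = -1.0 if highpass else 1.0
    return u[1:] + sign * u[:-1]         # difference -> highpass, sum -> lowpass

d = shaped_tpdf_dither(1 << 16)
# d has a triangular pdf on [-1, 1] and a rising (highpass) spectral shape.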

Re: [music-dsp] Who uses YIN or pYIN for pitch detection?

2019-03-06 Thread Ethan Duni
Looks like they use the Viterbi algorithm to get the pitch tracks. > On Mar 6, 2019, at 6:59 PM, Jay wrote: > > > Looks like there's a link to a python implementation on this topics page, > might provide some insights: > https://github.com/topics/pitch-tracking > > > > > > > > >> On W

Re: [music-dsp] Auto-tune sounds like vocoder

2019-01-16 Thread Ethan Duni
Aren't Auto-Tune and similar built on LPC vocoders? I had the impression that was publicly known (recalling magazine interviews/articles from the late 90s). The secret sauce being all the stuff required for pitch tracking, unvoiced segments, different tunings, vibrato, corner cases, etc. But as fa

Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-09 Thread Ethan Duni
applications? > > The background is still that I want to use a higher resolution for > ananlysis and > a lower resolution for synthesis in a phase vocoder. > > Am 08.11.2018 um 21:45 schrieb Ethan Duni: > > Not sure can get the odd bins *easily*, but it is certainly possible

Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-08 Thread Ethan Duni
> For instance we should have: > > X1 = x0 + (r - r*i)*x1 - i*x2 + (-r - r*i)*x3 - x4 + (-r + r*i)*x5 + i*x6 > + (r + r*i)*x7 > > where r=sqrt(1/2) > > Is it actually possible? It seems like the phase of the coefficients in > the Y's and Z's advance too quickly t

Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-05 Thread Ethan Duni
You can combine consecutive DFTs. Intuitively, the basis functions are periodic on the transform length. But it won't be as efficient as having done the big FFT (as you say, the decimation in time approach interleaves the inputs, so you gotta pay the piper to unwind that). Note that this is for nak

Re: [music-dsp] Antialiased OSC

2018-11-01 Thread Ethan Duni
Well you definitely want a monotonic, equal-amplitude crossfade, and probably also time symmetry. So I think raised sinc is right out. In terms of finer design considerations it depends on the time scale. For longer crossfades (>100ms), steady-state considerations apply, and you can design for fre

Re: [music-dsp] pitch shifting in frequency domain Re: FFT for realtime synthesis?

2018-10-28 Thread Ethan Duni
You should have a search for papers by Jean Laroche and Mark Dolson, such as "About This Phasiness Business" for some good information on phase vocoder processing. They address time scale modification mostly in that specific paper, but many of the insights apply in general, and you will find refere

Re: [music-dsp] Resampling

2018-10-06 Thread Ethan Duni
Alex, it sounds like you are confusing algorithmic latency with framing latency. At each frame, you take in 10ms (or whatever) of input, and then provide 10ms of output. This (plus processing time to generate the output) is the IO latency of the process. But the algorithm itself can add addition

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Ethan Duni
rbj wrote: >i, personally, would rather see a consistent method used throughout the MIDI keyboard range If you squint at it hard enough, you can maybe convince yourself that the naive sawtooth generator is just a memory optimization for low-frequency wavetable entries. I mean, it does a perfect jo

Re: [music-dsp] Playing a Square Wave

2018-06-13 Thread Ethan Duni
>The simple question that forced itself on me often, as I'm sure some can relate, >after having been used to all those early signal sources including a host of analog >synthesizers I had in the past, and a lot of music in various analog forms from standard >pop to G. Duke and Rose Royce to mention

Re: [music-dsp] Clock drift and compensation

2018-03-09 Thread Ethan Duni
Hi ben You don't need to evaluate the asin() - it's piecewise monotonic and symmetrical, so you can get the same comparison directly in the signal domain. Specifically, notice that x(n) = sin(2*pi*(1/4)*n) = [...0,1,0,-1,...]. So you get the same result just by checking ( abs( x[n] - x[n-1] ) ==
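[Editor's note: a quick numerical illustration of the identity being used; the exact comparison is cut off in this preview, so the check below is only illustrative of the ideal fs/4 case.]

import numpy as np

n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n)       # ideal fs/4 sinusoid: 0, 1, 0, -1, ...
diffs = np.abs(np.diff(x))             # consecutive-sample differences
print(np.allclose(diffs, 1.0))         # True: every step has magnitude exactly 1
# A drifting clock changes the effective frequency, so these differences drift
# away from 1 -- detectable directly in the signal domain, no asin() needed.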

Re: [music-dsp] Sampling theory "best" explanation

2017-09-11 Thread Ethan Duni
agination.com> wrote: > > > Original Message > Subject: Re: [music-dsp] Sampling theory "best" explanation > From: "Ethan Duni" > Date: Wed, September 6, 2017 4:49 pm > To: "rober

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Ethan Duni
te. > > Sorry you misinterpreted it. > > On Sep 7, 2017, at 5:34 AM, Ethan Duni wrote: > > Nigel Redmon wrote: > >As an electrical engineer, we find great humor when people say we can't > do impulses. > > I'm the electrical engineer who pointed out that imp

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Ethan Duni
s far as DAC not using > impulses, it's only because the shortcut is trivial. Like I said, audio > sample rates are slow, not that hard to do a good enough job for > demonstration with "close enough" impulses. > > Don't anyone get mad at me, please. Just sitting on

Re: [music-dsp] Sampling theory "best" explanation

2017-09-06 Thread Ethan Duni
ristow-johnson < r...@audioimagination.com> wrote: > > > Original Message > Subject: Re: [music-dsp] Sampling theory "best" explanation > From: "E

Re: [music-dsp] Sampling theory "best" explanation

2017-09-04 Thread Ethan Duni
that it's that partitions the space of input shifts, where if >>> you restrict yourself to shifts from a given partition you will see time >>> invariance (in a certain sense). >> >> >> So this to me is a good example of how thinking of discrete time signals

Re: [music-dsp] Sampling theory "best" explanation

2017-09-03 Thread Ethan Duni
he > definition of LTI they were taught. > > On Sep 1, 2017, at 3:46 PM, Ethan Duni wrote: > > Ethan F wrote: > >I see your nitpick and raise you. :o) Surely there are uncountably many > such functions, > >as the power at any apparent frequency can be distributed arbitrar

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Ethan Duni
place in radio >> applications. > > > I see your nitpick and raise you. :o) Surely there are uncountably many > such functions, as the power at any apparent frequency can be distributed > arbitrarily among the bands. > > -Ethan F > > > On Fri, Sep 1, 2017 at 5:30

Re: [music-dsp] Sampling theory "best" explanation

2017-09-01 Thread Ethan Duni
>I'm one of those people who prefer to think of a discrete-time signal as >representing the unique bandlimited function interpolating its samples. This needs an additional qualifier, something about the bandlimited function with the lowest possible bandwidth, or containing DC, or "baseband," or su

Re: [music-dsp] advice regarding USB oscilloscope

2017-03-08 Thread Ethan Duni
These PicoScopes look pretty cool :] As it happens I am just now trying to free up some garage space to get an electronics bench together. But it's coming up on 20 years since I last soldered and it's a whole different world with scopes now. So thanks for this thread! Also if anybody knows good r

Re: [music-dsp] ± 45° Hilbert transformer using pair of IIR APFs

2017-02-09 Thread Ethan Duni
> how do you quadrature modulate without Hilbert filters? > Perhaps I'm using the wrong term - the operation in question is just the multiplication of a signal by e^jwn. Or, equivalently, multiplying the real part by cos(wn) and the imaginary part by sin(wn) - a pair of "quadrature oscillators."

Re: [music-dsp] ± 45° Hilbert transformer using pair of IIR APFs

2017-02-09 Thread Ethan Duni
On Tue, Feb 7, 2017 at 6:49 AM, Ethan Fenn wrote: > So I guess the general idea with these frequency shifters is something > like: > > pre-filter -> generate Hilbert pair -> multiply by e^iwt -> take the real > part > > Am I getting that right? > Exactly, this is a single sideband modulation tec
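[Editor's note: a minimal sketch of that pipeline; the pre-filter is omitted and the shift amount is a placeholder. scipy's hilbert() returns the analytic signal, i.e. the Hilbert pair packed as real + j*imaginary.]

import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, shift_hz, fs):
    """Single-sideband frequency shift: analytic signal * e^{jwt}, take real part."""
    n = np.arange(len(x))
    analytic = hilbert(x)                               # x + j * Hilbert{x}
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * n / fs))

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440.0 * t)
y = frequency_shift(x, 100.0, fs)   # tone moves from 440 Hz to ~540 Hz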

Re: [music-dsp] Can anyone figure out this simple, but apparently wrong, mixing technique?

2016-12-10 Thread Ethan Duni
Ha this article made me chuckle. All the considerations about odd 8 bit audio formats! This method has his desired property that if all but one input is silent, you get the non-silent one at output without attenuation or other degradation. But the inclusion of the cross term makes it quite non-lin

Re: [music-dsp] Allpass filter

2016-12-08 Thread Ethan Duni
mpute the impulse response and truncate/window it to the desired length. FFT domain is generally not a good place to design filters - you're only controlling what happens at the bin centers, and all kinds of wild things can happen in between them. And it's difficult to account for the ci

Re: [music-dsp] Allpass filter

2016-12-07 Thread Ethan Duni
I'm not sure I quite follow what the goal is here? If you already have lp and p, then there aren't any additional calculations needed to obtain ap - it's an IIR filter with numerator coefficients given by lp, and denominator coefficients given by p. The pulse response is obtained by running the fil
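[Editor's note: assuming lp and p are the numerator and denominator coefficient arrays described above (the values below are placeholders), obtaining the impulse response really is just running the recursion on a unit impulse:]

import numpy as np
from scipy.signal import lfilter

lp = [0.5, -0.3, 0.1]        # placeholder numerator (the "lp" coefficients)
p = [1.0, -0.6, 0.2]         # placeholder denominator (the "p" coefficients)

impulse = np.zeros(256)
impulse[0] = 1.0
h = lfilter(lp, p, impulse)  # first 256 samples of the IIR impulse response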

Re: [music-dsp] efficient running max algorithm

2016-09-02 Thread Ethan Duni
Right aren't monotonic signals the worst case here? Or maybe not, since they're worst for one wedge, but best for the other? Ethan D On Fri, Sep 2, 2016 at 10:12 AM, Evan Balster wrote: > Just a few clarifications: > > - Local maxima and first difference don't really matter. The maximum > wedg

Re: [music-dsp] idealized flat impact like sound

2016-07-30 Thread Ethan Duni
So like a cascade of allpass filters then? Ethan D On Fri, Jul 29, 2016 at 11:10 AM, gm wrote: > > I think what I am looking for would be the perfect reverb. > > So that's the question reformulated: how could you construct a perfectly > flat short reverb? > > It's the same problem. > > > > Am 2

Re: [music-dsp] Anyone using unums?

2016-04-15 Thread Ethan Duni
>okay, this PDF was more useful than the other. once i got down to slide #31, > i could see the essential definition of what a "unum" is. >big deeel. >first of all, if the word size is fixed and known (and how would you know how far >to go to get to the extra meta-data: inexact bit, num expone

Re: [music-dsp] High quality really broad bandwidth pinknoise (ideally more than 32 octaves)

2016-04-14 Thread Ethan Duni
Any noise other than white noise is correlated, by definition. That's what "white noise" means - uncorrelated. Correlation in the time domain is equivalent to non-constant shape in the frequency domain. Ethan On Thu, Apr 14, 2016 at 12:24 PM, Seth Nickell wrote: > Maybe stupid question: Is pink

Re: [music-dsp] confirm a2ab2276c83b0f9c59752d823250447ab4b666

2016-03-29 Thread Ethan Duni
Supposing this is some griefer it seems reasonable to ignore them - but is there a possibility that this is a symptom of some kind of server attack or attempt to profile/track list members? I've never received any unsub notices myself but it is a little disconcerting that somebody persists at doin

Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-03 Thread Ethan Duni
Yeah zeroing out the state is going to lead to a transient, since the filter has to ring up. If you want to go that route, one possibility is to use two filters in parallel: one that keeps the old state/coeffs but gets zero input, and another that has zero state and gets the new input/coeffs. You
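[Editor's note: a sketch of that parallel-filter idea with placeholder coefficients: the old filter rings out its state on zero input while the new filter, starting from zero state, takes over the input, and the two outputs are summed.]

import numpy as np
from scipy.signal import lfilter, lfilter_zi

def switch_filter(x, old_b, old_a, old_state, new_b, new_a):
    """Change coefficients without a state-reset transient: old filter rings out
    (zero input, old state), new filter starts from zero state on the new input."""
    ring_out, _ = lfilter(old_b, old_a, np.zeros_like(x), zi=old_state)
    new_out, new_state = lfilter(new_b, new_a, x,
                                 zi=np.zeros(max(len(new_a), len(new_b)) - 1))
    return ring_out + new_out, new_state

# Placeholder biquad coefficients and a stand-in for the state at switch time
old_b, old_a = [0.2, 0.4, 0.2], [1.0, -0.5, 0.25]
new_b, new_a = [0.3, 0.0, -0.3], [1.0, -0.2, 0.1]
old_state = lfilter_zi(old_b, old_a)
x = np.random.randn(1024)
y, state = switch_filter(x, old_b, old_a, old_state, new_b, new_a)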

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-26 Thread Ethan Duni
Theo wrote: >I get there are certain statistical ideas involved. I wonder >however where those ideas in practice lead to, because >of a number of assumptions, like the "statistical variance" >of a signal. I get that a self correlation of a signal in some >normal definition gives an idea of the powe

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-25 Thread Ethan Duni
>Lastly, it's important to note that differentiation and semi-differentiation >filters are always approximate for sampled signals, and will tend to >exhibit poor behavior for very high frequencies and (for semi-differentiation) >very low ones. I'm not sure there's necessarily a problem at low freq

Re: [music-dsp] Time-domain noisiness estimator

2016-02-21 Thread Ethan Duni
Not a purely time-domain approach, but you can consider comparing sparsity in the time and Fourier domains. The idea is that periodic/tonal type signals may be non-sparse in the time domain, but look sparse in the frequency domain (because all of the energy is on/around harmonics). Similarly, trans
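[Editor's note: one possible concrete realization of that comparison -- my own sketch, since the post does not specify a sparsity measure: an L2/L1 ratio evaluated in both domains, with "noisy" indicated when the spectrum is not much sparser than the waveform.]

import numpy as np

def sparsity(v):
    """Crude sparsity measure: L2/L1 ratio (larger = energy in fewer samples/bins)."""
    v = np.abs(v)
    return np.linalg.norm(v, 2) / (np.linalg.norm(v, 1) + 1e-12)

def noisiness(frame):
    """Compare sparsity of the waveform against that of its magnitude spectrum."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return sparsity(frame) / (sparsity(spec) + 1e-12)   # near 1 => noise-like

n = np.arange(2048)
tone = np.sin(2 * np.pi * 0.01 * n)
noise = np.random.randn(2048)
print(noisiness(tone), noisiness(noise))   # tone scores much lower than noise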

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-19 Thread Ethan Duni
'd put it out there... E On Thu, Feb 18, 2016 at 5:27 PM, robert bristow-johnson < r...@audioimagination.com> wrote: > > > From: "Ethan Duni" > Date: Thu, February 18, 2016 4:48 pm > -

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-18 Thread Ethan Duni
eption that you know > what you're doing. Because it will be a long time before the perceptual > properties of any brightness metric can be clearly understood, I'll stick > to formulas whose mathematical properties are transparent -- these lend > themselves infinitely better t

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-18 Thread Ethan Duni
ute > higher moments with the differential brightness estimator. > > – Evan Balster > creator of imitone <http://imitone.com> > > On Thu, Feb 18, 2016 at 1:00 AM, Ethan Duni wrote: > >> >normalized to fundamental frequency or not >> >normalized (so th

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-17 Thread Ethan Duni
ion.com> wrote: > > > Original Message > Subject: Re: [music-dsp] Cheap spectral centroid recipe > From: "Ethan Duni" > Date: Wed, February 17, 2016 11:21 pm > To: "A discussion list fo

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-17 Thread Ethan Duni
>It's essentially computing a frequency median, >rather than a frequency mean as is the case >with the derivative-power technique described > in my original approach. So I'm wondering, is there any consensus on what is the best measure of central tendency for a music signal spectrum? There's the m
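[Editor's note: for concreteness, the two candidates under discussion sketched on a single frame; window length and test signal are arbitrary placeholders.]

import numpy as np

def spectral_centroid_and_median(frame, fs):
    """Spectral mean (centroid) and spectral median of one frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)   # power-weighted mean frequency
    cdf = np.cumsum(spec) / np.sum(spec)
    median = freqs[np.searchsorted(cdf, 0.5)]        # frequency splitting power in half
    return centroid, median

fs = 48000
n = np.arange(2048)
frame = np.sin(2 * np.pi * 1000 * n / fs) + 0.5 * np.sin(2 * np.pi * 5000 * n / fs)
print(spectral_centroid_and_median(frame, fs))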

Re: [music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-20 Thread Ethan Duni
>given the same order N for the polynomials, whether your basis set are > the Tchebyshevs, T_n(x), or the basis is just set of x^n, if you come up >with a min/max optimal fit to your data, how can the two polynomials be >different? Right, if you do that you'll end up with equivalent answers (to wi
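[Editor's note: a quick check of that basis-independence point. This uses least squares rather than a true minimax fit, but the conclusion is the same: the fitted polynomial is identical whichever basis you express it in.]

import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

x = np.linspace(-1, 1, 1000)
y = np.sin(np.pi * x / 2)                  # function to approximate

deg = 7
cheb_fit = Chebyshev.fit(x, y, deg)        # fit in the Chebyshev basis
poly_fit = Polynomial.fit(x, y, deg)       # fit in the monomial basis

# Convert both to plain power-series coefficients on the same domain:
c1 = cheb_fit.convert(kind=Polynomial).coef
c2 = poly_fit.convert().coef
print(np.allclose(c1, c2))                 # True: same polynomial either way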

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-16 Thread Ethan Duni
>> [..] the autocorrelation is >> >> <x[n] x[n+k]> = (1/3)*(1-P)^|k| >> >> (I checked that with a little MC code before posting.) So the power >> spectrum is (1/3)/(1 + (1-P)z^-1) The FT of (1/3)*(1-P)^|k| is (1/3)*(1-Q^2)/(1-2Qcos(w) + Q^2), where Q = (1-P). Looks like you were thinking of the expression for
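[Editor's note: a small Monte Carlo check of that autocorrelation, assuming the random sample-and-hold draws uniform values on [-1, 1] (variance 1/3) and accepts a new sample with probability P per step.]

import numpy as np

P, N = 0.3, 200_000
rng = np.random.default_rng(0)

# Random sample-and-hold: with probability P take a fresh uniform sample, else hold.
draws = rng.uniform(-1.0, 1.0, N)
keep = rng.random(N) < P
keep[0] = True
idx = np.maximum.accumulate(np.where(keep, np.arange(N), 0))  # index of last fresh draw
x = draws[idx]

for k in range(5):
    measured = np.mean(x[:N - k] * x[k:])
    predicted = (1.0 / 3.0) * (1.0 - P) ** k
    print(k, round(measured, 4), round(predicted, 4))   # these should agree closely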

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread Ethan Duni
- Original Message ---- > Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold > noise? > From: "Ethan Duni" > Date: Wed, November 11, 2015 7:36 pm > To: "robert bristow-johnson" > "A discussion list for

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread Ethan Duni
-- Original Message > Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold > noise? > From: "Ethan Duni" > Date: Wed, November 11, 2015 5:57 pm > To: "robert bristow-johnson" > "A discussion list

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-11 Thread Ethan Duni
mption come from? E On Tue, Nov 10, 2015 at 6:33 PM, robert bristow-johnson < r...@audioimagination.com> wrote: > > > Original Message > Subject: Re: [music-dsp] how to derive spectrum of random sample-and-hold > noise?

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-10 Thread Ethan Duni
>(Semi-)stationarity, I'd say. Ergodicity is a weaker condition, true, >but it doesn't then really capture how your usual L^2 correlative >measures truly work. I think we need both conditions, no? >Something like that, yes, except that you have to factor in aliasing. What aliasing? Isn't this pr

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Duni
alking about power per linear or angular frequency. And >>> there could be others I'm not thinking of maybe someone else can >>> shed more light here. >>> >> >> I multiplied the psd by 1/3 and as you can see from the graph it looks as >> though the F

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-05 Thread Ethan Duni
nd scaling, but that's the basic idea. https://en.wikipedia.org/wiki/Spectral_density_estimation E On Thu, Nov 5, 2015 at 2:00 AM, Ross Bencina wrote: > Thanks Ethan(s), > > I was able to follow your derivation. A few questions: > > On 4/11/2015 7:07 PM, Ethan Duni w

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-04 Thread Ethan Duni
Yep that's the same approach I just posted :] E On Tue, Nov 3, 2015 at 11:48 PM, Ethan Fenn wrote: > How about this: > > For a lag of t, the probability that no new samples have been accepted is > (1-P)^|t|. > > So the autocorrelation should be: > > AF(t) = E[x(n)x(n+t)] = (1-P)^|t| * E[x(n)^2]

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-04 Thread Ethan Duni
inite) x[n] = (r[n] wrote: > On 4/11/2015 5:26 AM, Ethan Duni wrote: > >> Do you mean the literal Fourier spectrum of some realization of this >> process, or the power spectral density? I don't think you're going to >> get a closed-form expression for the for

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ethan Duni
Wait, just realized I wrote that last part backwards. It should be: So in broad strokes, what you should see is a lowpass spectrum parameterized by P - for P very small, you approach a DC spectrum, and for P close to 1 you approach a spectrum that's flat. On Tue, Nov 3, 2015 at 10:26 AM,

Re: [music-dsp] how to derive spectrum of random sample-and-hold noise?

2015-11-03 Thread Ethan Duni
Do you mean the literal Fourier spectrum of some realization of this process, or the power spectral density? I don't think you're going to get a closed-form expression for the former (it has a random component). For the latter what you need to do is work out an expression for the autocorrelation fu

Re: [music-dsp] Fourier and its negative exponent

2015-10-05 Thread Ethan Duni
>the reason why it's merely convention is that if the minus sign was swapped >between the forward and inverse Fourier transform in all of the literature and >practice, all of the theorems would work the same as they do now. Note that in some other areas they do actually use other conventions. It's

Re: [music-dsp] warts in JUCE

2015-09-04 Thread Ethan Duni
I don't have a dog in any JUCE fight, but excluding the sample rate from an AudioSampleBuffer type object seems like good design to me. The reason is that system parameters that depend on the sample rate tend to be things like buffer sizes, and so changing them is typically not real-time thread saf

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-26 Thread Ethan Duni
ic on the lengths of the filters as a function of oversampling ratio. >i think we're on the same page. ain't we? Yeah, I was unclear on which scenario(s) the aliasing analysis was supposed to apply to. E On Wed, Aug 26, 2015 at 12:53 PM, robert bristow-johnson < r...@audioimaginati

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-25 Thread Ethan Duni
response. All I would add is that the general rate-change case has to contend with both aliasing suppression and imperfect fractional delay response, so I would expect a fractional-delay-only system to have looser requirements since the signal aliasing issue has been removed. E On Mon, Aug 24, 2015

Re: [music-dsp] [admin] list etiquette

2015-08-22 Thread Ethan Duni
Sounds good Douglass, I'm glad to see you taking the initiative on this matter. The list has generally been an oasis of pleasant, respectful behavior and informative discussions, and it's tragic that it has become so toxic lately. Thanks E On Sat, Aug 22, 2015 at 8:21 AM, Douglas Repetto wrote:

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
ne big FFT of the whole thing, that won't ever get rid of the noisiness no matter how much data you throw at it). E On Fri, Aug 21, 2015 at 5:47 PM, Peter S wrote: > On 22/08/2015, Ethan Duni wrote: > > > > We've been over this repeatedly, including in the very post yo

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
>1) Olli Niemiatalo's graph *is* equivalent of the spectrum of >upsampled white noise. We've been over this repeatedly, including in the very post you are responding to. The fact that there are many ways to produce a graph of the interpolation spectrum is not in dispute, nor is it germaine to my p

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
t of time creating various >demonstrations and FFT graphs showing my point. Your time would be better spent figuring out a point that is relevant to what I'm saying in the first place. It is indeed a waste of your time to invent equivalent ways to generate graphs, since that is not the point

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
The details of how the graphs were generated don't really matter. The point is that the only effect shown is the spectrum of the continuous-time polynomial interpolator. The additional spectral effects of delaying and resampling that continuous-time signal (to get fractional delay, for example) are

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
just highlights your insecurity. E On Fri, Aug 21, 2015 at 1:24 PM, Peter S wrote: > On 21/08/2015, Ethan Duni wrote: > >>It shows *exactly* the aliasing > > > > It shows the aliasing left by linear interpolation into the continuous > time > > domain. It do

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
trum of the continuous time signal. E On Fri, Aug 21, 2015 at 10:51 AM, Peter S wrote: > On 21/08/2015, Ethan Duni wrote: > >>Creating a 22000 Hz signal from a 250 Hz signal by interpolation, is > >>*exactly* upsampling > > > > That is not what is shown in that gra

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-21 Thread Ethan Duni
me signal, not a discrete time signal of whatever sampling rate. E On Fri, Aug 21, 2015 at 2:09 AM, Peter S wrote: > On 21/08/2015, Ethan Duni wrote: > >>In this graph, the signal frequency seems to be 250 Hz, so this graph > >>shows the equivalent of about 22000/250 = 88x

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Ethan Duni
>In this graph, the signal frequency seems to be 250 Hz, so this graph >shows the equivalent of about 22000/250 = 88x oversampling. That graph just shows the frequency responses of various interpolation polynomials. It's not related to oversampling. E On Thu, Aug 20, 2015 at 5:40 PM, Peter S wr

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-20 Thread Ethan Duni
>If all you're trying to do is mitigate the rolloff of linear interp That's one concern, and by itself it implies that you need to oversample by at least some margin to avoid having a zero at the top of your audio band (along with a transition band below that). But the larger concern is the overa

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
s the tightest use of resources (for whatever constraints). Typically those are the arcane ones that take a ton of debugging and optimization :P E On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson < r...@audioimagination.com> wrote: > On 8/19/15 1:43 PM, Peter S wrote: > >>

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
Ugh, I suppose this is what I get for attempting to engage with Peter S again. Not sure what I was thinking... E ___ music-dsp mailing list music-dsp@music.columbia.edu https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
targets. Memory tends to be at a premium on those platforms. E On Wed, Aug 19, 2015 at 3:55 PM, Peter S wrote: > On 20/08/2015, Ethan Duni wrote: > > > > I don't dispute that linear fractional interpolation is the right choice > if > > you're going to over

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
t should also be noted that the linear interpolation can be used for >the upsampling itself as well, reducing the cost of your oversampling, Again, that would add up to a very low quality upsampler. E On Wed, Aug 19, 2015 at 2:06 PM, Peter S wrote: > On 19/08/2015, Ethan Duni wrote: >

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
ples needed to drive the final fractional interpolator is well-taken, but I think I need to see a more detailed accounting of that to be convinced. E On Wed, Aug 19, 2015 at 1:00 PM, robert bristow-johnson < r...@audioimagination.com> wrote: > On 8/19/15 1:43 PM, Peter S wrote: > >

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-19 Thread Ethan Duni
constraint forcing you to use a first-order interpolator. >quite familiar with it. Yeah that was more for the list in general, to keep this discussion (semi-)grounded. E On Wed, Aug 19, 2015 at 9:15 AM, robert bristow-johnson < r...@audioimagination.com> wrote: > On 8/18/15 11:46 P

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
> for linear interpolation, if you are a delayed by 3.5 samples and you keep that delay constant, the transfer function is > > H(z) = (1/2)*(1 + z^-1)*z^-3 > >that filter goes to -inf dB as omega gets closer to pi. Note that this holds for symmetric fractional delay filter of any odd order (i.
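[Editor's note: the half-sample case quoted above can be checked directly; the pure z^-3 term only adds linear phase, so the magnitude response is set by the (1 + z^-1)/2 factor.]

import numpy as np
from scipy.signal import freqz

b = 0.5 * np.array([1.0, 1.0])               # (1/2)*(1 + z^-1): half-sample linear interp
w, h = freqz(b, [1.0], worN=1024)
print(20 * np.log10(np.abs(h[-1])))          # near omega = pi: a large negative dB value
print(20 * np.log10(np.abs(h[len(h) // 2]))) # about -3 dB at omega = pi/2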

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
frequency sinusoids has no bearing on the frequency response of fractional interpolators. I'd suggest dropping this whole derail, if you are no longer hung up on this point. E On Tue, Aug 18, 2015 at 2:08 PM, Peter S wrote: > On 18/08/2015, Ethan Duni wrote: > > > > That c

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
f simple arithmetic, the aliasing issue works like this: I add two numbers together, and find that the answer is X. I tell you X, and then ask you to determine what the two numbers were. Can you do it? E On Tue, Aug 18, 2015 at 2:13 PM, Peter S wrote: > On 18/08/2015, Ethan Duni wrote: >

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>In order to reconstruct that sinusoid, you'll need a filter with >an infinitely steep transition band. No, even an ideal reconstruction filter won't do it. You've got your +Nyquist component sitting right on top of your -Nyquist component. Hence the aliasing. The information has been lost in the

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>> well Peter, here again is where you overreach. assuming, without loss >> of generality that the sampling period is 1, the continuous-time signals >> >> x(t) = 1/cos(theta) * cos(pi*t + theta) >> >> are all aliases for the signal described above (and incorrectly as >> "contain[ing] no alia
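[Editor's note: a quick numerical illustration of that alias family: sampling x(t) = cos(pi*t + theta)/cos(theta) on the integer grid gives the identical sequence for every theta (away from theta = pi/2).]

import numpy as np

n = np.arange(16)
reference = np.cos(np.pi * n)                      # the plain (-1)^n sequence
for theta in (0.0, 0.3, 1.0, -0.7):
    samples = np.cos(np.pi * n + theta) / np.cos(theta)
    print(theta, np.allclose(samples, reference))  # True for every theta
# Distinct continuous-time sinusoids hitting the same sample values is exactly
# why a Nyquist-rate component cannot be uniquely reconstructed.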

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
eproduce a nyquist frequency sinusoid when you run it through a DAC. E On Tue, Aug 18, 2015 at 1:28 PM, Peter S wrote: > On 18/08/2015, Ethan Duni wrote: > >>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, > > -1... > > > > The sampling th

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
; wrote: > On 8/18/15 3:44 PM, Ethan Duni wrote: > >> >Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, >> -1... >> >> The sampling theorem requires that all frequencies be *below* the Nyquist >> frequency. Sampling signals at exactl

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Ethan Duni
>Assume you have a Nyquist frequency square wave: 1, -1, 1, -1, 1, -1, 1, -1... The sampling theorem requires that all frequencies be *below* the Nyquist frequency. Sampling signals at exactly the Nyquist frequency is an edge case that sort-of works in some limited special cases, but there is no e

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Ethan Duni
Yeah I am also curious. It's not obvious to me where it would make sense to spend resources compensating for interpolation rather than just juicing up the interpolation scheme in the first place. E On Mon, Aug 17, 2015 at 11:39 AM, Nigel Redmon wrote: > Since compensation filtering has been men

[music-dsp] This seems relevant to the list of late

2015-08-12 Thread Ethan Duni
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect E ___ music-dsp mailing list music-dsp@music.columbia.edu https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] about entropy encoding

2015-07-16 Thread Ethan Duni
address the source of your hostility, and also that you gain more insight into Information Theory. My apologies to the list for encouraging this unfortunate tangent. E On Thu, Jul 16, 2015 at 8:38 PM, Peter S wrote: > On 17/07/2015, Ethan Duni wrote: > > What are these better estim

Re: [music-dsp] about entropy encoding

2015-07-16 Thread Ethan Duni
ropy of the residual. The average rate produced by some actual coding system is an *upper bound* on the entropy rate of the random process in question. Again, I encourage you to slow the pace of your replies and instead try to write fewer, more concise posts with greater emphasis on clarity and prec

Re: [music-dsp] about entropy encoding

2015-07-16 Thread Ethan Duni
imation approach, not in the numerical implementation thereof. E On Thu, Jul 16, 2015 at 7:07 AM, Peter S wrote: > On 15/07/2015, Ethan Duni wrote: > > Right, this is an artifact of the approximation you're doing. The model > > doesn't explicitly understand periodicity

Re: [music-dsp] about entropy encoding

2015-07-16 Thread Ethan Duni
>This algorithm gives an entropy rate estimate approaching zero for any >periodic waveform, irregardless of the shape (assuming the analysis >window is large enough). But, it seems that it does *not* approach zero. If you fed an arbitrarily long periodic waveform into this estimator, you won't see

Re: [music-dsp] about entropy encoding

2015-07-15 Thread Ethan Duni
>I wondered a few times what a higher "entropy" estimate for a higher >frequency would mean according to this - I think it means that a >higher frequency signal needs a higher bandwidth channel to transmit, >as you need a transmission rate of 2*F to transmit a periodic square >wave of frequency F.
