>So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?

The variance of y[n]. For U(-1,1) this is 1/3. From your subsequent post it
sounds like you got this ironed out?
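Concretely, the scaling is just multiplying the unit-power psd[w] by Var(y). A quick numpy sanity check (the grid size here is an arbitrary choice of mine):

```python
import numpy as np

P = 0.9
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

# unit-power PSD from the quoted derivation (normalized so E[y[n]y[n]] = 1)
psd_unit = (1.0 - P**2) / (1.0 - 2.0 * P * np.cos(w) + P**2)

# for y[n] ~ U(-1,1): E[y^2] = Var(y) = (1 - (-1))^2 / 12 = 1/3
var_y = 2.0**2 / 12.0
psd = var_y * psd_unit

# sanity check: averaging the PSD over one period recovers ac[0] = Var(y)
power = psd.mean()
```

The mean over a uniform grid approximates (1/2pi) times the integral over one period, which is the inverse DTFT at lag 0, i.e. the process power.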

>What is the method that you used to go from ac[k] to psd[w]?
>Robert mentioned that psd was the Fourier transform of ac.
>Is this particular case a standard transform that you knew off the top of
>your head?

Yeah it's just the DTFT of the autocorrelation function. You can find that
one in a suitably complete table of transform pairs, or just evaluate it
directly by using the geometric series formula and a bit of manipulation as
Ethan F described (that's what I did, for old time's sake).
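If you'd rather not redo the geometric-series algebra, you can brute-force check the closed form against a truncated version of the DTFT sum (the truncation length K is an arbitrary choice; P^K is negligible well before K = 500):

```python
import numpy as np

P = 0.9
w = np.linspace(-np.pi, np.pi, 256)

# directly truncate psd(w) = sum_k P^|k| e^{-jwk}
K = 500
k = np.arange(-K, K + 1)
psd_sum = np.real(np.exp(-1j * np.outer(w, k)) @ (P ** np.abs(k)))

# closed form from summing the two one-sided geometric series
psd_closed = (1.0 - P**2) / (1.0 - 2.0 * P * np.cos(w) + P**2)
```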

Are you comparing this to a big FFT of a long sequence of samples from this
process (i.e., a periodogram)? The basic shape should be visible there, but
with quite a lot of noise: the number of frequency bins grows at the same
rate as the number of samples, so you have a constant number of samples per
bin and the estimation error never converges to zero. To really see the
effect you can use a more sophisticated spectral density estimate like
Welch's method. Basically you chop the signal into chunks (with length
determined by your desired frequency resolution) and then average the
squared FFT magnitudes of those chunks. That way you have a constant
frequency resolution and an increasing number of samples per bin, so the
result converges to the underlying PSD. There are more details with
windowing, overlap, and scaling, but that's the basic idea.

https://en.wikipedia.org/wiki/Spectral_density_estimation
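Here's a minimal numpy-only sketch of that chop-and-average idea (rectangular window, no overlap; the seed, N, and segment length are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 0.9            # hold probability, matching the quoted derivation
N = 1 << 18        # total number of samples
nseg = 1024        # segment length -- sets the frequency resolution

# simulate the sample-and-hold process x[n] = (r[n]<P) ? x[n-1] : y[n]
y = rng.uniform(-1.0, 1.0, N)   # ~U(-1,1)
hold = rng.random(N) < P
x = np.empty(N)
x[0] = y[0]
for i in range(1, N):
    x[i] = x[i - 1] if hold[i] else y[i]

# chop into non-overlapping segments and average the *squared* FFT
# magnitudes; |X|^2 / nseg is the periodogram of each segment
segs = x[: (N // nseg) * nseg].reshape(-1, nseg)
psd_est = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0) / nseg

# analytic PSD, scaled by the source power E[y^2] = 1/3
w = 2.0 * np.pi * np.fft.fftfreq(nseg)
psd_ref = (1.0 / 3.0) * (1.0 - P**2) / (1.0 - 2.0 * P * np.cos(w) + P**2)
```

With 256 segments the per-bin relative error is around 1/sqrt(256), about 6%, and it keeps shrinking as N grows while nseg stays fixed. In practice scipy.signal.welch handles the windowing, overlap, and scaling refinements for you.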

E



On Thu, Nov 5, 2015 at 2:00 AM, Ross Bencina <rossb-li...@audiomulch.com>
wrote:

> Thanks Ethan(s),
>
> I was able to follow your derivation. A few questions:
>
> On 4/11/2015 7:07 PM, Ethan Duni wrote:
>
>> It's pretty straightforward to derive the autocorrelation and psd for
>> this one. Let me restate it with some convenient notation. Let's say
>> there are a parameter P in (0,1) and 3 random processes:
>> r[n] i.i.d. ~U(0,1)
>> y[n] i.i.d. ~(some distribution with at least first and second moments
>> finite)
>> x[n] = (r[n]<P)?x[n-1]:y[n]
>>
>> Note that I've switched the probability of holding to P from (1-P), and
>> that the signal being sampled-and-held can have an arbitrary (if well
>> behaved) distribution. Let's also assume wlog that E[y[n]y[n]] = 1
>> (Scale the final results by the power of whatever distribution you
>> prefer).
>>
>
> So for y[n] ~U(-1,1) I should multiply psd[w] by what exactly?
>
>
>> Now, the autocorrelation function is ac[k] = E[x[n]x[n-k]]. Let's work
>> through the first few values:
>> k=0:
>> ac[0] = E[x[n]x[n]] = E[y[n]y[n]] = 1
>> k=1:
>> ac[1] = E[x[n]x[n-1]] = P*E[x[n-1]x[n-1]] + (1-P)*E[x[n-1]y[n]] =
>> P*E[y[n]y[n]] = P
>>
>> The idea is that P of the time, x[n] = x[n-1] (resulting in the first
>> term) and (1-P) of the time, x[n] is a new, uncorrelated sample from
>> y[n]. So we're left with P times the power (assumed to be 1 above).
>>
>> k=2:
>> ac[2] = P*P*E[x[n-2]x[n-2]] = P^2
>>
>> Again, we decompose the expected value into the case where x[n] = x[n-2]
>> - this only happens if both of the previous samples were held
>> (probability P^2). The rest of the time - if there was at least one
>> sample event - we have uncorrelated variables and the term drops out.
>>
>> So, by induction and symmetry, we conclude:
>>
>> ac[k] = P^abs(k)
>>
>> And so the psd is given by:
>>
>> psd[w] = (1 - P^2)/(1 - 2Pcos(w) + P^2)
>>
>
> What is the method that you used to go from ac[k] to psd[w]? Robert
> mentioned that psd was the Fourier transform of ac. Is this particular case
> a standard transform that you knew off the top of your head?
>
> And is psd[w] in exactly the same units as the magnitude squared spectrum
> of x[n] (i.e. |ft(x)|^2)?
>
>
>> Unless I've screwed up somewhere?
>>
>
> A quick simulation suggests that it might be okay:
>
> https://www.dropbox.com/home/Public?preview=SH1_1.png
>
>
> But I don't seem to have the scale factors correct. The psd has
> significantly smaller magnitude than the fft.
>
> Here's the numpy code I used (also pasted below).
>
> https://gist.github.com/RossBencina/a15a696adf0232c73a55
>
> The FFT output is scaled by (2.0/N) prior to computing the magnitude
> squared spectrum.
>
> I have also scaled the PSD by (2.0/N). That doesn't seem quite right to me
> for two reasons: (1) the scale factor is applied to the linear FFT, but to
> the mag squared PSD and (2) I don't have the 1/3 factor anywhere.
>
> Any thoughts on what I'm doing wrong?
>
> Thanks,
>
> Ross.
>
>
> P.S. Pasting the numpy code below:
>
> # ---------------8<--------------------------------
> # see
> https://lists.columbia.edu/pipermail/music-dsp/2015-November/000424.html
> # psd derivation due to Ethan Duni
> import numpy as np
> from numpy.fft import fft, fftfreq
> import matplotlib.pyplot as plt
>
> N = 16384*2*2*2*2*2 # FFT size
>
> y = (np.random.random(N) * 2) - 1 # ~U(-1,1)
> r = np.random.random(N) # ~U(0,1)
> x = np.empty(N) # (r[n]<P)?x[n-1]:y[n]
>
> # hold process with probability of P for holding
> P = 0.9
> x[0] = 0
> for i in range(1,N):  # range(1,N-1) would leave x[N-1] uninitialized
>     if r[i] < P:
>         x[i] = x[i-1]
>     else:
>         x[i] = y[i]
>
> # FFT of x
> window = np.hamming(N)
> spectrum = fft(x*window,N)
> magnitudeSq = ((2.0/N) * np.abs(spectrum)) ** 2
>
> # Calculate PSD
> w = fftfreq(N)*np.pi*2 # w from 0 to pi, then -pi to 0
> psd = ((1.0 - P**2)/(1.0 - 2*P*np.cos(w) + P**2))
>
> psd = psd*(2.0/N) # not sure about this
>
> plt.plot(w, magnitudeSq)
> plt.plot(w, psd, color='red')
> plt.xscale('log')
> plt.yscale('log')
> plt.show()
> # ---------------8<--------------------------------
>
> _______________________________________________
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>