On 11.05.22 08:15, Carsten Andrich wrote:
Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]
I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to, while I'm not 100% sure whether there is an easy way to
generate time-domain Gaussian i.i.d. noise in the frequency domain.
If you know how, please let me know.
Got an idea on that, will report back.
Disclaimer: I'm an electrical engineer, not a mathematician, so someone
trained in the latter craft should verify my train of thought. Feedback
is much appreciated.
Turns out the derivation of the DFT of time-domain white noise is
straightforward. The DFT formula
X[k] = \sum_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}
illustrates that a single frequency-domain sample is just a sum of
scaled time-domain samples. Now let x[n] be N normally distributed
samples with zero mean and variance \sigma^2, thus each X[k] is a sum of
scaled i.i.d. random variables. According to the central limit theorem,
the sum of these random variables is normally distributed.
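As a quick numerical sanity check (a rough numpy/scipy sketch of my own,
not part of the argument), the real part of a fixed DFT bin, collected
over many independent noise realizations, should indeed be consistent
with a normal distribution:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  N, runs, k = 256, 10000, 37              # FFT size, realizations, arbitrary bin
  x = rng.normal(0.0, 1.0, size=(runs, N)) # i.i.d. N(0, 1) time-domain noise
  Xk = np.fft.fft(x, axis=1)[:, k]         # DFT bin k of every realization
  print(stats.normaltest(Xk.real))         # large p-value: no evidence against normality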
To ascertain the variance of X[k], rely on the linearity of variance
[1], i.e., Var(a*X+b*Y) = a^2*Var(X) + b^2*Var(Y) + 2ab*Cov(X,Y), and
the fact that the covariance of uncorrelated variables is zero, so,
separating into real and imaginary components, one gets:
Var(Re{X[k]}) = \sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
              = \sum_{n=0}^{N-1} \sigma^2 * cos(2*\pi*k*n/N)^2
              = \sigma^2 * \sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
              = ...
              = \sigma^2 * \sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2
The sums of the squared cos(…) and sin(…) terms both equal N/2, because
cos(x)^2 = (1 + cos(2x))/2, sin(x)^2 = (1 - cos(2x))/2, and the sum of
cos(4*\pi*k*n/N) over n vanishes unless 2k is a multiple of N. The
exceptions are k=0 and k=N/2 (the latter only for even N), where the
cos(…) sum is N and the sin(…) sum is 0, so the DC and Nyquist
components of X[k] are purely real, as is to be expected for a
real-valued x[n].
Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k] has the following
variance:
                { N * \sigma^2,   k = 0
Var(Re{X[k]}) = { N * \sigma^2,   k = N/2
                { N/2 * \sigma^2, else

                { 0,              k = 0
Var(Im{X[k]}) = { 0,              k = N/2
                { N/2 * \sigma^2, else
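A rough numpy sketch of my own to spot-check these variances (\sigma = 1):

  import numpy as np

  rng = np.random.default_rng(1)
  N, runs, sigma = 64, 50000, 1.0
  X = np.fft.fft(rng.normal(0.0, sigma, size=(runs, N)), axis=1)
  print(X.real.var(axis=0)[[0, N//2, 1]])  # expect ~[N, N, N/2] * sigma^2
  print(X.imag.var(axis=0)[[0, N//2, 1]])  # expect ~[0, 0, N/2]  * sigma^2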
Therefore, a normally distributed time-domain sequence x[n] ~ N(0,
\sigma^2) with N samples has the following DFT (note: N is the number of
samples, N(0, 1) is a normally distributed random variable with mean 0
and variance 1, and each N(0, 1) below is drawn independently):
       { N^0.5 * \sigma * N(0, 1),                        k = 0
X[k] = { N^0.5 * \sigma * N(0, 1),                        k = N/2
       { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)),   else
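So the spectrum can be drawn directly. A rough numpy sketch of that idea
(my own illustration, assuming even N; np.fft.irfft supplies the
Hermitian-symmetric negative frequencies and the 1/N normalization):

  import numpy as np

  rng = np.random.default_rng(2)
  N, sigma = 1024, 1.0                     # N must be even for this sketch
  X = np.empty(N//2 + 1, dtype=complex)    # non-negative frequency bins only
  X[0]      = N**0.5 * sigma * rng.normal()            # DC bin, purely real
  X[N//2]   = N**0.5 * sigma * rng.normal()            # Nyquist bin, purely real
  X[1:N//2] = (N/2)**0.5 * sigma * (rng.normal(size=N//2 - 1)
                                    + 1j * rng.normal(size=N//2 - 1))
  x = np.fft.irfft(X, n=N)                 # real x[n], i.i.d. N(0, sigma^2)
  print(x.mean(), x.var())                 # expect ~0 and ~sigma^2

Statistically, the output should be indistinguishable from drawing x[n]
directly with rng.normal, which is the whole point of the exercise.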
Greenhall has the same results, with two noteworthy differences [2].
First, normalization with the sample count occurs after the IFFT.
Second, his FFT is of size 2N, resulting in an N^0.5 factor between his
results and the above. Finally, Greenhall discards half (minus one) of
the samples returned by the IFFT to realize linear convolution instead
of circular convolution, fundamentally implementing a single iteration
of overlap-save fast convolution [3]. Unless I missed something while
skimming the bruiteur source code, it seems to skip that very step and
therefore generates periodic output [4].
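For illustration, a rough numpy sketch of that discard step as a single
overlap-save iteration (my own construction, not Greenhall's exact
procedure; h is a stand-in FIR impulse response of length M = N + 1):

  import numpy as np

  rng = np.random.default_rng(3)
  N = 1024
  M = N + 1                                 # filter length
  h = rng.normal(size=M)                    # stand-in filter impulse response
  w = rng.normal(size=2*N)                  # white driving noise, 2N samples
  Y = np.fft.fft(w) * np.fft.fft(h, 2*N)    # filtering in the frequency domain
  y = np.fft.ifft(Y).real                   # circular convolution, length 2N
  y_lin = y[M-1:]                           # discard the first M-1 samples, which
                                            # are corrupted by circular wrap-around;
                                            # the remaining N samples are one block
                                            # of the linear convolution

Skipping the discard step keeps all 2N samples of the circular
convolution, which is what makes the output periodic.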
Best regards,
Carsten
[1] https://en.wikipedia.org/wiki/Variance#Basic_properties
[2] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
[3] https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method
[4] https://github.com/euldulle/SigmaTheta/blob/master/source/filtre.c#L130