There is a theorem that goes something like this:

If you have white noise expressed in one orthonormal basis, and you
transform it to another orthonormal basis, the result will still be white
noise.

The phrasing of that is obviously imprecise, but the point is this: since
the time and Fourier domains are both orthonormal bases of band-limited
functions, you can conclude that your FFT of white noise will also be
distributed like white noise. (The reasoning: an orthonormal transform maps
samples that are uncorrelated and of equal variance to coefficients that are
uncorrelated and of the same variance, so the "flat" property survives the
change of basis.) This lets us define white noise in multiple ways, as the
Wikipedia article does.
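
To make that concrete, here is a minimal numpy sketch (my own illustration,
assuming Gaussian noise and the orthonormal "ortho" FFT normalization for
convenience) that averages the power spectrum of white noise over many
trials; every bin comes out with the same average power, equal to the
time-domain variance:

import numpy as np

# Sketch: the orthonormal FFT of Gaussian white noise is still "white" --
# every bin carries the same average power (the time-domain variance),
# and the spread across bins shrinks as more trials are averaged.
rng = np.random.default_rng(0)
n, trials = 4096, 1000

power = np.zeros(n)
for _ in range(trials):
    x = rng.standard_normal(n)           # unit-variance white noise
    X = np.fft.fft(x, norm="ortho")      # orthonormal (energy-preserving) FFT
    power += np.abs(X) ** 2
power /= trials

print("mean bin power :", power.mean())  # close to 1.0
print("spread of bins :", power.std())   # shrinks roughly as 1/sqrt(trials)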

However, white noise generated in the time domain can use any probability
density function (PDF). For example, Gaussian white noise draws from the
normal distribution and uniform white noise draws from the uniform
distribution, but both produce white noise as long as certain conditions
are met (e.g., the samples are independent, zero-mean, and share the same
variance). I am not sure whether the PDFs are preserved across transforms
from one orthonormal basis to another, and the answer to your question
would depend on that (of course it would also depend on several other
parts of the phrasing of your question that aren't clear to me). My
intuition is that PDFs are preserved across such transforms.
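
One way to probe that intuition empirically (again a sketch of my own,
assuming numpy and using the uniform distribution as a convenient
non-Gaussian example) is to compare a shape statistic of the time-domain
samples with the same statistic for the real parts of the FFT bins. Uniform
samples have an excess kurtosis of -1.2 and a Gaussian has 0, so the two
printed numbers show how much of the distribution's shape carries over:

import numpy as np

# Compare the distribution shape before and after an orthonormal FFT.
# Excess kurtosis: uniform = -1.2, Gaussian = 0.
def excess_kurtosis(v):
    v = v - v.mean()
    return np.mean(v**4) / np.mean(v**2) ** 2 - 3.0

rng = np.random.default_rng(1)
n = 1 << 16
x = rng.uniform(-1.0, 1.0, size=n)       # uniform white noise
X = np.fft.fft(x, norm="ortho")          # orthonormal FFT

print("time-domain samples :", excess_kurtosis(x))
print("FFT bins (real part):", excess_kurtosis(X.real[1 : n // 2]))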

bjorn


On Fri, Oct 31, 2014 at 1:06 PM, Theo Verelst <theo...@theover.org> wrote:

>
>
> Hi music DSpers,
>
> Maybe running the risk of starting a Griffin-Gate,
> but one more consideration for the people interested in
> keeping the basics of digital processing a bit pure, and
> maybe to learn a thing or two for those working and/or
> hobby-ing around in the field.
>
> Just like there is some doubt cast on the Wikipedia page on
> the white noise subject ( http://en.wikipedia.org/wiki/White_noise ),
> I put quotes around the concept, because if we're talking about the
> frequency transform usually implied by the Fast Fourier Transform, we're talking about
> sampled signals, so we need to make some assumptions about
> how to satisfy the sampling theorem if we start from the
> normal Information Theory and Physics interpretation of
> continuous white noise signals. I suppose the assumption is
> that if you take random numbers, somehow limited to the maximum
> amplitude of the samples you use, each sample an uncorrelated
> random number, you have some form of digital "white noise" that
> can be related to the more general concepts.
>
> Now we take such a signal, or a sampled version of a continuous
> (more interesting!) white noise with some form of frequency
> limitation (which creates correlation in most cases) or other signal
> assumption, with perfect sample-and-hold, and let the FFT act on
> a contiguous block of the properly obtained white noise signal.
> Say we're only taking one length of the FFT transform, and are only
> interested in the volume of the various output "bins".
>
> Now, how probable is it that we get "all equal" frequency amounts as
> the output of this FFT transform (disregarding phase), taking
> for instance 256 or 4096 bins, and 16 bits of accuracy?! Or, how long would
> we have to average the bin values for them to come out equal (and what sort
> of entropy would that entail)?
>
> T.V.
>



-- 
---------------------
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio