Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Nick Frost

On Apr 25, 2009, at 6:30 PM, Douglas Roberts wrote:

What leads you to suspect that the CPU I/O noise is random?  The  
noise generated by such comes from a chipset that operates at a  
given frequency, which is powered by an AC source running at another  
frequency, filtered through a power supply with capacitors,  
resistors, etc. with their own set of time constant responses...


I had a custom box some years ago built around a Gigabyte GA-7IXE
(Slot-A Athlon).  I could hear bus noise when using headphones
connected to the onboard audio.  I listened for periodicity, but with
keystrokes and the varying mix of instructions from application
behavior, it seemed pretty random to me.  However, I realize my
observation is anecdotal, not rigorous.  Morse code can *sound*
regular when sent by hand, yet a computer may have difficulty
interpreting continuous-wave tones of slightly different duration that
sound "the same" to the human ear.


Nevertheless, the RFI emitted by computers and their components is a  
subject of near endless fascination.


-Nick


Nicholas S. Frost
7 Avenida Vista Grande #325
Santa Fe, NM  87508
ni...@nickorama.com




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Nicholas Thompson
I have always thought of the concept of noise as equivalent to the concept
of "weeds".  

One man's noise is another man's signal. 

n

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology, 
Clark University (nthomp...@clarku.edu)
http://home.earthlink.net/~nickthompson/naturaldesigns/




> [Original Message]
> From: Marcus G. Daniels 
> To: The Friday Morning Applied Complexity Coffee Group 
> Date: 4/25/2009 7:56:18 PM
> Subject: Re: [FRIAM] random vs pseudo-random
>
> Douglas Roberts wrote:
> > What leads you to suspect that the CPU I/O noise is random?  The noise 
> > generated by such comes from a chipset that operates at a given 
> > frequency, which is powered by an AC source running at another 
> > frequency, filtered through a power supply with capacitors, resistors, 
> > etc. with their own set of time constant responses..
> Otherwise the effort is to reduce all noise, so presumably it is less
> difficult to let a certain class of noise in.  I believe what the
> PCMOS folks are going for is voltage variation on (fixed-resistance)
> wires due to Brownian motion of electrons driven by heat -- white noise.
> There's always plenty of heat.






Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Marcus G. Daniels

Douglas Roberts wrote:
What leads you to suspect that the CPU I/O noise is random?  The noise 
generated by such comes from a chipset that operates at a given 
frequency, which is powered by an AC source running at another 
frequency, filtered through a power supply with capacitors, resistors, 
etc. with their own set of time constant responses..
Otherwise the effort is to reduce all noise, so presumably it is less 
difficult to let a certain class of noise in.  I believe what the 
PCMOS folks are going for is voltage variation on (fixed-resistance) 
wires due to Brownian motion of electrons driven by heat -- white noise. 
There's always plenty of heat.





Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Douglas Roberts
Owen,

What leads you to suspect that the CPU I/O noise is random?  The noise
generated by such comes from a chipset that operates at a given frequency,
which is powered by an AC source running at another frequency, filtered
through a power supply with capacitors, resistors, etc. with their own set
of time constant responses...

--Doug

On Sat, Apr 25, 2009 at 6:02 PM, Marcus G. Daniels wrote:

> Owen Densmore wrote:
>
>> Most of computing does not need to be exact .. a slight "error" generally
>> is not terrible and for imaging, audio, and so on simply is not observable
>> by a human.
>>
> And if what you need is a *lot* of random numbers [1], why do dozens of
> cycles of exact arithmetic and memory lookups to make pseudo random numbers,
> if you could instead just read a vector of physical noise values from a CPU
> I/O port in a single cycle?
>
> [1] http://en.wikipedia.org/wiki/Monte_Carlo_method
>



-- 
Doug Roberts
drobe...@rti.org
d...@parrot-farm.net
505-455-7333 - Office
505-670-8195 - Cell


Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Owen Densmore

On Apr 25, 2009, at 6:02 PM, Marcus G. Daniels wrote:


Owen Densmore wrote:
Most of computing does not need to be exact .. a slight "error"  
generally is not terrible and for imaging, audio, and so on simply  
is not observable by a human.
And if what you need is a *lot* of random numbers [1], why do dozens  
of cycles of exact arithmetic and memory lookups to make pseudo  
random numbers, if you could instead just read a vector of physical  
noise values from a CPU I/O port in a single cycle?


[1] http://en.wikipedia.org/wiki/Monte_Carlo_method


Hmm.. maybe it'll be a bit like graphics: initially graphics was all 
done on the CPU.  Then it became important enough to have its own 
coprocessor, a GPU.  Maybe an RPU is next?


-- Owen





Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Marcus G. Daniels

Owen Densmore wrote:
Most of computing does not need to be exact .. a slight "error" 
generally is not terrible and for imaging, audio, and so on simply is 
not observable by a human.
And if what you need is a *lot* of random numbers [1], why do dozens of 
cycles of exact arithmetic and memory lookups to make pseudo random 
numbers, if you could instead just read a vector of physical noise 
values from a CPU I/O port in a single cycle?


[1] http://en.wikipedia.org/wiki/Monte_Carlo_method
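No commodity CPU of the time exposed such a noise port, so as a rough stand-in this sketch (my own addition, not Marcus's proposal) reads a vector of values from the OS entropy pool via os.urandom, which mixes timing noise from hardware events and, on many systems, dedicated hardware noise sources:

```python
import os
import struct

def noise_vector(n):
    """Read n random 32-bit values from the OS entropy pool.

    A stand-in for the hypothetical CPU I/O noise port: os.urandom
    draws from a kernel pool fed by physical event noise rather than
    from a deterministic arithmetic recurrence.
    """
    raw = os.urandom(4 * n)                      # 4 bytes per value
    return list(struct.unpack("<%dI" % n, raw))  # little-endian uint32s

vec = noise_vector(8)
print(vec)
```

Whether this is cheaper than a good PRNG depends entirely on how the pool is filled; the point of the sketch is only the interface: one bulk read, no per-value arithmetic in user code.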





Re: [FRIAM] random vs pseudo-random

2009-04-25 Thread Owen Densmore
Marcus, what a nifty idea! (http://tinyurl.com/ys388b)  Most of  
computing does not need to be exact .. a slight "error" generally is  
not terrible and for imaging, audio, and so on simply is not  
observable by a human.


And there are lots of solutions for making inaccuracy less 
observable.  A few weeks ago, Steve was using NetLogo, a projector, 
and a camera to build a camera coordinate system that lets the 
computer "know" where a particular event occurs in the world.


To do this, a calibration step projects several horizontal and vertical 
stripes, and the camera collects the data.  Steve used Gray coding:

  http://en.wikipedia.org/wiki/Gray_code

to minimize the possible bit errors so that, for example, an error in 
the high-order bit did not cause a 2^n error, but only an error of 1 
(2^0).
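The encoding is easy to sketch (a minimal example of the technique, not Steve's actual NetLogo code): the standard binary-reflected Gray code of n is n ^ (n >> 1), and its defining property is that adjacent values differ in exactly one bit, so ambiguity at a stripe boundary decodes to a neighboring position rather than one 2^n away.

```python
def to_gray(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by folding the higher bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Codes of adjacent positions differ in exactly one bit, so uncertainty
# at a stripe boundary yields a neighboring position, not one 2^n away.
for i in range(15):
    diff = to_gray(i) ^ to_gray(i + 1)
    assert diff & (diff - 1) == 0  # exactly one bit set

assert all(from_gray(to_gray(i)) == i for i in range(16))
print([to_gray(i) for i in range(8)])
```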


I really like this sort of thinking.  Letting computers be a bit fuzzy  
in areas where slight errors can be managed, especially with adaptive  
algorithms to bound the error, seems very reasonable, especially where  
the system achieves benefits in other areas such as power consumption  
and better random number generation.


Sweet!

-- Owen


On Apr 23, 2009, at 10:02 PM, Marcus G. Daniels wrote:


Possibly of interest..

http://www.cs.rice.edu/~kvp1

http://www.businessweek.com/technology/content/feb2005/tc2005024_2426_tc024.htm
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging08&id=20246








Re: [FRIAM] random vs pseudo-random

2009-04-24 Thread Robert Howard
>The really cool thing about this is that the total complexity of the
>Multiverse is really, really small, namely that of a fairly simple equation
>called Schrödinger's equation, like you suggest."

That would make sense if for every bit created its complementary bits were
also created somewhere else. If the multiverse is a reversible computer,
then the total sum of everything would be nothing, which is the motto of
your book. It sure is a lot simpler to explain nothing overall than
something specific. :-)

Rob



-Original Message-
From: friam-boun...@redfish.com [mailto:friam-boun...@redfish.com] On Behalf
Of russell standish
Sent: Thursday, April 23, 2009 11:34 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] random vs pseudo-random

On Thu, Apr 23, 2009 at 11:37:35AM -0700, Robert Howard wrote:
> I suppose Dennett is implying that the linear congruential generator below
> would take at least the number of bits in variables a, b, m, and x[0]. If
> those are 1-byte integers, then the bit count is at least 32 bits. There’s
> overhead for the actual code too. How do you measure that? Suppose the
> answer is 100 bits for the code including state. Then if you use it to
> generate a sequence of one gazillion values, that sequence would only
> contain the equivalent of 100 bits of information because it can be
> generated by a 100 bit algorithm.
> 

Yes.

>  
> 
> I still suspect there might be circular logic here. Do we place no value on
> the energy needed to generate it?
> 

That's right. Information is a purely information theoretic
construct. Energy doesn't come into it.

>  
> 
> What if our entire universe can be described in a very simple equation that
> is just generated recursively or fractally by many iterations? If that
> equation was less than a megabyte, then would we argue that the entire works
> of Shakespeare must have less information?
> 

No, because the universe did not produce Shakespeare by a
deterministic process. Every time a quantum measurement is taken of
something that might have one of two values, a bit of information is
generated. Information can also be lost, a process known as quantum
erasure. This explains why the universe we see today is less complex
than it would naively seem if every particle generated a new bit of
information each Planck time.

This is perhaps most easily seen in the Many Worlds Interpretation,
although it is not necessary for the MWI to be true for information to
increase in our universe. In the MWI, the Universe splits each time a
measurement is taken, so one ends up with some enormous number of
parallel universes. In most of those universes, Shakespeare never
existed, and so the works of Shakespeare never materialised. Only in
our special neck of the woods are there Shakespearian tragedies, and
many more things that are even more complex (e.g. human brains).

The really cool thing about this is that the total complexity of the
Multiverse is really, really small, namely that of a fairly simple
equation called Schrödinger's equation, like you suggest.

If you want to know more, please take a look at my book "Theory of
Nothing".

Cheers

-- 


Prof Russell Standish          Phone 0425 253119 (mobile)
Mathematics                    hpco...@hpcoders.com.au
UNSW SYDNEY 2052               http://www.hpcoders.com.au
Australia








Re: [FRIAM] random vs pseudo-random

2009-04-23 Thread russell standish
On Thu, Apr 23, 2009 at 11:37:35AM -0700, Robert Howard wrote:
> I suppose Dennett is implying that the linear congruential generator below
> would take at least the number of bits in variables a, b, m, and x[0]. If
> those are 1-byte integers, then the bit count is at least 32 bits. There’s
> overhead for the actual code too. How do you measure that? Suppose the
> answer is 100 bits for the code including state. Then if you use it to
> generate a sequence of one gazillion values, that sequence would only
> contain the equivalent of 100 bits of information because it can be
> generated by a 100 bit algorithm.
> 

Yes.

>  
> 
> I still suspect there might be circular logic here. Do we place no value on
> the energy needed to generate it? 
> 

That's right. Information is a purely information theoretic
construct. Energy doesn't come into it.

>  
> 
> What if our entire universe can be described in a very simple equation that
> is just generated recursively or fractally by many iterations? If that
> equation was less than a megabyte, then would we argue that the entire works
> of Shakespeare must have less information?
> 

No, because the universe did not produce Shakespeare by a
deterministic process. Every time a quantum measurement is taken of
something that might have one of two values, a bit of information is
generated. Information can also be lost, a process known as quantum
erasure. This explains why the universe we see today is less complex
than it would naively seem if every particle generated a new bit of
information each Planck time.

This is perhaps most easily seen in the Many Worlds Interpretation,
although it is not necessary for the MWI to be true for information to
increase in our universe. In the MWI, the Universe splits each time a
measurement is taken, so one ends up with some enormous number of
parallel universes. In most of those universes, Shakespeare never
existed, and so the works of Shakespeare never materialised. Only in
our special neck of the woods are there Shakespearian tragedies, and
many more things that are even more complex (e.g. human brains).

The really cool thing about this is that the total complexity of the
Multiverse is really, really small, namely that of a fairly simple
equation called Schrödinger's equation, like you suggest.

If you want to know more, please take a look at my book "Theory of
Nothing".

Cheers

-- 


Prof Russell Standish          Phone 0425 253119 (mobile)
Mathematics                    hpco...@hpcoders.com.au
UNSW SYDNEY 2052               http://www.hpcoders.com.au
Australia





Re: [FRIAM] random vs pseudo-random

2009-04-23 Thread Marcus G. Daniels

Possibly of interest..

http://www.cs.rice.edu/~kvp1

http://www.businessweek.com/technology/content/feb2005/tc2005024_2426_tc024.htm
http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging08&id=20246




Re: [FRIAM] random vs pseudo-random

2009-04-23 Thread Robert Howard
I suppose Dennett is implying that the linear congruential generator below
would take at least the number of bits in variables a, b, m, and x[0]. If
those are 1-byte integers, then the bit count is at least 32 bits. There’s
overhead for the actual code too. How do you measure that? Suppose the
answer is 100 bits for the code including state. Then if you use it to
generate a sequence of one gazillion values, that sequence would only
contain the equivalent of 100 bits of information because it can be
generated by a 100 bit algorithm.
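That point can be made concrete: the complete description of a generator -- code, constants, and seed -- fits in a short string, yet it specifies a sequence of any length you like. A sketch (my own illustration; the constants are the well-known Numerical Recipes LCG parameters, and the "gazillion" is scaled down to a million):

```python
# The full description of the generator is this short string; the
# sequence it specifies can be made as long as you like without adding
# a single bit to the description.
program = (
    "a, b, m, x = 1664525, 1013904223, 2**32, 1\n"
    "seq = []\n"
    "for _ in range(10**6):\n"
    "    x = (a * x + b) % m\n"
    "    seq.append(x)\n"
)

namespace = {}
exec(program, namespace)

print(len(program), "characters of description")
print(len(namespace["seq"]), "values generated")
```

In Chaitin's terms, the million values carry at most as much information as the program text that generates them.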

 

I still suspect there might be circular logic here. Do we place no value on
the energy needed to generate it? 

 

What if our entire universe can be described in a very simple equation that
is just generated recursively or fractally by many iterations? If that
equation was less than a megabyte, then would we argue that the entire works
of Shakespeare must have less information?

 


From: friam-boun...@redfish.com [mailto:friam-boun...@redfish.com] On Behalf
Of Roger Critchlow
Sent: Thursday, April 23, 2009 12:37 AM
To: nickthomp...@earthlink.net; The Friday Morning Applied Complexity Coffee
Group
Subject: Re: [FRIAM] random vs pseudo-random

 

 

On Thu, Apr 23, 2009 at 1:05 AM, Nicholas Thompson
 wrote:

 

Can anybody help me understand this?  (Please try to say something more
helpful than the well-deserved, "Well, why do you THINK they call it
pseudo-random, you dummy?")  What DOES a pseudo-randomizing program look like?


 


This one is called a linear congruential generator:

x[i+1] = (a * x[i] + b) modulo m


x[i] is the current random number and x[i+1] is the next.  For
appropriate choices of a, b, and m, the sequence of numbers produced will
go through all the numbers from 0 to m-1 in some order, over and over again.

The choice of a = 1, b = 1 enumerates in increasing order, and the choice of
a = 1, b = -1 enumerates in decreasing order.  Neither is a very good
choice, but they aren't the only bad choices.  For instance, a = 0 is
probably the worst choice of linear congruential multiplier.

You might try out different values for a and b in a spreadsheet with m = 17.
Or just do it on paper.

-- rec --



Re: [FRIAM] random vs pseudo-random

2009-04-23 Thread Roger Frye
There are two conflicting definitions of randomness being used here.   
The purpose of a pseudo-random number generator (PRNG) on a computer  
is to provide a sequence of numbers that is statistically  
indistinguishable from random noise.  Good PRNGs cover their range 
completely and do not show any obvious patterns in the sequence of  
digits.  This statistical randomness is different from Chaitin's  
informational randomness.


The digits of pi are statistically random, even though they are  
completely determined. They could be used as a PRNG, but you would  
prefer a faster algorithm than one of the ones that can be used to  
generate the digits of pi.  These algorithms are much shorter than the  
theoretically infinite number of digits of pi that they could generate.


The linear congruential generator that Critchlow describes is quite  
short, but generates long streams of numbers before it repeats.  So it  
is useful for making up seemingly random sentences or pictures or  
other objects.  This generator shows a streaking pattern if you use it  
to choose a pair of numbers as coordinates for where to plot a pixel,  
so there are some applications where it would not be considered random  
enough, but there are many others where it would be sufficient.


The successive digits of the square root of a prime number can be used  
as a PRNG, but when pairs of numbers are used as coordinates for a  
plot, they fill the space in a very predictable pattern.  Algorithms  
like this are called quasi-random number generators.  Another example  
would be the points on Peano's space-filling curve.  For most  
applications, this regular, non-overlapping coverage would be  
undesirable, but for many Monte Carlo applications, it provides order 
N convergence, which is a huge improvement over the order N^2 
convergence provided by a purely random sequence or by most PRNGs: the 
quasi-random error shrinks like 1/N, while the random error shrinks 
like 1/sqrt(N), so N^2 random samples are needed to match the accuracy 
of N quasi-random ones.
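A minimal illustration of the quasi-random idea (my own sketch, using the base-2 van der Corput low-discrepancy sequence rather than square-root digits): the points fill [0, 1) evenly, so a simple integral estimate converges much faster than the 1/sqrt(N) Monte Carlo rate.

```python
def van_der_corput(i, base=2):
    """i-th point of the van der Corput sequence: digit-reverse i in `base`."""
    x, f = 0.0, 1.0
    while i:
        f /= base
        x += f * (i % base)
        i //= base
    return x

# Estimate the integral of x^2 over [0, 1] (exact value 1/3).
# The first 1024 base-2 points are exactly the multiples j/1024 in
# bit-reversed order, so they tile the interval with no clumping.
n = 1024
points = [van_der_corput(i) for i in range(n)]
estimate = sum(x * x for x in points) / n
print(abs(estimate - 1 / 3))
```

With n = 1024 quasi-random points the error is below 10^-3; a pseudo-random sample of the same size typically lands an order of magnitude or more away.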


-Roger Frye

On Apr 23, 2009, at 1:05 AM, Nicholas Thompson wrote:



Hi, everybody,

Now that the recent burst of metaphysics is completed, I was curious  
about your take on the following quote, which is from a footnote in  
Dennett's Real Patterns:


"More precisely: 'A series of numbers is random if the smallest 
algorithm capable of specifying it to a computer has about the same 
number of bits of information as the series itself' (Chaitin, p. 
48).  This is what explains the fact that the random number generator 
built into most computers is not really properly named, since it is 
some function describable in a few bits (a little subroutine that is 
called for some output whenever a program requires a 'random' number 
or series)."


So far it all makes sense to me.  But now Dennett adds the following  
comment:


If I send you the description of the pseudo-random number generator 
on my computer, you can use it to generate exactly the same infinite 
series of random-seeming digits.


Now, it seems to me that IF the behavior of a pseudo-random number 
generator IS describable in a very few bits ... if it is a SMALL 
program ..., then the pattern it generates is also describable with 
enormous savings and it is not, by Dennett's definition, anything 
like random.  It might be mysterious, but nowhere near RANDOM.

Can anybody help me understand this?  (Please try to say something 
more helpful than the well-deserved, "Well, why do you THINK they 
call it pseudo-random, you dummy?")  What DOES a pseudo-randomizing 
program look like?


Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University (nthomp...@clarku.edu)
http://home.earthlink.net/~nickthompson/naturaldesigns/



Re: [FRIAM] random vs pseudo-random

2009-04-23 Thread Roger Critchlow
On Thu, Apr 23, 2009 at 1:05 AM, Nicholas Thompson <
nickthomp...@earthlink.net> wrote:

>
> Can anybody help me understand this?  (Please try to say something more
> helpful than the well-deserved, "Well, why do you THINK they call it
> pseudo-random, you dummy?")  What DOES a pseudo-randomizing program look like?
>
>
>

This one is called a linear congruential generator:

x[i+1] = (a * x[i] + b) modulo m

x[i] is the current random number and x[i+1] is the next.  For
appropriate choices of a, b, and m, the sequence of numbers produced will
go through all the numbers from 0 to m-1 in some order, over and over again.

The choice of a = 1, b = 1 enumerates in increasing order, and the choice of
a = 1, b = -1 enumerates in decreasing order.  Neither is a very good
choice, but they aren't the only bad choices.  For instance, a = 0 is
probably the worst choice of linear congruential multiplier.

You might try out different values for a and b in a spreadsheet with m =
17.  Or just do it on paper.
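The recipe above can be tried directly in Python. One caveat, flagged here as my own addition: by the Hull–Dobell theorem, a full-period LCG needs every prime factor of m to divide a-1 (and a ≡ 1 mod 4 when 4 divides m), so with prime m = 17 only a = 1 achieves the full period. The sketch below uses m = 16 so that a nontrivial multiplier (a = 5, b = 3) visits all m residues.

```python
from itertools import islice

def lcg(a, b, m, seed):
    """Yield the linear congruential sequence x[i+1] = (a*x[i] + b) mod m."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

# a=1, b=1 just counts upward modulo m -- "random" only in name.
counting = list(islice(lcg(1, 1, 16, 0), 16))

# a=5, b=3, m=16 satisfies the Hull-Dobell full-period conditions:
# the sequence visits every residue 0..15 before repeating.
full = list(islice(lcg(5, 3, 16, 0), 16))

print(counting)
print(full)
```

Sorting `full` gives 0..15 exactly once, confirming the full period; printing it unsorted shows the scrambled visiting order that makes the sequence look random.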

-- rec --


[FRIAM] random vs pseudo-random

2009-04-23 Thread Nicholas Thompson
Hi, everybody, 

Now that the recent burst of metaphysics is completed, I was curious about your 
take on the following quote, which is from a footnote in Dennett's Real 
Patterns: 

"More precisely: 'A series of numbers is random if the smallest algorithm 
capable of specifying it to a computer has about the same number of bits of 
information as the series itself' (Chaitin, p. 48).  This is what explains the 
fact that the random number generator built into most computers is not really 
properly named, since it is some function describable in a few bits (a 
little subroutine that is called for some output whenever a program requires a 
'random' number or series).  

So far it all makes sense to me.  But now Dennett adds the following comment: 

If I send you the description of the pseudo-random number generator on my 
computer, you can use it to generate exactly the same infinite series of 
random-seeming digits.
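Dennett's "same description, same series" point is easy to demonstrate. The sketch below (my own, using Python's built-in Mersenne Twister as a stand-in for whatever generator his computer ran) shows that two copies of the same algorithm-plus-seed reproduce the stream exactly -- which is precisely why it cannot be random in Chaitin's sense:

```python
import random

# Two independent instances built from the same short description
# (algorithm + seed) produce identical "random-seeming" streams.
gen_a = random.Random(42)
gen_b = random.Random(42)

stream_a = [gen_a.random() for _ in range(1000)]
stream_b = [gen_b.random() for _ in range(1000)]

print(stream_a == stream_b)
```

The thousand floats pass ordinary statistical tests of randomness, yet the whole stream is specified by a few bytes of code and seed.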

Now, it seems to me that IF the behavior of a pseudo-random number generator IS 
describable in a very few bits ... if it is a SMALL program ..., then the 
pattern it generates is also describable with enormous savings and it is not, 
by Dennett's definition, anything like random.  It might be mysterious, but 
nowhere near RANDOM.

Can anybody help me understand this?  (Please try to say something more helpful 
than the well-deserved, "Well, why do you THINK they call it pseudo-random, you 
dummy?")  What DOES a pseudo-randomizing program look like? 

Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology, 
Clark University (nthomp...@clarku.edu)
http://home.earthlink.net/~nickthompson/naturaldesigns/