Good evening François,
Great post indeed :-)
I fully agree.
On 13 Nov 08, at 18:47, François Chaplais wrote:
On 13 Nov 08, at 03:39, Randall Reetz wrote:
And another problem is that a random-and-unique solution actually
reduces randomness as it runs. Each time you eliminate a
number, the set of numbers left is reduced. This is even true of
an infinite number randomizer. Sometimes I wonder if this
fascination with random number generation isn't a good diagnosis of
a severe case of the geeks.
Maybe it is just a lack of mathematical background.
-----Original Message-----
From: "Randall Reetz" <[EMAIL PROTECTED]>
To: "How to use Revolution" <use-revolution@lists.runrev.com>
Sent: 11/12/2008 6:18 PM
Subject: RE: Random algorithm
There is a huge difference between random and unique. If you are
after unique then just use the counting numbers. If you need both
random and unique you will have to check each number generated
against a saved list of every previous number. There is nothing
wrong with a random number generator that spits out duplicate
numbers. Random is blind to history (and future). Random is not
nostalgic. A coin with two sides is just as good at random as a
pair of thousand-sided dice.
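A minimal sketch of that "check against a saved list" approach, in Python for illustration (the helper name and parameters are mine, purely illustrative):

    import random

    def draw_unique(count, low, high):
        """Return `count` distinct random integers in [low, high]."""
        if count > high - low + 1:
            raise ValueError("range too small for that many distinct values")
        seen = set()
        result = []
        while len(result) < count:
            n = random.randint(low, high)
            if n not in seen:       # compare against every previous number
                seen.add(n)
                result.append(n)
        return result

    print(draw_unique(5, 1, 10))    # e.g. [3, 9, 1, 7, 4]

The standard library packages the same idea as random.sample(range(low, high + 1), count).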
Actually, random is so far from nostalgic that a random sequence of
zeros and ones (with equal probabilities) can produce a
zillion consecutive ones without invalidating the probabilistic
model. This holds (mathematically) as long as the number of
events is finite (which is always the case in practice). The
central limit theorem only holds for an "actual" infinite number of
values.
Of course, some may object that a zillion consecutive ones
is improbable; however, this assumption itself can only be verified
by repeating the experiment an actual infinity of times, so we're
back to the same modelling problem.
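To put a figure on "improbable" (my own illustration, with the same caveat): a fair source emits n consecutive ones with probability (1/2)^n, which is minute for large n but strictly positive for every finite n, so no finite run ever contradicts the model:

    from fractions import Fraction

    def prob_all_ones(n):
        # Each draw is 1 with probability 1/2, independently.
        return Fraction(1, 2) ** n

    print(prob_all_ones(10))    # 1/1024
    print(prob_all_ones(100))   # about 7.9e-31 -- minute, but not zero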
In practice, people do not refer to probabilities but to
statistics. As far as I know there are two schools of statisticians
(at least when it comes to teaching):
1) the "clean" statisticians present statistics as an offspring of
probabilities; it is mathematically clean but has the same
weaknesses when it comes to confronting the model with the
experiment.
2) the "dirty" statisticians admit that if your random process
produces a zillion ones, then you have to pull the trigger on the
model, arguing that modelling the sequence by a constant is closer
to what happens and as economical as the flawed statistical model.
A zillion or two zillion limit: you chose.
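One way to read the "dirty" school's argument is as a likelihood comparison (my framing, not anything from the thread): after n observed ones, the fair-coin model gives the data log-likelihood -n * log(2), while the constant model "always 1" gives 0, so past whatever n you pick as your limit, the constant model wins:

    import math

    def loglik_fair_coin(n_ones):
        return -n_ones * math.log(2)   # each 1 has probability 1/2

    def loglik_constant_one(n_ones):
        return 0.0                     # "always 1" predicts the data surely

    print(loglik_fair_coin(50))        # about -34.66
    print(loglik_constant_one(50))     # 0.0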
Now, if you admit that computers are deterministic, then, knowing
the initial state of your machine (which may be LARGE), you are
able to predict its every output, provided you know the inputs.
Relying on unmodelled input (such as the time at which your computer
is turned on) only makes the thing unmodelled; it does not guarantee
randomness.
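That determinism is easy to demonstrate (a sketch with Python's seeded software RNG; any deterministic generator behaves alike): fix the initial state and the "random" outputs replay exactly.

    import random

    random.seed(12345)                  # fix the initial state
    first = [random.random() for _ in range(3)]

    random.seed(12345)                  # restore the same state
    second = [random.random() for _ in range(3)]

    assert first == second              # the "random" sequence replays exactly

Seeding from the clock merely hides the state; it does not make the output unpredictable to anyone who learns or guesses the seed.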
If you go further, it all comes down to a problem of semantics: what
people want from a random series is a user-triggered event that will
defeat prediction (that's what the Las Vegas folks want). However,
this definition is severely hampered by the limitations of
existing languages (human or machine). You should consider
the possibility that someone will produce a language/model that can
predict what happens.
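For what it's worth, the prediction-defeating randomness the Las Vegas folks want is what cryptographic generators fed from operating-system entropy aim at; a sketch with Python's secrets module (my example, not something the thread discussed):

    import secrets

    token = secrets.token_bytes(16)    # 16 bytes drawn from OS entropy
    roll = secrets.randbelow(6) + 1    # an unpredictable roll in 1..6
    print(token.hex(), roll)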
cheers,
François
P.S. On the lighter side, my wife's experience with M$ Word on the
PC suggests that a large amount of Word's behaviour is unpredictable.
Best regards from Paris,
Eric Chatonet.
----------------------------------------------------------------
Plugins and tutorials for Revolution: http://www.sosmartsoftware.com/
Email: [EMAIL PROTECTED]
----------------------------------------------------------------