Re: paradoxes of randomness

2003-08-23 Thread John Kelsey
At 08:45 AM 8/19/03 -0700, Tim May wrote:
..
(I strongly urge you to actually do this experiment. Really. These are the 
experiments which teach probability theory. No amount of book learning 
substitutes.)
Yep.  I've often thought that one benefit to playing RPGs when I was 
younger was directly observing lots and lots of rolls of various kinds of 
dice.  That gives you an intuition for how unlikely things can happen 
sometimes, for the difference between very unlikely and impossible, etc.

So the coin has been tossed twice in this particular experiment. There is 
now the possibility for equal numbers of heads and tails...but for the 
second coin toss to give the opposite result of the first toss, every 
time, to balance the outcomes, the coin or the wind currents would have 
to conspire to make the outcome the opposite of what the first toss 
gave. (This is so absurd as to be not worth discussing, except that I know 
of no other way to convince you that your theory that equal numbers of 
heads and tails must be seen cannot be true in any particular experiment. 
The more mathematical way of saying this is that the outcomes are 
independent. The result of one coin toss does not affect the next one, 
which may take place far away, in another room, and so on.)
In fact, I believe this is the trick that makes it very easy to distinguish 
between sequences of coin flips that really happen, and ones that are made 
up by a human.  The human tends to try to make things even out over time.

--John Kelsey, [EMAIL PROTECTED]
PGP: FA48 3237 9AD5 30AC EEDD  BBC8 2A80 6948 4CAA F259


Re: paradoxes of randomness

2003-08-19 Thread Sarad AV
hi,

--- Dave Howe [EMAIL PROTECTED] wrote:
  (Not saying you do, just quibbling with any claim
  that readily calculated probabilities can be
  surprising.)
 I meant surprising for Sarad - Much of this
 discussion pre-assumes that he *does* misunderstand
 probability but is willing to substitute our
 collective insanity for his current ignorance :)

No more of that - I will have a good read. I am
basically confused by this:

 In a perfectly random experiment, how many tails and
 how many heads do we get?
we don't know - or it wouldn't be random :)
for a sufficiently large sample you *should* see
roughly equal numbers of heads and tails in the
average case.

We say that "we don't know, or it won't be random". Then
we say that we must see roughly equal numbers of heads
and tails for large trials. That's what I fail to
understand.



The idea of a perfect random experiment was taken just
to understand the concept.

Thanks.

Regards Sarath.




Re: paradoxes of randomness

2003-08-19 Thread Dave Howe
Sarad AV wrote:
 We say that "we don't know, or it won't be random". Then
 we say that we must see roughly equal numbers of heads
 and tails for large trials. That's what I fail to
 understand.
it's the difference between any one test (which will be completely
unpredictable) and probabilities (where you know that, unless there is a
weighting in force, the odds of any one of n options coming up will be 1
in n, so you would expect to see roughly equal numbers of each).

as an analogy - imagine a horse race where all five horses are roughly
equal in fitness and rider skill. a bookie would give equal odds on each
(not 1 in 5, as he has to make a profit, but no horse would be worth
more than another). You would *expect* that, if the race was run enough
times, each horse would win about a fifth of them - but that won't help
you predict the result of any one race in particular, nor would it be
impossible for one horse to win all the races, purely from luck.
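A quick dry run of that long-run-versus-single-race distinction, as a
Python sketch (modeling the idealized five-horse race as a uniform
random pick is my assumption, not bookie math):

    import random

    wins = [0] * 5
    for _ in range(10_000):              # run the race many times
        wins[random.randrange(5)] += 1   # each horse equally likely to win

    print(wins)                 # every count lands near 2,000 in the long run
    print(random.randrange(5))  # yet the next single race is still anyone's guess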



Re: paradoxes of randomness

2003-08-19 Thread Tim May
On Tuesday, August 19, 2003, at 03:13  AM, Sarad AV wrote:
In a perfectly random experiment, how many tails and
how many heads do we get?
we don't know - or it wouldn't be random :)
for a sufficiently large sample you *should* see
roughly equal numbers of heads and tails in the
average case.
We say that "we don't know, or it won't be random". Then
we say that we must see roughly equal numbers of heads
and tails for large trials. That's what I fail to
understand.
Start small. Do some experiments _yourself_.

Take a coin out of your pocket. I assume your local coin has something 
that may be called a "head" and something that may be called a "tail." 
In any case, decide what you want to call each side.

Flip the coin very high in the air and let it land on the ground 
without any interference by you.

This is a fair toss. (That subtle air currents may affect the landing 
is completely unimportant, as you will see even if you have doubts 
about it now.)

Now let's try a little piece of induction on this one, single toss. 
Remember when you had said earlier that a perfectly random coin toss 
would have exactly equal numbers of heads and tails? Well, with a 
single toss there can ONLY be either a head or a tail.

The outcome will be ONE of these, not some mixture of half and half.

This proves, by the way, that any claim that a random coin toss must 
result in equal numbers of heads and tails in any particular experiment 
is false.

Now toss the coin a second time and record the results.

(I strongly urge you to actually do this experiment. Really. These are 
the experiments which teach probability theory. No amount of book 
learning substitutes.)

So the coin has been tossed twice in this particular experiment. There 
is now the possibility for equal numbers of heads and tails...but for 
the second coin toss to give the opposite result of the first toss, 
every time, to balance the outcomes, the coin or the wind currents 
would have to conspire to make the outcome the opposite of what the 
first toss gave. (This is so absurd as to be not worth discussing, 
except that I know of no other way to convince you that your theory 
that equal numbers of heads and tails must be seen cannot be true in 
any particular experiment. The more mathematical way of saying this is 
that the outcomes are independent. The result of one coin toss does 
not affect the next one, which may take place far away, in another 
room, and so on.)

In any case, by the time a third coin toss happens there again cannot 
be equal numbers of heads and tails, for obvious reasons. And so on.
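Once the physical tosses are done (a simulation is a complement here,
not a substitute for the hands-on tossing urged below), a short Python
sketch shows how rarely the counts balance exactly, even with an even
number of tosses:

    import random

    TRIALS, TOSSES = 10_000, 10
    balanced = sum(
        1 for _ in range(TRIALS)
        if sum(random.randrange(2) for _ in range(TOSSES)) == TOSSES // 2
    )
    # Exactly 5 heads in 10 tosses has probability C(10,5)/2^10, about 24.6%,
    # even though the 5/5 split is the single most likely outcome.
    print(balanced / TRIALS)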

Do this experiment. Do this experiment for at least 10 coin tosses. 
Write down the results. This will take you only a few minutes.

Then repeat the experiment and write down the results.

Repeat it as many times as you need to get a good feeling for what 
is going on. And then think of variations with dice, with cards, with 
other sources of randomness.

And don't "dry lab" the results by imagining what they must be in your 
head. Actually get your hands dirty by flipping the coins, or dealing 
the cards, or whatever. Don't cheat by telling yourself you already 
know what the results must be.

Only worry about the deep philosophical implications of randomness 
after you have grasped, or grokked, the essence.

(Stuff about Kripke's possible worlds semantics, Bayesian outlooks, 
Kolmogoroff-Chaitin measures, etc., is very exciting, but it all rests 
on these foundations.)

--Tim May

We should not march into Baghdad. To occupy Iraq would
instantly shatter our coalition, turning the whole Arab
world against us and make a broken tyrant into a latter-
day Arab hero. Assigning young soldiers to a fruitless
hunt for a securely entrenched dictator and condemning
them to fight in what would be an unwinnable urban guerrilla
war, it could only plunge that part of the world into ever
greater instability.
--George H. W. Bush, A World Transformed, 1998


Re: paradoxes of randomness

2003-08-19 Thread Major Variola (ret)
At 08:45 AM 8/19/03 -0700, Tim May wrote:
Only worry about the deep philosophical implications of randomness
after you have grasped, or grokked, the essence.

Then do this: get a block cipher or crypto-hash algorithm,
and pick a key.  Now encrypt 0, then 1, then 2, etc.  Examine the
17th bit of each output as you encrypt the integers.

Is this sequence random? Compressible?  How could you tell whether this
sequence is random or not, if you didn't know the key?

Hint: those are trick questions intended to lure you into
crypto.  And if you ask why 17?
you get whacked by a virtual bamboo cane.
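A minimal Python sketch of the exercise, with HMAC-SHA256 standing in
for the keyed cipher (the key, the 8-byte integer encoding, and the
MSB-first bit numbering are illustrative assumptions, not part of the
original post):

    import hashlib
    import hmac

    KEY = b"pick any key you like"   # assumed key

    def bit17(i: int) -> int:
        # "Encrypt" the integer i and pull out bit 17 of the digest.
        d = hmac.new(KEY, i.to_bytes(8, "big"), hashlib.sha256).digest()
        return (d[17 // 8] >> (7 - 17 % 8)) & 1

    stream = "".join(str(bit17(i)) for i in range(64))
    print(stream)   # looks random -- but could you tell, without KEY?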



Re: paradoxes of randomness

2003-08-19 Thread Morlock Elloi
 Is this sequence random? Compressible?  How could you tell whether this
 sequence is random or not, if you didn't know the key?

This is a way to describe so-called randomness.

One simply has no adequate access to the Key and/or the Algorithm.

Both coin flipping and quantum noise fall into this category.

Actually, it's a pretty good method of authenticating Allah.









Re: paradoxes of randomness

2003-08-18 Thread Sarad AV
hi,

--- martin f krafft [EMAIL PROTECTED] wrote:
  Okay- I need 5 bits to represent 32 coins.I count
 as
  coin 0,coin 1,... coin 31.
 
 No, you can't count coin 0. Or how will you
 represent no coins?

I thought I could use the null set to point to the
first coin, simply as a one-to-one mapping, but then I
can't represent no coins. Thanks for the
clarification.

Regards Sarath.







Re: paradoxes of randomness

2003-08-18 Thread Sarad AV
hi,

Hope you can help on this.

--- Tim May [EMAIL PROTECTED] wrote:


 I hope you are not saying that you think there will
 always be 16 heads 
 and 16 tails!

In a perfectly random experiment, how many tails and
how many heads do we get?


thanks.

Regards Sarath.



Re: *** GMX Spamverdacht *** Re: paradoxes of randomness

2003-08-18 Thread Sarad AV
hi,

Thank you - one more question.
Will the information obtained from the 2^32 tests have
a zero compression rate?
If one of the occurrences should yield all heads and one
occurrence yields all tails, there appears to be scope
for compression.

If the output is random, then it will have no
mathematical structure, so I shouldn't be able to
compress it at all.


Regards Sarath.





--- Dave Howe [EMAIL PROTECTED] wrote:

 for a sufficiently large sample you *should* see
 roughly equal numbers of heads and tails in the
 average case - but :
 for 32 coins in 2^32 tests you should see:
 one occurrence of all heads (and one of all tails)
 32 occurrences of one tail, 31 heads (and 32 of one
 head, 31 tails)
 496 occurrences of two
 and so forth up the chain
 none of these are guaranteed - it *is* random after
 all - but given a sufficiently large number of tests,
 statistically you should see the above.
 




RE: *** GMX Spamverdacht *** Re: paradoxes of randomness

2003-08-18 Thread Vincent Penquerc'h
 If the output is random, then it will have no
 mathematical structure, so I shouldn't be able to
 compress it at all.

You could very well end up with all tails. That's a sequence
that has the same probability of happening as any other sequence.
A compressor will look for redundancy in the input you give it,
not in the algorithm you used to generate that input (conceptually,
a compressor could deduce the (deterministic) algorithm from the
output, but if you bring it true randomness, chances are it will
not). Thus, a compressor will compress a sequence made of all
tails very well, but do badly on another which exhibits no
detectable redundancy.
Once you have the sequence, you have lost a lot of info about whatever
algorithm was used to generate it. A sequence of all tails could
have been generated by a simple algorithm which generates all
tails. That's an emergent description of this one particular
sequence, but one that would not apply to *all* sequences your
algorithm can ever produce. That's lost information, and that's
why it can be compressed.
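A two-line check of this with an off-the-shelf compressor (a sketch;
zlib stands in for whichever compressor you prefer):

    import os
    import zlib

    all_tails = b"T" * 4096    # maximally redundant "all tails" sequence
    noise = os.urandom(4096)   # high-entropy input

    print(len(zlib.compress(all_tails)))  # shrinks to a few dozen bytes
    print(len(zlib.compress(noise)))      # stays near 4096 bytes, or slightly above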

-- 
Vincent Penquerc'h 


Re: paradoxes of randomness

2003-08-18 Thread Tim May
On Monday, August 18, 2003, at 03:24  AM, Sarad AV wrote:

hi,

Hope you can help on this.

--- Tim May [EMAIL PROTECTED] wrote:


I hope you are not saying that you think there will
always be 16 heads
and 16 tails!
In a perfectly random experiment, how many tails and
how many heads do we get?
First, those who think there are perfectly random experiments or 
numbers are living in a state of sin, to paraphrase John von Neumann.

Second, despite the above, good approximations to random behavior are 
easy to obtain: radioactive decays, lava lamps, Johnson noise in 
diodes, etc. The important aspects of probability theory emerge even 
with imperfect sources of apparent randomness.

Third, the expected distribution of heads and tails in a series of 32 
coin tosses is approximated closely by the binomial distribution. The 
key concepts are combinations and permutations.

The expected probability of all heads is given by (0.5)^32. There are 
more chances of seeing 1 head and 31 tails, as the head can appear in 
any of the 32 positions (the combination of 32 things taken 1 at a 
time). And so on, up to the maximum of 16 heads, 16 tails...although 
this particular outcome is not very likely.

Fourth, my point was that there is relatively low probability that 32 
tosses will result in exactly 16 heads and 16 tails. Given enough 
experiments, the distribution of outcomes will approximately follow the 
familiar bell-shaped curve, centered at 16H/16T, but with some chance 
of each of 0H/32T, 1H/31T, ..., 31H/0T, 32H/0T.
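A short worked version of these numbers (a sketch using Python's
math.comb; the printed values follow directly from the binomial formula):

    from math import comb

    N = 32
    for k in (0, 1, 2, 16):
        p = comb(N, k) * 0.5 ** N
        print(f"P({k} heads, {N - k} tails) = {p:.10f}")

    # P(0 heads)  = 2^-32, about 2.3e-10
    # P(1 head)   = 32 * 2^-32
    # P(16 heads) = C(32,16) * 2^-32, about 0.14 -- the most likely single
    # split, yet it still fails to occur about 86% of the time.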

Fifth, not to sound harsh or snippy or sarcastic, but this is really 
basic stuff. There is a big gap in your education. Even if not taught 
this in 9th grade (or whatever the equivalent is in India), this is 
stuff that should be apparent through thinking about the way chance 
events occur. I urge you to suspend your advanced math questions 
until you have gone back over the more basic things.

(It's crazy to try to understand entropy and the algorithmic 
information theory work of Greg Chaitin and others without knowing the 
18th-century results on probability of people like Pascal, Poisson, 
etc.)

There are many Web pages with class notes on probability, encyclopedia 
entries, etc. And I suggest experiments with coin tosses. And cards. 
And dice.

--Tim May

A complex system that works is invariably found to have evolved from a
simple system that worked ... A complex system designed from scratch
never works and cannot be patched up to make it work. You have to
start over, beginning with a working simple system. -- Grady Booch



Re: paradoxes of randomness

2003-08-18 Thread Riad S. Wahby
Tim May [EMAIL PROTECTED] wrote:
 But, as I said in my last post, before you try to understand 
 algorithmic information theory, you need to learn the basics of 
 probability. Without understanding things like combinations and 
 permutations, binomial and Poisson distributions, the law of large 
 numbers, standard deviations, etc., your philosophizing will be 
 ungrounded.

A good place to learn the basics:

http://web.jfet.org/6.041-text/

-- 
Riad Wahby
[EMAIL PROTECTED]
MIT VI-2 M.Eng



Re: *** GMX Spamverdacht *** Re: *** GMX Spamverdacht *** Re: paradoxes of randomness

2003-08-18 Thread Dave Howe
Sarad AV wrote:
 Will the information obtained from the 2^32 tests have
 a zero compression rate?
 If one of the occurrences should yield all heads and one
 occurrence yields all tails, there appears to be scope
 for compression.
In that one case, yes.
The compression will vary based on the deviation from an ideally
compressible sample - for *any* bit pattern 0x00000000 to 0xFFFFFFFF you
would expect to see it once in 2^32 trials (by definition), therefore you
will get a mix of compressible and incompressible patterns, with
incompressible patterns being more likely (simply because there are more
of them in the full range of available patterns 0x00000000 to 0xFFFFFFFF).

 If the output is random, then it will have no
 mathematical structure, so I shouldn't be able to
 compress it at all.
randomness is a funny thing. a truly random file can be *anything* that
is the right length - including an excerpt from William Shakespeare
(provided you have enough monkeys)
it is most likely to be random garbage, for a large sample - but for a
very small sample the odds of it being an ordered sample are surprisingly
good.
the obvious example here is a double coin test - two bits. an ordered
sample would be both heads or both tails. a disordered sample would be one
head and one tail. in practice, you would expect to see half the results
from a larger trial (say 32 double-throws) with an ordered sample, and half
disordered.
as you reach three coins, the odds of a completely ordered result decrease
(from 2 in 2^2, to 2 in 2^3). for four coins, you still have the same two
compressible results. consider:
HHHH HHHT HHTH HHTT
HTHH HTHT HTTH HTTT
THHH THHT THTH THTT
TTHH TTHT TTTH TTTT

in a large trial, you would expect to see each of these once every 2^4
tests, on the average. obviously HHHH and TTTT are very compressible.
assuming a run-length encoding, I don't think any of the others are
compressible at all.
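A quick check of that claim in Python (a sketch; the naive count+symbol
run-length code below is one assumed encoding among many):

    from itertools import product

    def rle(s: str) -> str:
        # naive run-length encoding: "HHHH" -> "4H", "HHTT" -> "2H2T"
        out, run = [], 1
        for prev, cur in zip(s, s[1:] + "\0"):
            if cur == prev:
                run += 1
            else:
                out.append(f"{run}{prev}")
                run = 1
        return "".join(out)

    for pattern in map("".join, product("HT", repeat=4)):
        enc = rle(pattern)
        mark = "(shorter)" if len(enc) < len(pattern) else ""
        print(pattern, "->", enc, mark)   # only HHHH and TTTT come out shorter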



Re: paradoxes of randomness

2003-08-18 Thread Tim May
On Monday, August 18, 2003, at 06:37  AM, Sarad AV wrote:

hi,

Thank you - one more question.
Will the information obtained from the 2^32 tests have
a zero compression rate?
If one of the occurrences should yield all heads and one
occurrence yields all tails, there appears to be scope
for compression.
This outcome is compressible because it has a short description.



If the output is random, then it will have no
mathematical structure, so I shouldn't be able to
compress it at all.
Our best current description of randomness is that something is random 
when it has no _shorter_ description than itself. (A point of view 
variously credited to Kolmogoroff, Chaitin, and Solomonoff.)

But, as I said in my last post, before you try to understand 
algorithmic information theory, you need to learn the basics of 
probability. Without understanding things like combinations and 
permutations, binomial and Poisson distributions, the law of large 
numbers, standard deviations, etc., your philosophizing will be 
ungrounded.

You can read articles some of us wrote here about 10-11 years ago by 
using Google on the obvious terms.

--Tim May

We should not march into Baghdad. To occupy Iraq would
instantly shatter our coalition, turning the whole Arab
world against us and make a broken tyrant into a latter-
day Arab hero. Assigning young soldiers to a fruitless
hunt for a securely entrenched dictator and condemning
them to fight in what would be an unwinnable urban guerrilla
war, it could only plunge that part of the world into ever
greater instability.
--George H. W. Bush, A World Transformed, 1998


Re: *** GMX Spamverdacht *** Re: *** GMX Spamverdacht *** Re: *** GMX Spamverdacht *** Re: paradoxes of randomness

2003-08-18 Thread Tim May
(Will whoever is prepending this "Re: *** GMX Spamverdacht ***" header 
please STOP.)

On Monday, August 18, 2003, at 09:01  AM, Dave Howe wrote:
randomness is a funny thing. a truly random file can be *anything* that
is the right length - including an excerpt from William Shakespeare
(provided you have enough monkeys)
it is most likely to be random garbage, for a large sample - but for a
very small sample the odds of it being an ordered sample are surprisingly
good.
Quibble: only surprising if one misunderstands probability. (Not saying 
you do, just quibbling with any claim that readily calculated 
probabilities can be surprising.)



the obvious example here is a double coin test - two bits. an ordered
sample would be both heads or both tails. a disordered sample would be one
head and one tail. in practice, you would expect to see half the results
from a larger trial (say 32 double-throws) with an ordered sample, and half
disordered.
as you reach three coins, the odds of a completely ordered result decrease
(from 2 in 2^2, to 2 in 2^3). for four coins, you still have the same two
compressible results. consider:
HHHH HHHT HHTH HHTT
HTHH HTHT HTTH HTTT
THHH THHT THTH THTT
TTHH TTHT TTTH TTTT

in a large trial, you would expect to see each of these once every 2^4
tests, on the average. obviously HHHH and TTTT are very compressible.
assuming a run-length encoding, I don't think any of the others are
compressible at all

We should not march into Baghdad. To occupy Iraq would
instantly shatter our coalition, turning the whole Arab
world against us and make a broken tyrant into a latter-
day Arab hero. Assigning young soldiers to a fruitless
hunt for a securely entrenched dictator and condemning
them to fight in what would be an unwinnable urban guerrilla
war, it could only plunge that part of the world into ever
greater instability.
--George H. W. Bush, A World Transformed, 1998
For any sequence of n fair tosses, the length after compression of 
the outcome is roughly, modulo some constants and factors, (1 - 2^(-n)) 
times the uncompressed length. In other words, as n gets large 
nearly all strings have a length after compression that is close to 
the original length, i.e., little compression. As n gets arbitrarily 
large, an arbitrarily small fraction of strings have a short, concise 
description (are compressed).
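The counting argument behind this, as a sketch (assuming a "description"
is just any bit string shorter than the original):

    # There are at most 2^(n-k) - 1 bit strings shorter than n-k bits, so
    # fewer than a 2^-k fraction of the 2^n strings of length n can be
    # compressed by k or more bits.
    n = 32
    for k in range(1, 6):
        fraction = (2 ** (n - k) - 1) / 2 ** n   # strings with a shorter description
        print(f"compressible by >= {k} bits: {fraction:.9f} (< 2^-{k})")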

--Tim May



Re: paradoxes of randomness

2003-08-18 Thread Dave Howe
Tim May wrote:
 (Will whoever is prepending this "Re: *** GMX Spamverdacht ***" header
 please STOP.)
that would be my email provider - and hence me. sorry. They suck in many
ways, but they give an efficient free service with tls support; one of the
ways they suck is to either
a) hide some of your mail in a folder invisible to pop3 (so you have to
use their german-only and non-fishable interface to go look)
b) add that bloody stupid header to random emails that they think are spam
(no idea what criteria are in use as it is all in german)

 Quibble: only surprising if one misunderstands probability. (Not
 saying you do, just quibbling with any claim that readily calculated
 probabilities can be surprising.)
I meant surprising for Sarad - Much of this discussion pre-assumes that he
*does* misunderstand probability but is willing to substitute our
collective insanity for his current ignorance :)

snipping "We should not march into ... Transformed, 1998"
Erm - what was that? a misplaced .sig?



Re: paradoxes of randomness

2003-08-17 Thread martin f krafft
also sprach Sarad AV [EMAIL PROTECTED] [2003.08.17.1219 +0200]:
 Okay - I need 5 bits to represent 32 coins. I count as
 coin 0, coin 1, ... coin 31.

No, you can't count coin 0. Or how will you represent no coins?

I would appreciate if you wouldn't simply include the quoted message
in your reply. Either reply in its context, or delete it altogether.

-- 
martin;  (greetings from the heart of the sun.)
  \ echo mailto: !#^.*|tr * mailto:; [EMAIL PROTECTED]
 
invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver!
 
i have smoked pot. it is a stupid business, like masturbation.
 -- thomas pynchon (v)




Re: paradoxes of randomness

2003-08-17 Thread Sarad AV
hi,

Okay - I need 5 bits to represent 32 coins. I count as
coin 0, coin 1, ... coin 31.
If it is a perfectly random fair coin throwing
experiment, then 50 percent of them will be heads.

So I know that 16 of them will be heads.

What we do is I simply place all the 32 coins on the
table in a row or column.
I look at the first coin and determine if it is a head
or a tail. I repeat the same process till I count 16
heads. If I count 15 heads at coin 31, then I can't
reduce the entropy. However, if I count 16 heads at
coin 30, then I don't have to check that coin 31; I
already know it's a tail, so I have less than 5 bits of
entropy.

So if it is a perfectly random experiment, I wouldn't
get 16 heads before I look at coin 31, which is the
last coin, and that's what you said - isn't it?

So how did Chaitin get to compress the information
from K instances of the Turing machine in

http://www.cs.umaine.edu/~chaitin/summer.html 

under the sub-section "redundant"?

he says-
Is this K bits of mathematical information? K
instances of the halting problem will give us K bits
of Turing's number. Are these K bits independent
pieces of information? Well, the answer is no, they
never are. Why not? Because you don't really need to
know K yes/no answers, it's not really K full bits of
information. There's a lot less information. It can be
compressed. Why? 




If the input programs are truly random, there is no
redundancy, and that's a contradiction to the claim in
the paper.

Thanks.

Regards Sarath.



It's simple, if I am correct. The redundancy simply
 makes you care less about the specific instance you
 are looking at.

  To represent 32 coins - I need 5 bits of information.
  Since the experiment is truly random - I know half of
  them will be heads, so in this case using 5 bits of
  information, I can determine all the coins that are
  heads and that are tails.

 Same deal, unless you are counting pairs, in which
 case you cannot distinguish between the members of
 a pair. You need an extra bit to tell a head from
 a tail.

  So - the question is what is the minimum number of bits
  or entropy required to determine which all coins are
  heads and which all coins are tails - is it 5 bits or 6
  bits of information?

 With 5 bits, you can count to 31, so you need 6.

 Just my two tails.
 




Re: paradoxes of randomness

2003-08-16 Thread martin f krafft
also sprach Sarad AV [EMAIL PROTECTED] [2003.08.16.1321 +0200]:
 f(x)=2x for all x=0,1,2,... ;
 f(x)=2x+1 for all x=0,1,2,...;

You need an extra bit to store which of the two is used. Otherwise,

 f(x)=2x
 
 =2*5
 =10for x=5;
 
 Decimal number 5 can be represented in binary as 101.

... you can't decompress five as you don't know if it's 10 or 11.

and it's exactly this 2^0 bit which your compression kicks off. you
need it, can't do without.
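A tiny sketch of that point (function names are illustrative): halving
discards the low-order bit, and decompression cannot proceed without it.

    def compress(n: int) -> int:
        return n // 2              # maps both 10 and 11 to 5

    def decompress(half: int, low_bit: int) -> int:
        return 2 * half + low_bit  # the extra bit decides between 10 and 11

    assert decompress(compress(10), 0) == 10
    assert decompress(compress(11), 1) == 11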

[...]

 If the input programmes are picked truly randomly, then I know 16
 of the programs will halt (i.e. 50% of the programs halt).

Look at it differently: you have a 50% chance of guessing whether
a given program from the sack will halt or not. This sheds some
different light onto the issue, doesn't it?

I raise a different question: are there more programs that halt than
programs that don't halt, or the other way around, or are they
equal in number?

 So where is the redundancy in different instances of
 the halting problem? I don't see any redundancy.

It's simple, if I am correct. The redundancy simply makes you care
less about the specific instance you are looking at.

 To represent 32 coins - I need 5 bits of information.
 Since the experiment is truly random - I know half of
 them will be heads, so in this case using 5 bits of
 information, I can determine all the coins that are
 heads and that are tails.

Same deal, unless you are counting pairs, in which case you cannot
distinguish between the members of a pair. You need an extra bit to
tell a head from a tail.

 So - the question is what is the minimum number of bits
 or entropy required to determine which all coins are
 heads and which all coins are tails - is it 5 bits or 6
 bits of information?

With 5 bits, you can count to 31, so you need 6.

Just my two tails.
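In numbers (a sketch; ceil(log2(...)) is the usual bit-count bound):

    from math import ceil, log2

    print(ceil(log2(32)))  # 5 bits index one of 32 coins, 0..31
    print(ceil(log2(33)))  # 6 bits store a head-count 0..32 (33 values)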

-- 
martin;  (greetings from the heart of the sun.)
  \ echo mailto: !#^.*|tr * mailto:; [EMAIL PROTECTED]
 
invalid/expired pgp subkeys? use subkeys.pgp.net as keyserver!
 
i love deadlines. i like the whooshing
 sound they make as they fly by.
  -- douglas adams




Re: paradoxes of randomness

2003-08-16 Thread Morlock Elloi
 - N+1 is the smallest integer that's not interesting.
   But that's interesting in itself - so N+1 is interesting.


It breaks down after a few consecutive non-interesting integers.

In fact, there is a proof somewhere that 17, 18 and 19 are not interesting at
all.






Re: CDR: paradoxes of randomness-errata

2003-08-16 Thread Justin
Sarad AV (2003-08-16 11:26Z) wrote:

 it comes to such a question-
 
 I do a fair coin throwing experiment with 64 coins.
 
 To represent 64 coins, I need 5 bits of information.
 
 To represent 64 coins, I need 6 bits of information :)

To deal with 65 possibilities, you need 7 bits (well, 6.022)...

-- 
No man is clever enough to know all the evil he does.
-Francois de la Rochefoucauld
Times are bad. Children no longer obey their parents,
and everyone is writing a book.  -Cicero



Re: paradoxes of randomness

2003-08-16 Thread Bill Stewart
The standard proof that all positive integers are interesting goes like this:
- 1 is the smallest positive integer.  That's interesting.
- Suppose that you've proven that 1..N are interesting.
  Then either N+1 is interesting, and you continue the induction process, or
- N+1 is the smallest integer that's not interesting.
  But that's interesting in itself - so N+1 is interesting.
You can extend this to all integers, and to the rational numbers
using the kinds of ordering that some of Cantor's proofs did.
Doesn't work for real numbers, though - you can have a
"nothing between X and Y is interesting, but X and Y are",
without having any smallest number above X.


paradoxes of randomness-errata

2003-08-16 Thread Sarad AV
it comes to such a question-

I do a fair coin throwing experiment with 64 coins.

To represent 64 coins, I need 5 bits of information.


To represent 64 coins, I need 6 bits of information :)

Regards Sarath.
