--- Mitchell Porter <[EMAIL PROTECTED]> wrote:

> 
> I said
> 
> >If you the programmer ('you' being an AI, I assume) already have the
> >concept of probability, and you can prove that a possible program
> >will estimate probabilities more accurately than you do, you should
> >be able to prove that it would provide an increase in utility, to a
> >degree depending on the superiority of its estimates and the
> >structure of your utility function. (A trivial observation, but
> >that's usually where you have to start.)
> 
> Suppose that
> 
> the environment is a two-state Markov process, pr(A)=p, pr(B)=1-p;
> your modelling freedom consists in setting q, your guess at the value
> of p; and utility at timestep t is just the cumulative number of
> correct guesses.

This is an oversimplification, even when compared against simplistic
Bayesian/expected-utility models. In real life, utility is not
directly derived from guessing correctly, except on multiple-choice
tests. Guessing correctly is useful only insofar as it helps you
direct future events. This can be trivially demonstrated by
hypothesizing a machine which predicts the future with perfect
accuracy but is cut off from communication of any sort with the world.
Most of us would assign such a machine no utility at all, I think.

> Then at time t, for a given q, expected utility is
> EU_q[t] = t(pq + (1-p)(1-q)).
> 
> It should not be hard to prove that
> |p-q0| < |p-q1| implies EU_q0[t] > EU_q1[t].
> 
> What I had in mind was a situation in which there is a programmable
> external device with higher-precision arithmetic than you have, so it
> can estimate p better than you.
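
Mitchell's toy environment is easy to check numerically, for what it's
worth. A minimal sketch in Python (p and the candidate q values below
are made up for illustration), comparing the per-step expected utility
EU_q[t]/t = pq + (1-p)(1-q) against simulation:

import random

def expected_utility(p, q):
    # Mitchell's formula, per timestep: probability of one correct guess.
    return p * q + (1 - p) * (1 - q)

def simulate(p, q, steps=100000, seed=0):
    # Guess A with probability q against an environment that is A with
    # probability p; return the fraction of correct guesses.
    rng = random.Random(seed)
    correct = sum((rng.random() < q) == (rng.random() < p)
                  for _ in range(steps))
    return correct / steps

p = 0.7
for q in (0.5, 0.6, 0.7):
    print(q, expected_utility(p, q), round(simulate(p, q), 4))

For these q, at least, the closer guess does earn more per-step
utility.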

In real life, higher-precision arithmetic is laughably irrelevant to
probability accuracy. Bog-standard single-precision floats are good to
about seven significant decimal digits and doubles to about fifteen,
while our probability estimates are often off by several orders of
magnitude.
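
To put rough numbers on that, here is a minimal sketch (the true p and
the sample size N are made-up values): it compares the error from
rounding a probability to single precision against the ordinary
sampling error of estimating that probability from a thousand
observations.

import math
import random
import struct

random.seed(0)
p = 0.7     # "true" probability, made up for illustration
N = 1000    # number of observations available, also made up

# Estimation error: estimate p from N Bernoulli observations.
estimate = sum(random.random() < p for _ in range(N)) / N
sampling_error = abs(estimate - p)

# Rounding error: squeeze p through a 32-bit float and back.
p32 = struct.unpack('f', struct.pack('f', p))[0]
rounding_error = abs(p32 - p)

print("sampling error:   %.1e" % sampling_error)             # ~1e-2
print("float32 rounding: %.1e" % rounding_error)             # ~1e-8
print("theoretical s.d.: %.1e" % math.sqrt(p * (1 - p) / N))

The data pins p down to about two decimal places; even the lowly
float is good to about eight. Extra mantissa bits are not the
bottleneck.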

> It's a rather artificial example (although this is the human
> situation with respect to electronic hardware), but the situation
> involved would not be hard to represent, superficially anyway, and
> that would be enough for the deduction to be made.

Isn't there some rule of logic that says you cannot
prove a general principle by invoking a specific
example? It won't do us any good to say that "AI will
be able to test for a better program in one case",
because the AI will come across millions if not
billions of cases, even while it's still below
transhuman intelligence.

> So that's a simple case, "where the statistical structure of the
> environment is known", as you put it below. The more abstract cases
> will revolve around proofs of *algorithmic* superiority, perhaps.

You can't mathematically prove an algorithm superior
when you don't know the statistical structure of the
environment. Proof:

1. Algorithm A does multiplication at a speed of 100 MegaComps.
2. Algorithm B also does multiplication, but at a speed of 110
MegaComps on the same hardware, to the same precision.
3. It is impossible for us to know, by assumption (since we don't know
the statistical structure of the environment), which algorithm is more
likely to emit a bit that would then cause a chain reaction resulting
in Really Undesirable Consequences (tm).
4. A proof would have to prove either Algorithm A or Algorithm B
superior.
5. Which algorithm the proof declares superior obviously cannot depend
on which algorithm would actually emit that bit, since if it did, we
could work backward and recover the very information we assumed we
didn't have, resulting in a contradiction.
6. Split the universe into two parallel paths: Universe A', in which A
is the algorithm that emits the harmful bit, and Universe B', in which
B is.
7. Because of statement 5, the proof in Universe A' and Universe B'
must be the same.
8. In Universe A', A is the harmful algorithm, and in Universe B', B
is.
9. Therefore, in Universe A' algorithm B is superior, and in Universe
B' algorithm A is superior.
10. Because of statements 4, 7, and 9, the proof must be wrong in
either A' or B'.
11. A proof, by definition, cannot be wrong. QED.
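
If it helps, the shape of that argument fits in a few lines of Python
(a toy illustration of my own, nothing rigorous): any "proof" that
must return the same verdict in both universes is a constant function,
and a constant verdict endorses the harmful algorithm in one of them.

universes = {
    "A'": "A",   # in Universe A', algorithm A emits the harmful bit
    "B'": "B",   # in Universe B', algorithm B does
}

def proof_verdict():
    # Statement 5: the verdict cannot depend on which universe we are
    # in, so it is a constant. Say it picks B (110 > 100 MegaComps).
    return "B"

for name, bad_algorithm in universes.items():
    outcome = ("Really Undesirable Consequences (tm)"
               if proof_verdict() == bad_algorithm else "fine")
    print("Universe %s: proof picks %s -> %s"
          % (name, proof_verdict(), outcome))
# Whichever constant verdict the proof returns, it picks the harmful
# algorithm in exactly one of the two universes.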

 - Tom

> Eliezer said
> 
> >Mitch, I haven't found that problem to be trivial if one seeks a
> >precise demonstration. I say "precise demonstration", rather than
> >"formal proof", because formal proof often carries the connotation
> >of first-order logic, which is not necessarily what I'm looking for.
> >But a line of reasoning that an AI itself carries out will have some
> >exact particular representation and this is what I mean by
> >"precise". What exactly does it mean for an AI to believe that a
> >program, a collection of ones and zeroes, "estimates probabilities"
> >"more accurately" than does the AI? And how does the AI use this
> >belief to choose that the expected utility of running its program is
> >ordinally greater than the expected utility of the AI exerting
> >direct control? For simple cases - where the statistical structure
> >of the environment is known, so that you could calculate the
> >probabilities yourself given the same sensory observations as the
> >program - this can be argued precisely by summing over all probable
> >observations. What if you can't do the exact sum? How would you make
> >the demonstration precise enough for an AI to walk through it, let
> >alone independently discover it?
> >
> >*Intuitively* the argument is clear enough, I agree.