On 2/4/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> Hi Russell,
>
> OK, I'll try to specify my ideas in this regard more clearly.  Bear
> in mind though that there are many ways to formalize an intuition,
> and the style of formalization I'm suggesting here may or may not be
> the "right" one.  With this sort of thing, you only know if the
> formalization is right after you've proved some theorems using it....


Okay, that's reasonably clear, thanks.

> My desire in this context is to show that, for agents that are
> optimal or near-optimal at achieving the goal G under resource
> restrictions R, the set of important implicit abstract expectations
> associated with the agent (in goal-context G as assessed by an ideal
> probabilistic observer) should come close to being consistent.
>
> Clearly, this will hold only under certain assumptions about the
> agent, the goal, and the resource restrictions, and I don't know what
> these assumptions are.


Well... given infinite R, the expectations should be perfectly consistent,
right? (I'm guessing that has been formally proved, though I don't know for
sure offhand; it's intuitively clear at least.)

Now given finite R, it's important to conserve resources. So if we imagine
the agent being created by a conscious programmer, the programmer would need
to ask "what sort of situations will typically arise? and what sort of
actions in those situations will typically be _important_ to achieving G?
I'd better make sure to spend the limited resources on important things that
will arise frequently." (This is of course what people actually do when they
try to optimize code.)

So I would expect a minimization of _important_ inconsistencies (relative to
G) in _typical_ situations. (This is indeed what we tend to find in
biological life forms.)

How to formally prove that, I don't know; I'm not a mathematician. A guess
at a possibly fruitful approach might be to say... *goes out for smoke break
in an attempt to jolt brain into providing words for a vague idea...*

Okay... let's say when an agent exhibits inconsistent implicit preferences
for acting in a particular situation, it is being suboptimal, and the degree
of suboptimality depends on the degree of inconsistency.

Given:

S = a situation
I(S) = importance (to G) of acting correctly in situation S
D(S) = degree of suboptimality of the action chosen in situation S
F(S) = fraction of the time S occurs

Then:

Suboptimality of an agent = sum over S of I(S) * D(S) * F(S)

So clearly you want to minimize D(S) in the situations where I(S) and F(S)
are high. Does that help?
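
To make the arithmetic concrete, here is a rough sketch in Python (the
situation names and all of the numbers are invented purely for illustration;
this is not meant as a real formalization, just the shape of the calculation):

# Toy example of the weighted sum above: suboptimality as an
# importance- and frequency-weighted total over situations.
# All names and numbers are made up for illustration.

situations = {
    # situation:        I (importance), D (suboptimality), F (frequency)
    "food_nearby":     {"I": 0.9, "D": 0.1, "F": 0.30},
    "predator_nearby": {"I": 1.0, "D": 0.2, "F": 0.05},
    "odd_corner_case": {"I": 0.1, "D": 0.9, "F": 0.001},
}

def suboptimality(sits):
    # Sum over S of I(S) * D(S) * F(S)
    return sum(v["I"] * v["D"] * v["F"] for v in sits.values())

print(suboptimality(situations))  # roughly 0.037

Halving D on "food_nearby" removes about 0.0135 from that total, while
driving D on "odd_corner_case" all the way to zero removes only 0.00009;
that's the sense in which a resource-limited agent should spend its effort
on being consistent in the important, common situations and can afford to
stay inconsistent in the unimportant, rare ones.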

> Now, I agree that this is all kind of obvious, intuitively.  But
> "kind of obvious" doesn't mean "trivial to prove."  Pretty much all
> of Marcus Hutter's results about AIXI are kind of obvious too,
> perhaps even more so than the hypotheses I've made above  -- it's
> intuitively quite clear that AIXI can achieve an arbitrarily high
> level of intelligence, and that it can perform just as well as any
> other algorithm up to a (large) constant factor.  Yet, to prove this
> rigorously turned out to be quite a pain, given the mathematical
> tools at our disposal, as you can see from the bulk and complexity of
> Hutter's papers.


And of course it's only true for some definitions of intelligence. Take a
blank, infinitely powerful computer, code up AIXI on it, and type "What color
is the sky?"; it won't be able to answer, whereas a human could, because it
simply doesn't have the data. If you define intelligence so that use of
background knowledge "doesn't count", then yes, it's intuitively obvious
(though nontrivial to formally prove) that an infinitely powerful computer
running a simple algorithm like AIXI can maximize intelligence.
