On 1/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Summary of the difference:
>
> 1) I am not even convinced that an AI driven by a GS will ever actually
> become generally intelligent, because of the self-contradictions built
> into the idea of a goal stack.  I am fairly sure that whenever anyone
> tries to scale one of those things up to a real AGI (something that has
> never been done, not by a long way) the AGI will become so unstable that
> it will be an idiot.
>
> 2) A motivation-system AGI would have a completely different set of
> properties, and among those properties would be extreme stability.  It
> would be possible to ensure that the thing stayed locked on to a goal
> set that was human-empathic, and which would stay that way.
>
> Omohundro's analysis is all predicated on the Goal Stack approach, so
> my response is that nothing he says has any relevance to the type of AGI
> that I talk about (which, as I say, is probably going to be the only
> type ever created).

Hmm. I'm not sure of the exact definition you're using for the term
"motivational AGI", so let me hazard a guess based on what I remember
reading from you before: do you mean something along the lines of a
system built out of several subsystems with partially conflicting
desires, which constantly compete for control and exert various kinds
of "pull" on the behavior of the system as a whole? And you contrast
this with a goal-stack AGI, which would have only one or a couple of
such subsystems?
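
To make sure I'm picturing the same thing, here's a rough Python toy
of the contrast as I understand it. All of the class names, drive
names, weights and numbers are made up purely for illustration, not a
claim about your actual design:

# Toy sketch only: my attempt to make the distinction concrete.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class GoalStackAgent:
    """Classic goal-stack control: one explicit stack of goals, and the
    top goal fully determines what the agent works on next."""
    stack: List[str] = field(default_factory=list)

    def current_goal(self) -> str:
        return self.stack[-1] if self.stack else "idle"


@dataclass
class MotivationalAgent:
    """Several drive subsystems, each scoring candidate actions; behavior
    emerges from their combined weighted 'pull' rather than from any
    single explicit goal."""
    drives: Dict[str, Callable[[str], float]]   # drive name -> pull over actions
    weights: Dict[str, float]                   # how much influence each drive has

    def pull(self, action: str) -> float:
        return sum(self.weights[name] * score(action)
                   for name, score in self.drives.items())

    def choose(self, candidate_actions: List[str]) -> str:
        return max(candidate_actions, key=self.pull)


# Example: curiosity and empathy pull in partially conflicting directions.
agent = MotivationalAgent(
    drives={
        "curiosity": lambda a: 1.0 if a == "run_experiment" else 0.2,
        "empathy":   lambda a: 1.0 if a == "help_user" else 0.3,
    },
    weights={"curiosity": 0.6, "empathy": 0.4},
)
print(agent.choose(["run_experiment", "help_user"]))  # -> "run_experiment"

The point of the toy is just that in the second agent no single goal
"owns" the behavior; what gets done is whatever the combined pulls
happen to favor at the moment.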

While this is certainly a major difference on the architectural level,
I'm not entirely convinced that it makes a large difference in
behavioral terms, at least in this context. In order to accomplish
anything, the motivational AGI would still have to formulate goals and
long-term plans. Once it had hammered out acceptable goals that the
majority of its subsystems agreed on, it would set out to develop ways
of fulfilling those goals as effectively as possible, making it
subject to the pressures outlined in Omohundro's paper.
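
As a toy version of that "hammering out" step (purely my own guess at
one way it could work, not something taken from your design or from
Omohundro), one could imagine a candidate goal being adopted only once
a majority of the subsystems rate it as acceptable, after which the
agent optimizes for it like any other goal-directed system:

# Another made-up sketch: a goal is adopted only if a majority of
# subsystems rate it at or above some acceptability threshold.

from typing import Callable, Dict


def adopt_goal(goal: str,
               subsystems: Dict[str, Callable[[str], float]],
               threshold: float = 0.5) -> bool:
    """Return True if more than half of the subsystems rate `goal`
    at or above `threshold`."""
    votes = sum(1 for rate in subsystems.values() if rate(goal) >= threshold)
    return votes > len(subsystems) / 2


subsystems = {
    "curiosity":         lambda g: 0.9 if "learn" in g else 0.1,
    "empathy":           lambda g: 0.8 if "help" in g else 0.4,
    "self_preservation": lambda g: 0.6,   # mildly positive about most goals
}

print(adopt_goal("learn to help users", subsystems))  # True: majority on board
print(adopt_goal("maximize paperclips", subsystems))  # False: only one approves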

The utility function that it would model for itself would be
considerably more complex than that of an AGI with fewer subsystems,
as it would have to be a compromise between the desires of each
subsystem "in power", and if the balance of power were upset too
radically, the modeled utility function might even change entirely
(much as different moods in humans give control to different networks,
altering the current desires and effective utility functions).
However, AGI designers likely wouldn't make the balance of power
between the different subsystems /too/ unstable, as an agent that
constantly changed its mind about what it wanted would just go around
in circles. So it sounds plausible that the utility function it
generated would remain relatively stable, and that the motivational
AGI's behavior would be optimized just as Omohundro's analysis
suggests.
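
To spell out what I mean by the compromise, here's one more toy
sketch, under the admittedly crude assumption that the effective
utility function can be treated as a power-weighted sum of the
subsystem utilities. A "mood shift" is then just a re-weighting: a
small shift nudges the ranking of outcomes, while a radical one can
flip it entirely:

# Sketch of the compromise utility idea; all utilities and weights invented.

from typing import Callable, Dict


def effective_utility(outcome: str,
                      utilities: Dict[str, Callable[[str], float]],
                      power: Dict[str, float]) -> float:
    """Weighted compromise between the subsystems currently 'in power'."""
    total = sum(power.values())
    return sum(power[name] * u(outcome) for name, u in utilities.items()) / total


utilities = {
    "curiosity": lambda o: {"explore": 1.0, "socialize": 0.2}[o],
    "empathy":   lambda o: {"explore": 0.3, "socialize": 1.0}[o],
}

balanced_mood = {"curiosity": 0.5, "empathy": 0.5}   # balanced power
lonely_mood   = {"curiosity": 0.1, "empathy": 0.9}   # radical shift in power

for mood in (balanced_mood, lonely_mood):
    ranked = sorted(["explore", "socialize"],
                    key=lambda o: effective_utility(o, utilities, mood),
                    reverse=True)
    print(ranked)
# Balanced power:    ['explore', 'socialize']  (0.65 vs 0.60)
# Empathy-dominated: ['socialize', 'explore']  (0.92 vs 0.37)

As long as the re-weighting stays moderate, the compromise function
stays stable enough to optimize, which is all Omohundro's argument
seems to need.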

-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
