On Sat, Aug 23, 2008 at 11:38 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Just wanted to add something, to bring it back to feasibility of
> embodied/unembodied approaches. Using the definition of embodiment
> I described, it needs to be said that it is impossible to specify the goals
> of the agent, because in so doing, you'd be passing it information in an
> unembodied way. In other words, a fully-embodied agent must completely
> structure internally (self-organize) its model of the world, such as it is.
> Goals must be structured as well. Evolutionary approaches are the only
> means at our disposal for shaping the goal systems of fully-embodied
> agents, by providing in-built biases towards modeling the world in a way
> that is in alignment with our goals. That said, Friendly AI is impossible
> to guarantee for fully-embodied agents.
>

The last post by Eliezer provides handy imagery for this point (
http://www.overcomingbias.com/2008/08/mirrors-and-pai.html ). You
can't have an AI of perfect emptiness, without any goals at all,
because it won't start doing *anything*, or anything right, unless the
urge is already there (
http://www.overcomingbias.com/2008/06/no-universally.html ). But you
can have an AI with a bootstrapping mechanism that tells it where to
look for goal content and tells it to absorb and embrace that content.
Evolution has nothing to do with it, except in the sense that it was
one process that implemented the bedrock of the goal system, taking
the first step that initiated any kind of moral progress. But
evolution certainly isn't an adequate way to proceed from now on.
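
To make the bootstrapping idea a bit more concrete, here is a minimal toy
sketch in Python. The names (GoalBootstrap, goal_source, absorb) are entirely
hypothetical, and this is only an illustration of the shape of such a
mechanism, not anyone's actual proposal: the agent starts with no
object-level goals, only an in-built pointer to where goal content lives and
a rule for adopting what it finds there.

# Toy illustration of a goal-bootstrapping mechanism (hypothetical names).
# The agent has no object-level goals at the start, only a built-in pointer
# to where goal content can be found and a rule for embracing what it finds.

class GoalBootstrap:
    def __init__(self, goal_source):
        # goal_source: any iterable of candidate goals in the environment,
        # e.g. observations of what the agent's creators value.
        self.goal_source = goal_source
        self.goals = []          # starts empty: no object-level goals yet

    def absorb(self):
        """The in-built urge: look at the source and embrace its content."""
        for candidate in self.goal_source:
            if candidate not in self.goals:
                self.goals.append(candidate)
        return self.goals


# Usage: the only thing specified in advance is *where to look*,
# not the goals themselves.
agent = GoalBootstrap(goal_source=["keep humans informed", "avoid harm"])
print(agent.absorb())   # goals are acquired from the environment, not hard-coded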

> The question then becomes, is it necessary to implement full embodiment,
> in the sense I have described, to arrive at AGI. I think most in this
> forum will say that it's not. Most here say that embodiment (at least
> partial embodiment) would be useful but not necessary.
>

Basically, non-embodied interaction as you described it is
extracognitive interaction, a workaround that doesn't comply with the
protocol established by the cognitive algorithm. If you can do that,
fine, but the cognitive algorithm is there precisely because we can't
build a mature AI by hand, by directly reaching into the AGI's mind;
we need a subcognitive process that will assemble its cognition for
us. It is basically the same problem for general intelligence as for
Friendliness: you can neither assemble an AGI that already knows all
the stuff and possesses human-level skills, nor one that already has
proper humane goals. You can only create a metacognitive, metamoral
process that will collect both from the environment.


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

