On 9/18/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
I think that before we can debate whether AGI will be friendly, there are some
fundamental questions that need to be answered first.
1. If we built an AGI, how would we know if we succeeded?
2. How do we know that AGI does not already exist?
I think your questions go towards the general concept of intelligence
and how to recognize it. My reply is that intelligence is socially
defined; an alien hive mind might look, in its fashion, at humans and
deny their status as hive-like beings (by its lights, the true metric
of advancement), while acknowledging our ability to make complex
structures and communicate data effectively... We, of course, would
see their technology and wonder that such things could be made by
beings with no sense of self. My use of 'deny' and 'acknowledge' is
loose, but in both cases I simply mean building models of the world
that include some things and exclude others.
When we look at an AI, we may draw much the same conclusions. An AI
may not resemble a human very much at all - but to be worthy of the
name, it must do some observable and objectively definable things,
such as form models of the world, create plans based on those models,
and execute them. Selfhood is probably inevitable once a certain
amount of skill at modelling is achieved, but it may not take the
same form as our own. AIs that merely ponder, for instance, are not
sufficiently worldly to count as general AI as we understand it.
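To make the "observable and objectively definable" criterion concrete,
here is a minimal sketch of the model-plan-execute loop I have in mind.
Everything in it (the WorldModel and Agent names, the toy one-dimensional
world) is hypothetical and purely illustrative, not a claim about how any
actual AGI would be built:

# Illustrative sketch of the model -> plan -> execute loop described
# above. All names and the toy world are invented for this example.

class WorldModel:
    """The agent's internal picture of the world: its believed position."""
    def __init__(self):
        self.position = 0            # where the agent believes it is

    def update(self, observation):
        self.position = observation  # fold a new observation into the model


class Agent:
    def __init__(self, goal):
        self.model = WorldModel()    # forms a model of the world
        self.goal = goal

    def plan(self):
        """Create a plan (a list of moves) based on the current model."""
        distance = self.goal - self.model.position
        step = 1 if distance > 0 else -1
        return [step] * abs(distance)

    def execute(self, world_position):
        """Carry the plan out against the (toy) external world."""
        for move in self.plan():
            world_position += move              # act on the world
            self.model.update(world_position)   # observe and update the model
        return world_position


agent = Agent(goal=5)
final = agent.execute(world_position=0)
print("agent believes it is at", agent.model.position, "world says", final)

The point is only that modelling, planning, and executing are externally
checkable behaviours, whatever the internals happen to look like.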
I think this also answers your second question. Things that are so
beyond us as to have no effect on us should not be of any consequence
when it comes to our own plans for AI.
--Nate