to Will: Sure you can make that program.
Any program that has no random number generator will always run the same on the same input; that's a core concept of computer science.
(Given the same input and the same database state, the output is determined.)
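A minimal sketch of that determinism property (the function and names here are purely illustrative, not from any real AGI system):

```python
# A function with no randomness and no hidden state: its output depends
# only on the input and the explicitly passed-in "database state".
def next_action(perception, memory):
    # Deterministic rule: uppercase the perception, append it to memory.
    return (perception.upper(), memory + [perception])

# Same input plus same state -> identical result, every single run.
state = ["hello"]
run1 = next_action("ping", list(state))
run2 = next_action("ping", list(state))
assert run1 == run2  # always holds for a deterministic program
```

The point is only that repeatability follows from excluding randomness and hidden state, not that an AGI would be this simple.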

to Matt: For that I would need your definition of autonomy.
From Wikipedia:
Autonomy (one who gives oneself his own law) means freedom from external
authority. Within these contexts it refers to the capacity of a rational
individual to make an informed, uncoerced decision.

In robotics "autonomy means independence of control. This characterization 
implies that autonomy is a property of the relation between two agents, in the 
case of robotics, of the relations between the designer and the autonomous 
robot. Self-sufficiency, situatedness, learning or development, and evolution 
increase an agent’s degree of autonomy.", according to Rolf Pfeifer.

So this appears to be "an agent that is controlling itself."
Now I would argue that an AGI given a control unit that does not answer
directly to a human (i.e. a Google-style query answerer, or a directly
commanded bot) would fairly quickly have autonomy.

Now there is a restriction on autonomy, imposed by the environment and, to a
degree, always by other entities.
Humans are believed to be autonomous, but we must act within the laws of
physics and our environment.  I choose what I am going to do next.
These choices, however, are limited by my "internal programming", i.e. the
limits and bounds of what I know how to do, and what I am able to do.

An AGI that has the ability to choose its next action would be autonomous as
well, though it still has to act within the bounds of its environment.
These choices, however, are limited by its "internal programming", i.e. the
limits and bounds of what it knows how to do, and what it is able to do.
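That idea can be sketched in code: an agent that freely picks its own next action, but only from the intersection of what its internal programming allows and what the environment permits (the class and names are illustrative assumptions, not any real system's API):

```python
class BoundedAgent:
    """An agent that chooses its own next action, within its own limits."""

    def __init__(self, abilities):
        # The "internal programming": what the agent knows how to do.
        self.abilities = abilities

    def choose(self, environment_allows):
        # Autonomy: the agent itself selects the action, but the choice is
        # bounded by both its abilities and the environment's constraints.
        options = [a for a in self.abilities if a in environment_allows]
        return options[0] if options else None  # simple deterministic policy

agent = BoundedAgent(abilities=["move", "speak", "compute"])
agent.choose(environment_allows={"speak", "fly"})  # the only legal option is "speak"
```

Nothing outside the agent dictates which action it takes, yet every choice it can make is still constrained, just as described above for humans.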

I would go much FURTHER and say that a complex agent such as a race-car
opponent simulator is an autonomous agent: within its world, it has free
choice among the things it is able to do, even though it must still act
within the bounds of the environment it is in.

So I would say that an AGI is as autonomous as a person is under those 
definitions.


For the consciousness argument I would take the same route: point to something
that a conscious human can do that an AGI could not.

James Ratcliff

William Pearson <[EMAIL PROTECTED]> wrote:

On 04/06/07, Matt Mahoney wrote:
> Suppose you build a human level AGI, and argue
> that it is not autonomous no matter what it does, because it is
> deterministically executing a program.
>

I suspect an AGI that executes one fixed unchangeable program is not
physically possible.

  Will Pearson

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       
