Vladimir Nesov wrote:
On 10/23/07, *Richard Loosemore* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:

    To make a system do something organized, you would have to give it goals
    and motivations.  These would have to be designed:  you could not build
    a "thinking part" and then leave it to come up with motivations of its
    own.  This is a common science fiction error:  it is always assumed that
    the thinking part would develop its own motivations.  Not so:  it has to
    have some motivations built into it.  What happens when we imagine
    science fiction robots is that we automatically insert the same
    motivation set as is found in human beings, without realising that this
    is a choice, not something that comes as part and parcel, along with
    pure intelligence.

It can always pick something at random, can't it? Of course you can say that to do so, it must already have a motivation for it; it all comes down to the presence of a design choice that makes speaking about motivations (as extracted from behavior as a whole) meaningful.

It can pick random thoughts to pursue, of course, but if there is no structure at all, I believe the system as a whole would not appear to be in the least bit intelligent: it would not even be able to learn its way up to adult intelligence, because it would be unable to engage in any kind of learning behavior. From the outside, nobody would say that it is behaving intelligently at all.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56691533-cc5d30