Cenny,

If you're saying that you can give an agent general, conceptual goals and control its interpretation of them in every circumstance, please give an example of one such goal or set of goals. The entire legal profession and all philosophers of language are waiting to hear from you.


----- Original Message ----- From: "Cenny Wenner" <[EMAIL PROTECTED]>
To: <singularity@v2.listbox.com>
Sent: Thursday, July 12, 2007 3:41 PM
Subject: Re: [singularity] ESSAY: Why care about artificial intelligence?


On 7/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
>> 2."An AI programmed only to help humanity will only help humanity."
>> Really?
>> George Bush along with every other leader is programmed to "help
>> humanity."
>
> George Bush is human. He has plenty of other goals in _addition_ to
> "helping humanity" - if he even has that goal at all...

Comment: You and others seem to be missing the point, which evidently needs
spelling out. There is no way of endowing any agent with conceptual goals
that cannot be interpreted in ways opposite to the designer's intentions -
that lies in the general, abstract nature of language and symbolic systems.

For example, in particular, concrete situations, the general, abstract goal
of "helping humanity" could legitimately be interpreted as wiping out the
entire human race (bar, say, two) for the sake of future generations.


Any agent's actions can be described by a (possibly nondeterministic)
policy or norm. If that policy adheres to some property and the agent
was constructed through programming, then the agent has been programmed
to adhere to that property. The problem in your example is the
interpretation of an ambiguous and vague statement. It is the
behaviour, and not its representation in some arbitrary system, that is
relevant. If an agent is given a natural-language goal that needs to be
interpreted, we have to take the interpretation into account: how should
the two goals be weighted? If we can fulfil one but not the other, which
action is preferred? How should inconsistencies be dealt with? These are
questions of interpretation, not enigmas of the policy itself.
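
To make the point concrete, here is a minimal sketch (not from the
original exchange; every name in it - Goal, choose_action, the toy goals
and weights - is hypothetical) of what "taking the interpretation into
account" amounts to once the behaviour is written down as a policy: the
trade-off between competing goals has to be made explicit somewhere, and
different weightings resolve the same pair of goals differently.

from typing import Callable, Dict, List

# Purely for illustration, a goal is modelled as a function scoring an
# action's outcome; higher is better.
Goal = Callable[[str], float]

def choose_action(actions: List[str],
                  goals: Dict[str, Goal],
                  weights: Dict[str, float]) -> str:
    """Pick the action with the best weighted sum of goal scores.

    The weights are the designer's explicit answer to "how should the
    two goals be weighted?" - the policy cannot leave that ambiguous.
    """
    def score(action: str) -> float:
        return sum(weights[name] * goal(action) for name, goal in goals.items())
    return max(actions, key=score)

# Toy scenario: "help humanity" versus "don't kill anyone".
goals = {
    "help_humanity": lambda a: {"do_nothing": 0.0,
                                "intervene_lethally": 0.9,
                                "intervene_nonlethally": 0.7}[a],
    "dont_kill": lambda a: {"do_nothing": 1.0,
                            "intervene_lethally": 0.0,
                            "intervene_nonlethally": 1.0}[a],
}
actions = ["do_nothing", "intervene_lethally", "intervene_nonlethally"]

# Equal weights prefer the non-lethal intervention...
print(choose_action(actions, goals, {"help_humanity": 1.0, "dont_kill": 1.0}))
# ...while discounting "don't kill" flips the choice to the lethal one.
print(choose_action(actions, goals, {"help_humanity": 1.0, "dont_kill": 0.1}))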

And there is no reasonable way to circumvent this. You couldn't, say,
instruct an agent to "help humanity but don't kill any human beings",
because what if some humans (like, say, Bush) were threatening to kill
vastly greater numbers of other humans... wouldn't you want the agent to
intervene? And if, even so, you decided to instruct the agent not to
kill, it could still do something as bad as killing by rendering humans
vegetables while leaving them alive.

So many people here and everywhere are unaware of the general and
deliberately imprecise nature of language - much stressed by Derrida and
exemplified in the practice of law.









-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=19927706-30cc2e
