On 7/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
Cenny,

If you're saying that you can give an agent general, conceptual goals and
control its interpretation of them in every circumstance, please give an
example of one such goal or set of goals. The entire legal profession and
all philosophers of language are waiting to hear from you.


Sorry, I do not want to sound arrogant. I'm not all that experienced,
so I do not place too much certainty behind my words, even if they
seem to me to be fairly well supported.

In a suitable formal language we could define the goal of avoiding dark
colors while approaching light ones, together with a well-defined
weighting and the domain knowledge (part of the interpretation) that
defines the two in terms of optical sensors.
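To make that concrete, here is a toy sketch in Python of the kind of
thing I mean - the brightness reading, the weights and the helper names
are all invented for illustration, not taken from any real system:

    # Illustrative weights only; the designer-supplied interpretation fixes them.
    W_APPROACH_LIGHT = 0.7
    W_AVOID_DARK = 0.3

    def goal_value(brightness):
        # brightness: assumed optical-sensor reading in [0, 1]
        # reward light readings, penalise dark ones
        return W_APPROACH_LIGHT * brightness - W_AVOID_DARK * (1.0 - brightness)

    def best_action(candidate_actions, predicted_brightness):
        # predicted_brightness maps each action to its predicted sensor outcome
        return max(candidate_actions, key=lambda a: goal_value(predicted_brightness[a]))

Once the weighting and the sensor grounding are pinned down like this,
there is no further room for the goal to be "interpreted" against the
designer's intention.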

That a goal expressed in natural language may be interpreted
differently in different contexts (including agents and history), and
possibly even indeterministically, does not mean that controlled
interpretation is impossible in every language - and a (language,
interpretation) pair is as general as it gets. You were making a very
broad statement about all agents:

There is no way of endowing any agent with conceptual goals
that cannot be interpreted in ways opposite to the designer's intentions -
that is in the general, abstract nature of language & symbolic systems.

Interpretation relies on parameters - in natural language one may see
context as either an internal or an external parameter (both views are
valid). If it is considered internal, i.e. we are talking about the
interpretation of the pair (context, natural_language_segment), we only
require determinism to ensure that it follows the designer's intended
interpretation. If context is rendered irrelevant and the language is
deterministic, the language segment alone is enough. Since past and
current outcomes are part of the history (of the belief state), there
is a deterministic language in which we may express any policy.
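In code, the point is only that a policy can be written as a
deterministic function of the history plus the current observation; the
observation and action values below are placeholders of my own:

    from typing import Sequence, Tuple

    Observation = str
    Action = str
    History = Sequence[Tuple[Observation, Action]]

    def policy(history, current_obs):
        # the same history and observation always yield the same action;
        # context enters only as an explicit parameter
        if current_obs == "dark" or any(obs == "dark" for obs, _ in history):
            return "turn_away"
        return "move_forward"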

In some languages, we could precisely define policies for arbitrary
goals. This might be the case for non-stationary natural languages as
well (given a deterministic universe, for instance), although we would
not necessarily know the parameters just because we know that they
exist. That is, if a concept X means one thing in state s1 and another
in s2, that difference is simply part of a possibly history-dependent
policy.
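Again purely as a made-up illustration: the interpretation of a concept
can itself be indexed by the state, and that indexing is just one more
(known or unknown) part of the policy:

    # Invented table: the same token "X" maps to different referents per state.
    INTERPRETATION = {
        ("X", "s1"): "referent_in_s1",
        ("X", "s2"): "referent_in_s2",
    }

    def interpret(concept, state):
        # deterministic once the state (or history) is supplied as a parameter
        return INTERPRETATION[(concept, state)]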

A problem is that I do not quite grasp the concept of "general
conceptual goals". It seems like you are making an artificial
distinction, or possibly a meta-level distinction, so that in the
initial statement we need to be careful to distinguish between certain
identities and equivalences in terms of actions (the same policies),
and to ask which one is actually relevant.

In a formal and logical setting: if an agent is programmed to follow a
(possibly partial) policy X, it will follow it - or else it was not
programmed to do so.
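As a toy illustration of that trivial sense (all names invented), the
agent's action selection simply is the programmed policy, so there is
nothing else it could deviate to:

    def policy_x(observation):
        # the programmed, possibly partial, policy
        return "assist" if observation == "human_in_need" else None

    class Agent:
        def act(self, observation):
            # act() is policy_x by construction; "following X" is not an extra step
            return policy_x(observation)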

There are certainly difficulties with natural language, but I do not
see how these empirical and practical difficulties can be called
tautologies of languages in general.


----- Original Message -----
From: "Cenny Wenner" <[EMAIL PROTECTED]>
To: <singularity@v2.listbox.com>
Sent: Thursday, July 12, 2007 3:41 PM
Subject: Re: [singularity] ESSAY: Why care about artificial intelligence?


> On 7/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
>> >> 2."An AI programmed only to help humanity will only help humanity."
>> >> Really?
>> >> George Bush along with every other leader is programmed to "help
>> >> humanity."
>> >
>> > George Bush is human. He has plenty of other goals in _addition_ to
>> > "helping humanity" - if he even has that goal at all...
>>
>> Comment: You and others seem to be missing the point, which obviously
>> needs
>> spelling out. There is no way of endowing any agent with conceptual goals
>> that cannot be interpreted in ways opposite to the designer's
>> intentions -
>> that is in the general, abstract nature of language & symbolic systems.
>>
>> For example, the general, abstract goal of "helping humanity" can
>> legitimately in particular, concrete situations be interpreted as wiping
>> out
>> the entire human race (bar, say, two) - for the sake of future
>> generations.
>>
>
> Any agent's actions may be described by a (possibly indeterministic)
> policy or norm. If this scheme adheres to some property and the agent
> was constructed through programming, the agent has been programmed to
> adhere to that property. The problem of your example is the
> interpretation of the ambiguous and vague statement. It is the
> behaviour and not its representation in an arbitrary system that is
> relevant. If an agent is given a natural language goal which needs to
> be interpreted we need to take the interpretation into account - how
> should the two goals be weighted? If we may fulfil one but not the
> other, what would be the preferred action? How should one deal with
> inconsistencies? These are not enigmas of the policy.
>
>> And there is no reasonable way to circumvent this. You couldn't, say,
>> instruct an agent... "help humanity but don't kill any human beings..."
>> because what if some humans (like, say Bush) are threatening to kill
>> vastly
>> greater numbers of other humans...wouldn't you want the agent to
>> intervene?
>> And if you decided that even so, you would instruct the agent not to
>> kill,
>> it could still as good as kill by rendering humans vegetables while still
>> alive.
>>
>> So many people here and everywhere are unaware of the general and
>> deliberately imprecise nature of language - much stressed by Derrida and
>> exemplified in the practice of law.