On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> On 3/9/07, Jef Allbright <[EMAIL PROTECTED]> wrote:

> Thanks for the clarification. You can surely call it a "high-level
> functional description", but what I mean is that it is not an ordinary
> high-level functional description, but a concrete expectation of a
> certain future experience --- the system wants to change the
> environment to meet a certain pre-built mental pattern.

Yes, entirely valid use of "goals" in the context of relating to
"expectations."


> At the very beginning of the system's life cycle, the initial
> goals are innate.

This answers my original objection.


> However, my "initial goals" are different from other people's
> "supergoals", in that they do not dominate the system's behaviors, and
> the derivation process doesn't guarantee that the realization of a
> child goal will always lead to the realization of the parent goal ---
> the system's beliefs can be wrong. Also, there is resource competition
> among goals.


Here it's interesting that if a human claimed such a description of
their internal processing, I would object that it's overly metaphorical,
because a human could not maintain a context of "concrete expectation"
with regard to such a complex system of goals. But I agree that an AI
could introspectively validate such a claim, at least to a very
significant extent.
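
To make the quoted goal mechanics concrete, here is a minimal Python
sketch (the names Goal, derive, and allocate are illustrative only, not
Pei Wang's actual NARS code): initial goals are innate, derived goals
are tied to their parents only by fallible beliefs, and all goals
compete for a fixed resource budget rather than any single goal
dominating outright.

# Hypothetical sketch of the goal mechanics described above; not an
# actual NARS implementation.
from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    priority: float                  # relative urgency; drives resource competition
    innate: bool = False             # True only for the system's initial goals
    parent: "Goal | None" = None
    belief_strength: float = 1.0     # confidence that achieving this goal serves the parent
    children: list["Goal"] = field(default_factory=list)

    def derive(self, name: str, priority: float, belief_strength: float) -> "Goal":
        """Derive a child goal; the link is only as reliable as the belief behind it."""
        child = Goal(name, priority, parent=self, belief_strength=belief_strength)
        self.children.append(child)
        return child


def allocate(goals: list["Goal"], budget: float) -> dict:
    """Split a fixed resource budget among goals in proportion to priority,
    so no goal (innate or derived) monopolizes the system's resources."""
    total = sum(g.priority for g in goals) or 1.0
    return {g.name: budget * g.priority / total for g in goals}


if __name__ == "__main__":
    survive = Goal("survive", priority=0.9, innate=True)
    eat = survive.derive("eat", priority=0.6, belief_strength=0.8)
    forage = eat.derive("forage", priority=0.5, belief_strength=0.4)  # weak belief: may not actually serve "eat"
    print(allocate([survive, eat, forage], budget=1.0))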


We seem to have skipped over my point about intelligence being about
encoding the regularities of an agent's effective interaction with its
environment, but perhaps that is now moot.

Thanks,

- Jef
