Hi,

> It seems to me that discussing AI or human thought in terms of goals and
> subgoals is a very "narrow-AI" approach and destined to fail in general
> application.

I think it captures a certain portion of what occurs in the human
mind.  Not a large portion, perhaps, but an important portion.

> Why?  Because to conceive of a goal requires a perspective
> outside of and encompassing the goal system.  We can speak in a valid
> way about the goals of a system, or the goals of a person, but it is
> always from a perspective outside of that system.

But the essence of human reflective, deliberative awareness is
precisely our capability to view ourselves from a "perspective outside
ourselves" ... and then to use this view to model ourselves and
ultimately change ourselves, iteratively...

> It seems to me that a better functional description is based on
> "values", more specifically the eigenvectors and eigenvalues of a highly
> multidimensional model *inside the agent* which drive its behavior in a
> very simple way:  It acts to reduce the difference between the internal
> model and perceived reality.

I wouldn't frame it in terms of eigenvectors and eigenvalues, because
I don't know how you are defining addition or scalar multiplication on
this space of "mental models."
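To make the worry concrete: the usual definition of an eigenvector already
presupposes the two operations in question. This is just the standard
linear-algebra definition, nothing specific to mental models:

    % Eigenvectors exist only relative to a linear operator on a vector space:
    \[ A v = \lambda v, \qquad A : V \to V, \quad v \in V \setminus \{0\} \]
    % and linearity is itself stated using vector addition and scalar
    % multiplication:
    \[ A(u + w) = A u + A w, \qquad A(\alpha v) = \alpha \, (A v) \]

So until "+" and scalar multiplication are defined on the space of mental
models, "eigenvectors of a model" is not yet a well-defined notion.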

But I agree that the operation of "acting to reduce the difference
between internal models and perceived reality" is an important part of
cognition.

It is different from explicit goal-seeking, which IMO is also important.
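For concreteness, here is a minimal sketch of that error-reduction dynamic
in illustrative Python. The class name, the gradient-style update rule, and
all constants are my own assumptions, not anything proposed above:

    import numpy as np

    # Minimal sketch: an agent that acts only to reduce the mismatch
    # between its internal model and perceived reality.

    rng = np.random.default_rng(0)

    class ErrorReducingAgent:
        def __init__(self, dim):
            # The internal model: the state the agent "expects" to perceive.
            self.model = rng.normal(size=dim)

        def act(self, perceived, step=0.1):
            # Mismatch between expectation and perception ...
            error = perceived - self.model
            # ... and an action that nudges the world toward the model.
            return -step * error

    agent = ErrorReducingAgent(dim=8)
    world = rng.normal(size=8)
    for _ in range(200):
        world = world + agent.act(world)       # behavior = error reduction
    print(np.linalg.norm(world - agent.model))  # mismatch is now ~0

Note that no explicit goal object appears anywhere in the loop; the
goal-like convergence of the world toward the model falls out of the
error-reduction rule, which is exactly the distinction at issue here.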

> Goals thus emerge as third-party
> descriptions of behavior, or even as post hoc internal explanations or
> rationalizations of its own behavior, but don't merit the status of
> fundamental drivers of the behavior.

I distinguished "explicit goals" from "implicit goals."  I believe
that in your comments you are using the term "goal" to mean what I
term "explicit goal."

I think that some human behavior is driven by explicit goals, and some is not.

I agree that the identification of the *implicit goals* of a system
S (the functions the system S acts like it is seeking to maximize) is
often best done by another system outside the system S.  Nevertheless,
I think that implicit goals are worth talking about, and can
meaningfully be placed in a hierarchy.
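One way to make "the functions the system S acts like it is seeking to
maximize" operational, from outside S, is to fit candidate utility
functions to observed behavior, roughly in the spirit of inverse
reinforcement learning. A toy sketch, where the observed system, the
candidate family, and every name in it are hypothetical:

    import numpy as np

    # Identify a system's implicit goal from outside it: score candidate
    # utility functions by how often the observed choice maximized them.

    rng = np.random.default_rng(1)
    ACTIONS = np.linspace(-1.0, 1.0, 21)   # discretized action set

    def system_S(state):
        # The observed system. Its implicit goal (drive state toward 0.3)
        # is nowhere represented inside it as an explicit goal object.
        return min(ACTIONS, key=lambda a: abs(state + a - 0.3))

    # Candidate hypotheses: "S maximizes u_c(s, a) = -(s + a - c)^2".
    def fit(c, observations):
        best = lambda s: max(ACTIONS, key=lambda a: -(s + a - c) ** 2)
        return sum(best(s) == a for s, a in observations) / len(observations)

    observations = [(s, system_S(s)) for s in rng.uniform(-2, 2, size=200)]
    c_star = max(np.linspace(-1, 1, 41), key=lambda c: fit(c, observations))
    print(f"inferred implicit goal: drive state toward {c_star:.2f}")

The point survives the toy setting: the candidate goals live in the
outside analyst's hypothesis space, not inside S, yet they can still be
compared against one another and, in a richer family, arranged in a
hierarchy.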

-- Ben
