Things like finding recharging sockets are really more complex goals
built on top of more primitive systems.  For example, if a robot heading
for a recharging socket loses a wheel, its goals should change from
feeding to calling for help.  If it cannot recognise a deviation from
the "normal" state, it will fail to handle the situation intelligently.
Of course these things can be "hard coded", but hard coding isn't
usually a good strategy, other than as initial scaffolding.  Systems
which are not adaptable are usually narrow and brittle.
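
To make that concrete, here is a rough sketch in Python of the kind of
arbitration I mean (the goal names, sensor readings and threshold are
all made up for the example, not a real robot API):

    # Illustrative sketch: switch from a routine goal (finding a charger)
    # to a recovery goal (calling for help) when observations deviate too
    # far from what the robot's model of "normal" predicts.

    def prediction_error(expected, observed):
        """Total absolute difference between predicted and observed readings."""
        return sum(abs(expected[k] - observed.get(k, 0.0)) for k in expected)

    def select_goal(expected, observed, threshold=0.3):
        # In a real system this mapping would be learned rather than hard
        # coded; hard coding it here is just scaffolding for the example.
        if prediction_error(expected, observed) > threshold:
            return "call_for_help"
        return "find_recharging_socket"

    # A lost wheel shows up as a large deviation in the wheel odometry.
    expected = {"wheel_speed": 0.5, "battery_level": 0.3}
    observed = {"wheel_speed": 0.0, "battery_level": 0.3}
    print(select_goal(expected, observed))   # -> call_for_help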



On 22/11/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:

I don't know that I'd consider that an example of an uncomplicated
goal.  That seems to me much more complicated than simple responses to
sensory inputs.  Valuable, yes, and even vital for any significant
intelligence, but definitely not at the minimal level of complexity.

An example of a minimal goal might be "to cause an extended period of
inter-entity communication", or "to find a recharging socket".  Note
that the second one would probably need to have a hard-coded solution
available before the entity was able to start any independent
explorations.  This doesn't mean that as new answers were constructed
the original might not decrease in significance and eventually be
garbage collected.  It means that it would need to be there as a
pre-written answer on the tabula rasa.  (I.e., the tablet can't really
be blank.  You need to start somewhere, even if you leave and never
return.)  For the first example, I was thinking of "peek-a-boo".
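
A minimal sketch (in Python) of how such a pre-written answer might sit
alongside the answers constructed later; the class, names and numbers
are illustrative assumptions only:

    class GoalAnswers:
        """Candidate answers for one goal, scored by significance."""

        def __init__(self, innate_answer):
            # The tablet isn't blank: it starts with one pre-written answer.
            self.entries = {innate_answer: 1.0}

        def learn(self, answer, significance):
            self.entries[answer] = significance

        def reinforce(self, answer, delta):
            self.entries[answer] += delta

        def collect_garbage(self, floor=0.1):
            # Answers whose significance has decayed away are dropped.
            self.entries = {a: s for a, s in self.entries.items() if s > floor}

        def best(self):
            return max(self.entries, key=self.entries.get)

    recharge = GoalAnswers("follow_homing_beacon")       # hard-coded seed
    recharge.learn("plan_route_from_learned_map", 1.5)   # constructed later
    recharge.reinforce("follow_homing_beacon", -0.95)    # rarely useful now
    recharge.collect_garbage()                           # seed answer is gone
    print(recharge.best())                # -> plan_route_from_learned_map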

Bob Mottram wrote:
>
> Goals don't necessarily need to be complex or even explicitly
> defined.  One "goal" might just be to minimise the difference between
> experiences (whether real or simulated) and expectations.  In this way
> the system learns what a normal state of being is, and detects
> deviations.
>
>
>
> On 21/11/06, *Charles D Hixson* <[EMAIL PROTECTED]> wrote:
>
>     Bob Mottram wrote:
>     >
>     >
>     > On 17/11/06, *Charles D Hixson* <[EMAIL PROTECTED]> wrote:
>     >
>     >     A system understands a situation that it encounters if it
>     >     predictably acts in such a way as to maximize the
>     >     probability of achieving its goals in that situation.
>     >
>     >
>     >
>     >
>     > I'd say a system "understands" a situation when its internal
>     > modeling of that situation closely approximates its main
>     > salient features, such that the difference between expectation
>     > and reality is minimised.  What counts as salient depends upon
>     > goals.  So for example I could say that I "understand" how to
>     > drive, even if I don't have any detailed knowledge of the
>     > workings of a car.
>     >
>     > When young animals play they're generating and tuning their
>     > models, trying to bring them in line with observations and goals.
>     That sounds reasonable, but how are you determining the match of
>     the internal modeling to the "main salient features"?  I propose
>     that you do this based on its actions, and thus my definition.
>     I'll admit, however, that this still leaves the problem of how to
>     observe what its goals are, but I hypothesize that it will be much
>     simpler to examine the goals in the code than to examine the
>     internal model.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

