Dear Ramiro,
thanks for the reply. I need to add something more.
> Marco, This is a difficult question to answer since the word agent
>has taken a very broad meaning and you will get different answers to
>the question you have posted. In my view agents rely heavily on
>distributed processing and communication to resolve a problem. In the
>Artificial Intelligence community some will claim that learning is an
>important component that would differentiate an agent from some other
>process, but I don't personally think that this is an issue.
I agree with you. The point of my research is to understand when the basic
components could be mature/complex enough to give us an interesting
behaviour.
It is true that an agent that learns may exhibit a different behaviour and
qualitative differences from an agent that doesn't learn. But this is not
my concern right now.
> Another attribute that an agent should have is an independent goal
>to achieve. An agent is programmed to achieve a goal and will
>determine independently how it will achieve that goal by sensing its
>environment and communicating with other entities in its environment.
Just one thing. Can't almost everything be reduced to the egocentric
(dynamical) point of view of an agent? The example is as follows:
Imagine a spinal cord or any similar structure of bones, perhaps the
robot snake I saw on TV that looked like an ensemble of vertebrae.
Imagine it had to wrap around an object.
Apart from some internal procedures and rules, there is one extra external
condition to which it is subject and which we have to take into account: the
force of gravity. This exerts its force both on each vertebra and on the
center of the whole system. Can't we model gravity as seen by each vertebra
and therefore take it out of the domain of external variables?
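To make the idea concrete, here is a tiny sketch of what I mean (only my own
toy illustration, nothing from the actual robot snake: the class name, masses
and orientations are all made up). Each vertebra is an agent that resolves
gravity into its own frame, so the force stops being an external variable of
the whole system and becomes something each agent senses for itself.

import math

GRAVITY = 9.81  # world-frame acceleration, pointing straight down

class Vertebra:
    """One agent in the snake: it knows only its own mass and orientation."""
    def __init__(self, mass, orientation):
        self.mass = mass                # kg (made-up value)
        self.orientation = orientation  # angle of this segment w.r.t. the ground, radians

    def local_gravity(self):
        """Gravity as *this* vertebra perceives it, resolved into its own frame."""
        along  = -self.mass * GRAVITY * math.sin(self.orientation)  # component along the segment
        across = -self.mass * GRAVITY * math.cos(self.orientation)  # component across the segment
        return along, across

# Ten vertebrae, each with its own orientation; nothing outside the agents
# ever has to mention the world-frame force.
snake  = [Vertebra(mass=0.1, orientation=0.1 * i) for i in range(10)]
forces = [v.local_gravity() for v in snake]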
I was astounded when I read how computer graphics renders the draping of
cloth over tables or rigid surfaces. The computer tries to minimise the
potential energy of the system. This is a way of rephrasing gravity, but it is
also an example of what I am suggesting above. (Each square millimetre of
the cloth tries, in conjunction with the others, to minimise the overall
potential energy.)
The example above is also one in which a constraint (a force) translates
into a minimum of some parameter (energy). A minimum can be found by
approaching it... an agent can be programmed to tend towards it. I guess...
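Just to show what I have in mind, here is a toy sketch (my own invention, not
how any real cloth renderer works; the stiffness, step size and the
hanging-chain setup are all assumptions): a chain of point masses joined by
springs, where each interior point repeatedly nudges itself a little way
downhill in the total potential energy. The constraint (gravity plus the
springs) becomes a minimum that each little agent tends towards.

import math

G, K, REST, STEP = 9.81, 50.0, 0.1, 0.001   # gravity, spring stiffness, rest length, descent step

# points[i] = (x, y); the two endpoints stay pinned, the rest are free agents
points = [(0.1 * i, 0.0) for i in range(11)]

def energy(pts):
    """Total potential energy: gravitational part plus elastic (spring) part."""
    e = sum(G * y for _, y in pts)                       # unit masses
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        stretch = math.hypot(x2 - x1, y2 - y1) - REST
        e += 0.5 * K * stretch ** 2
    return e

def relax(pts, iterations=2000, h=1e-4):
    """Each interior point takes a small step against the numerical energy gradient."""
    pts = list(pts)
    for _ in range(iterations):
        for i in range(1, len(pts) - 1):
            x, y = pts[i]
            e0 = energy(pts)
            gx = (energy(pts[:i] + [(x + h, y)] + pts[i + 1:]) - e0) / h
            gy = (energy(pts[:i] + [(x, y + h)] + pts[i + 1:]) - e0) / h
            pts[i] = (x - STEP * gx, y - STEP * gy)
    return pts

settled = relax(points)   # the chain sags into the familiar hanging-curve shape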
> Therefore its microscopic behavior may not be predictable, but its macro
>behavior should be. The best example of this is team sports.
This is very interesting.
> So to get back to distributed computing and communications. Agents
>therefore resolve problems where distribution and communications are
>key factors. It is then necessary to focus on issues like a common
>language, a conflict resolution strategy, and a knowledge query
>component. Does a community of red cells communicate? I don't think
>they do but I may be mistaken. Does each one have an independent
>objective? I am not certain with my limited biology background. Are
>they distributed? Not really.
>
> Hope this helps
This puts it in perspective. So it helps a lot,
thanks
Marco