I like that Sims approaches the problem of AI from the perspective that
life is a consequence of the world, that life is the world discovering
itself.
He specifies a learning semantics (genetic algorithms) and a learning
syntax (motivation functions and virtual embodiment in time) for his
creations.
His specifications are functor-like in that they determine a structure on the
world that, when probed, gives information about the world, more or less
finely.
Through this process come functions like crawling, reaching, or defending.
Somehow these functions follow from motivation, learning, and the world.
Is it reasonable to interpret them as dependent functions of the underlying
motivation functions, the motivations acting as a generalized Gröbner basis?
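
To make that concrete for myself, here is a toy sketch in Python. It is not
Sims' actual system; every name and the stick-figure physics are my own
invention. A genetic algorithm searches controller genomes, and the only
signal the search ever sees is a motivation function scored against a
simulated embodiment unfolding in time. Anything like crawling is a side
effect of that scoring.

import random

# Toy sketch of the Sims-style loop: a genetic algorithm (the "semantics")
# searches controller genomes, while a motivation function scored against a
# simulated embodiment in time (the "syntax") supplies fitness. The physics
# and all parameters here are hypothetical illustrations, not Sims' system.

GENOME_LEN = 8          # joint-drive parameters for a toy creature
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.1

def simulate(genome, steps=100):
    """Toy embodiment in time: integrate a crude 'body' and return distance moved."""
    position, velocity = 0.0, 0.0
    for t in range(steps):
        # each gene acts as a periodic joint drive; their interference
        # determines whether anything like crawling emerges
        thrust = sum(g * ((t + i) % 4 - 1.5) for i, g in enumerate(genome))
        velocity = 0.9 * velocity + 0.01 * thrust   # friction plus actuation
        position += velocity
    return position

def motivation(genome):
    """Motivation function: reward forward progress (a stand-in for 'crawl')."""
    return simulate(genome)

def evolve():
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        scored = sorted(population, key=motivation, reverse=True)
        parents = scored[:POP_SIZE // 2]            # truncation selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
                     for g in child]
            children.append(child)
        population = parents + children
    best = max(population, key=motivation)
    print(f"best motivation score after {GENERATIONS} generations: {motivation(best):.2f}")

if __name__ == "__main__":
    evolve()

The point of the toy is that "crawl" never appears anywhere in it; only the
motivation score does, and the search finds whatever gene pattern exploits
the toy physics to raise that score.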

To Glen's point, or perhaps the point of the Bengio paper, if we watch long
enough and the virtual world is sufficiently analogous to our own, we can begin
to experience a transparency of understanding. Still, perhaps the understanding
is not of the agent but of the world.