Peter Voss mentioned "trying to solve the wrong problem in the first place"
as a source of failure in an AGI project.  This was actually the first thing
that I thought of, and it brought to my mind a problem that I think of when
considering general intelligence theories--object permanence.  Now, I think
it's established that babies have to learn the concept of object permanence.
They are probably genetically inclined to do so, but they still have to
acquire the concept.  You don't have to have an anthropomorphic system,
certainly, but to me this says something profound about what intelligence
itself could possibly be, if a system can be "intelligent" before having such
a simple concept and then develop it somehow.  One of the implications for me
is that intelligence almost certainly requires some kind of causal,
sensory-motor interaction with the world.  "Object permanence" itself is an
abstraction of the various practical behaviors involved with it, so I
would also not expect it to be just a piece of "knowledge" that was added to a
system.  What it actually is is a hard question, one that speaks to the nature
of the generalization of knowledge.  And while this is only one concept of
many, it and others like it are the kinds of problems that I see getting
missed in the sorts of general intelligence theories I encounter.
