I suspect that we won't ever get a real thinking machine by
deliberately trying to model thought. I suspect the approach that will
ultimately work is one of two:

One: a "sufficiently complex" evolutionary simulation system, or
rather a set of competing systems, will create a conscious-seeming
intelligence all by itself (though that intelligence will be
non-human, not modeled after human thought, and we might not
understand each other well -- how do you instill an AI with human
concepts of morality?).

Or two: someone will create a super-complex physics simulation that
can take hyper-detailed 3D brain CAT/PET/etc. scan data as input and
then simply simulate the goings-on at the atomic level, the "mind"
being an emergent property of the "matter." Of course, the mind will
probably go insane instantly, even if provided with a sufficient
quantity and variety of virtual senses and a body.
And we *still* won't know how the mind happens.
;)
~~James
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College