Glen E. P. Ropella wrote:
> But, programmers haven't yet
> found a way to handle all ambiguity a computer program may or may not
> come across in the far-flung future.  That's in contrast to a living
> system, which we _presume_ can handle any ambiguity presented to it (or,
> in a softer sense, many many more ambiguities than a computer program
> can handle).
>   
Perception, locomotion, and signaling are capabilities that animals have 
evolved for millions of years.   It's not fair to compare a learning 
algorithm to the learning capabilities of a living system without 
factoring in the fact that robots aren't disposable for the sake of 
realizing evolutionary selection and search.  And even if they were, 
do you want to drive over robots on the highway to make it so?   Anything 
that requires significant short-term memory and integration of broad but 
scarce evidence is probably something a computer will be better at than a 
human.  It may be that a `programmer' implements a self-organized neural 
net, or a kernel eigensystem solver, but that only concerns the large 
classes of signals that can be extracted.   It's not like some giant 
if/then statement for all possible cases that a programmer would keep 
tweaking.
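
To make the kernel eigensystem point concrete, here's a minimal kernel 
PCA sketch (illustrative only, assuming NumPy; not anyone's actual 
system): the programmer specifies a kernel and solves an eigenproblem, 
and which classes of signal come out is determined by the data, not by 
enumerated cases.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances, then the RBF kernel matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    # Kernel matrix, centered in feature space.
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition: the top eigenvectors are the learned
    # signal directions; nothing about them is hand-coded.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Project onto the leading components (scaled eigenvectors).
    return vecs * np.sqrt(np.maximum(vals, 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two noisy rings -- no rule about "rings" appears anywhere above.
    theta = rng.uniform(0, 2 * np.pi, 200)
    r = np.concatenate([np.full(100, 1.0), np.full(100, 3.0)])
    X = np.c_[r * np.cos(theta), r * np.sin(theta)]
    X += 0.1 * rng.standard_normal((200, 2))
    Z = kernel_pca(X, n_components=2, gamma=0.5)
    print(Z.shape)  # (200, 2): coordinates in the learned signal space

Swap in a different kernel and a different family of extractable 
signals falls out; at no point is there a case statement over inputs 
for anyone to keep tweaking.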

My assertion remains that the things computers do are primarily limited 
by the desire of humans to 1) understand what was learned, and then 2) 
use it.   If those two desires are removed, then we are talking about 
a very different scenario.  There's little incentive to develop control 
systems for robots to keep them stumbling around as long as possible, 
with no limits on the actions they can take.

Marcus
