On Friday 19 October 2007 06:34:08 pm, Mike Tintner wrote:

> In fact, there is an important, distinctive point here. AI/AGI machines may 
> be "uncertain" (usually quantifiably so) about how to learn an activity. 
> Humans are, to some extent, fundamentally "confused." We typically don't 
> just watch and listen to one person engaging in a skilled activity (which 
> is what your and the prevailing analysis implies) - but to several people, 
> who have not just different but fundamentally conflicting practices and 
> philosophies - and we're not sure how to resolve those conflicts. 

con·fused (kən-fyūzd') 
 adj.
 1. Being unable to think with clarity or act with understanding and 
intelligence.

(The American Heritage® Dictionary)

By this definition, all existing AI systems are confused :-)

To your point, you're committing what's sometimes called the "superhuman 
human" fallacy. It's like telling the Wright brothers that their plans for a 
flying machine wouldn't work because their machine wouldn't carry 500 
passengers nonstop from NYC to London. 

People learn best when they receive simple, progressive, unambiguous 
instructions or examples. This is why young humans imprint on parent figures, 
have heroes, and so forth -- heuristics that cut through the clutter and 
reduce conflicting examples. An AGI trying to learn from the Internet from 
scratch would be very confused -- but that's not a good way to teach it. 
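
To make that concrete, here's a throwaway toy sketch (mine, purely 
illustrative -- nothing to do with any real AGI system): a learner estimates 
a one-dimensional decision threshold online. Given one consistent teacher it 
converges on the right boundary; given two teachers whose boundaries 
conflict, it settles on a compromise that matches neither -- it stays 
"confused."

import random

def teacher(boundary):
    """A labeling function: calls x positive iff x > boundary."""
    return lambda x: x > boundary

def train(teachers, steps=50000, lr=0.02, seed=0):
    """Online threshold learning. Each step draws a random point in [0, 1),
    asks a randomly chosen teacher for its label, and nudges the estimate
    toward any misclassified point."""
    rng = random.Random(seed)
    estimate = 0.0
    for _ in range(steps):
        x = rng.random()
        label = rng.choice(teachers)(x)
        if label and x <= estimate:       # positive example below threshold: lower it
            estimate -= lr * (estimate - x)
        elif not label and x > estimate:  # negative example above threshold: raise it
            estimate += lr * (x - estimate)
    return estimate

print(f"one teacher:  {train([teacher(0.5)]):.2f}")                # ~0.50
print(f"two teachers: {train([teacher(0.5), teacher(0.8)]):.2f}")  # lands between the
                                                                   # two, fitting neither

The conflict only matters in the region where the teachers disagree (here, 
0.5 to 0.8); there the learner gets contradictory labels for the same 
inputs, and the best it can do is split the difference.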

I'll be happy if I can get my system to learn from me alone. Then I can *teach 
it* to handle contradictory inputs -- at least to the extent that I can do so 
myself.

Josh
