At 02:06 PM 11/10/2007, Richard Loosemore wrote:
> AI progress has been slow for a specific reason, not because the problem is intrinsically hard.  The reason for the slow progress is a fundamental misperception of the nature of the AI problem: ... the AI community is populated with people who have an extremely strong bias against accepting these arguments, and this strong bias is what is holding back progress.  Basically, 'traditional' AI people have an almost theological aversion to the idea that the task of building an AI might involve having to learn (and deconstruct!) a vast amount of cognitive science, and then use an experimental-science methodology to find the mechanisms that really give rise to AI. ...
>
> If it were not for this particular way of seeing the problems of AI, I would be with the skeptics:  I think that conventional AI will not yield a singularity-class AGI for a long time (if ever), and I believe that the brain-emulation folks are being wildly optimistic about what they can achieve, because they are blind to functional-level issues, and do not have the resolution or in-vivo tools needed to reach their goals.

I have to give a lot of weight to the apparent fact that most AI researchers have not yet been convinced to accept your favored approach.  More persuasive to me are arguments for fast AI that rest on more widely shared premises.

Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
Research Associate, Future of Humanity Institute at Oxford University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323
 


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=63888320-103722
