Robin Hanson wrote:
> At 02:06 PM 11/10/2007, Richard Loosemore wrote:
>> AI progress has been slow for a specific reason, not because the
>> problem is intrinsically hard. The reason for the slow progress is a
>> fundamental misperception of the nature of the AI problem: ... the AI
>> community is populated with people who have an extremely strong bias
>> against accepting these arguments, and this strong bias is what is
>> holding back progress. Basically, 'traditional' AI people have an
>> almost theological aversion to the idea that the task of building an
>> AI might involve having to learn (and deconstruct!) a vast amount of
>> cognitive science, and then use an experimental-science methodology to
>> find the mechanisms that really give rise to AI. ... If it were not
>> for this particular way of seeing the problems of AI, I would be with
>> the skeptics: I think that conventional AI will not yield a
>> singularity-class AGI for a long time (if ever), and I believe that
>> the brain-emulation folks are being wildly optimistic about what they
>> can achieve, because they are blind to functional-level issues, and do
>> not have the resolution or in-vivo tools needed to reach their goals.
> I have to give a lot of weight to the apparent fact that most AI
> researchers have not yet been convinced to accept your favored
> approach. More persuasive to me are arguments for fast AI based on
> more widely shared premises.

You would then be making an assessment based on the volume of the
crowd's applause, rather than on the actual content of the arguments.
Historically, that has not been a terribly productive approach.
Given that I explained exactly why "most AI researchers" would be
expected to reject the approach, it is somewhat amusing to see you
dismiss the approach ... and then give as your reason the fact that most
AI researchers reject it.
Richard Loosemore