Richard,

Am I reading you right in interpreting you as saying that
there are several distinguished cogsci professionals who have read
Yudkowsky's writings and have serious disagreements with his major
points?



Well, you could take pretty much anything ever written about the theory of
cognition, theoretical AI, or philosophy of mind and find "several
distinguished cogsci professionals who ... have serious disagreements with
the major points."  So it is hardly a surprise that this would be the case
for Eliezer Yudkowsky's writings as well.

Furthermore, Yudkowsky in year N often has serious disagreements with
Yudkowsky in year (N-k), where k is a small integer ;-)





(Personally, I can't help but find Yudkowsky's writings consistently
brilliant, so I'd be quite curious to hear reasons why serious
professionals would consider him a nutcase.)



Brilliant is one thing; correct is another ;-)

As for why some professionals would view Yudkowsky's ideas with skepticism,
I suppose there are two reasons:

1) disagreement with the ideas themselves [but, as I noted above,
professionals in cog sci and AI have wild disagreements with each other on
foundational issues all the time]

2) Yudkowsky's writings are often not in the dry, reference-heavy
style that academics favor


Personally, I think Eliezer's writing style is wonderful, and that he has
introduced a number of interesting ideas into the dialogue on AGI and the
Singularity.  I have disagreements with him on many essential points, but
this is a contentious area where there is no consensus among experts.  Of
course, I think I am right in all our disagreements, though ;-)

I do agree with Samantha and many others that "formulating a design for a
pragmatically achievable, provably Friendly AI" (Eliezer's central goal
these days) is almost surely infeasible.  However, I also think that
achieving this goal would be SO NICE that it's well worth having
some smart people spend some time working on it.  I myself have spent
some bits and pieces of time working on it, but have repeatedly given up in
frustration and turned back to merely-very-hard goals like "creating an AGI
according to a design that seems very likely to be Friendly."

-- Ben G
