(Echoing Joshua Fox's request:) Ben, could you also tell us where you
disagree with Eliezer?

Eliezer and I disagree on very many points, and also agree on very
many, but I'll mention a few key disagreements here.

(I also note that Eliezer's opinions tend to be a moving target, so I
can't say for sure that I disagree with his current opinions, only
with some of his prior statements!)

I disagree with his previously stated opinion that "If an AGI is
created by humans without a solid, fairly complete formal
understanding of why it is almost sure to be Friendly ... then it is
extremely likely that the AGI will be Unfriendly."

I really don't see how we can know that...

I also disagree with his previously stated assessment of the viability of

A) coming to a thorough, rigorous formal understanding of AI
Friendliness prior to actually building some AGIs and experimenting
with them

or

B) creating an AGI that will ascend to superhuman intelligence via
ongoing self-modification, but in such a way that we humans can be
highly confident of its continued Friendliness through its successive
self-modifications

He seems to think both of these are viable (though he hasn't given a
probability estimate that I've seen).

My intuition is that A is extremely unlikely to happen.

As for B, I'd have to give it fairly low odds of success, though not
as low as A.

I also disagree with his previously stated opinion that
-- Anyone smart enough to actually create a human-level AGI is likely
to be smart enough to avoid the risk of creating an Unfriendly AGI

And, I disagree with his previously stated assessment that
-- Any AI system with significant learning power should be considered
a significant risk of leading to an unanticipated hard takeoff

For instance, we once argued about whether Genetic Programming systems
should be considered serious risks for hard takeoff.  He said yes; I
said they're just too stupid.  Of course, I can't mathematically
prove that they're too stupid.  But nor can I mathematically prove
that my car won't spontaneously turn into a goose this afternoon.

Anyway, you get the idea.

I have enjoyed Eliezer's writings, and think he has done an
outstanding job of exploring some very subtle and important issues.
But on several rather important matters of intuition and estimation,
our best-guess opinions differ significantly -- and in ways that have
led us down radically different R&D paths in spite of having fairly
similar large-scale goals.

-- Ben G
