--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> > (Echoing Joshua Fox's request:) Ben, could you also tell us where
> > you disagree with Eliezer?
> 
> Eliezer and I disagree on very many points, and also agree on very
> many points, but I'll mention a few key points here.
> 
> (I also note that Eliezer's opinions tend to be a moving target, so I
> can't say for sure that I disagree with his current opinions, only
> with some of his prior statements!)
> 
> I disagree with his previously stated opinion that "If an AGI is
> created by humans without a solid, fairly complete formal
> understanding of why it is almost sure to be Friendly ... then it is
> extremely likely that the AGI will be Unfriendly."
> 
> I really don't see how we can know that...

We can't "know it" in the sense of a mathematical
proof, but it is a trivial observation that out of the
bazillions of possible ways to configure matter, only
a ridiculously tiny fraction are Friendly, and so it
is highly unlikely that a selected AI will be Friendly
without a very, very strong Friendly optimization over
the set of AIs. In addition, for the vast majority of
goals, it is useful to get additional
matter/energy/computing power, and so unless there's
something in the goal system that forbids it, turning
us into raw materials/fusion fuel/computronium is the
default action.
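
To put a rough number on the "tiny fraction" point, here's a toy
back-of-envelope sketch in Python. It assumes, purely for
illustration (the parameter k and the independence assumption are
mine, not a real model of mind-space), that Friendliness requires
getting k independent yes/no design choices right, so a randomly
selected configuration is Friendly with probability 2^-k:

# Toy model: fraction of random configurations that are "Friendly"
# if Friendliness requires k independent binary choices to be right.
for k in (10, 50, 100, 500):
    print("k = %3d  ->  Friendly fraction ~ %.3g" % (k, 2.0 ** -k))

Even at k = 100 the fraction is about 8e-31; landing in a target that
small takes strong optimization pressure, not luck.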

> I also disagree with his previously stated assessment of the
> viability of
> 
> A) coming to a thorough, rigorous formal understanding of AI
> Friendliness prior to actually building some AGI's and experimenting
> with them
> 
> or
> 
> B) creating an AGI that will ascend to superhuman intelligence via
> ongoing self-modification, but in such a way that we humans can be
> highly confident of its continued Friendliness through its successive
> self-modifications
> 
> He seems to think both of these are viable (though he hasn't given a
> probability estimate, that I've seen).
> 
> My intuition is that A is extremely unlikely to happen.
> 
> As for B, I'd have to give it fairly low odds of success, though not
> as low as A.

So, er, do you have an alternative proposal? Even if
the probability of A or B is low, if there are no
alternatives other than doom by old
age/nanowar/asteroid strike/virus/whatever, it is
still worthwhile to pursue them. Note that I don't
know how we could go about calculating what the
probability is; it's not like we've done this before.

> I also disagree with his previously stated opinion that
> -- Anyone smart enough to actually create a human-level AGI is likely
> to be smart enough to avoid the risk of creating an Unfriendly AGI

I disagree with this, and I believe Eliezer also
disagrees with it nowadays.

> And, I disagree with his previously stated assessments that
> -- Any AI system with significant learning power should be considered
> a significant risk to lead to an unanticipated hard takeoff

It is very easy to build an "AI" that "learns" by trawling random
facts off the Internet, but such an AI isn't a hard takeoff risk. I
think a better criterion than raw learning power would be
"programming ability" or "general intelligence".

> For instance, we once argued about whether Genetic Programming
> systems should be considered serious risks for hard takeoff.  He said
> yes, I said they're just too stupid.

Even if they're stupid nowadays, if the genetic programming
representation is Turing-complete, it is possible (although not
necessarily likely) for such systems to become arbitrarily smart with
future research.
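
Just to put something concrete behind the phrase, here's a toy
mutation-only evolutionary program search in the spirit of genetic
programming (my own throwaway sketch, not anyone's real system; the
target function, sample points, and mutation scheme are all made up
for illustration). It evolves little arithmetic expressions in x
toward x**2 + x + 1, and is of course nowhere near a takeoff risk:

import random

# Toy mutation-only evolutionary program search (a cartoon of genetic
# programming, not a real GP system): evolve arithmetic expressions in
# x toward the target x**2 + x + 1 on a few sample points.
def target(x):
    return x * x + x + 1

SAMPLES = range(-5, 6)
OPS = ["+", "-", "*"]

def random_expr(depth=3):
    # Random arithmetic expression over x and small integer constants.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(0, 3))])
    return "(%s %s %s)" % (random_expr(depth - 1),
                           random.choice(OPS),
                           random_expr(depth - 1))

def error(expr):
    # Total absolute error against the target over the sample points.
    try:
        return sum(abs(eval(expr, {"x": x}) - target(x)) for x in SAMPLES)
    except Exception:
        return float("inf")

def mutate(expr):
    # Crude mutation: either bolt on a random addend or start over.
    if random.random() < 0.5:
        return "(%s + %s)" % (expr, random_expr(1))
    return random_expr()

population = [random_expr() for _ in range(50)]
for generation in range(100):
    population.sort(key=error)
    if error(population[0]) == 0:
        break
    # Keep the 10 fittest, refill the rest with mutants of survivors.
    population = population[:10] + \
        [mutate(random.choice(population[:10])) for _ in range(40)]

print("best:", population[0], "error:", error(population[0]))

Whether anything descended from loops like this could ever bootstrap
to real programming ability is exactly the intuition the two of you
disagree about.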

> But of course, I can't mathematically prove that they're too stupid.
> But nor can I mathematically prove that my car won't spontaneously
> turn into a goose this afternoon.

See my post on this at
http://www.acceleratingfuture.com/tom/?p=11.

> Anyway, you get the idea.
> 
> I have enjoyed Eliezer's writings, and think he has done an
> outstanding job of exploring some very subtle and important issues.
> But on several rather important matters of intuition and estimation,
> our best-guess opinions differ significantly -- and in ways that have
> led us down radically different R&D paths in spite of having fairly
> similar large-scale goals.

Since becoming SIAI's Director of Research, have you
pursued any joint projects with Eliezer?

> -- Ben G

 - Tom
