On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:


We can't "know it" in the sense of a mathematical
proof, but it is a trivial observation that out of the
bazillions of possible ways to configure matter, only
a ridiculously tiny fraction are Friendly, and so it
is highly unlikely that a selected AI will be Friendly
without a very, very strong Friendly optimization over
the set of AIs.

Out of the bazillions of possible ways to configure matter, only a ridiculously tiny fraction are more intelligent than a cockroach. Yet it did not take any grand up-front design effort to arrive at a world overrun with beings as intelligent as ourselves. So how does your argument show that Friendly AI (or at least relatively Friendly AI) can only be arrived at by intense up-front Friendly design?
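
A toy illustration of that point (the 40-bit target, mutation rate, and trial count below are arbitrary choices of mine, not a model of anything real): a blind keep-what-is-no-worse selection loop reaches a configuration that independent random sampling essentially never hits, with no up-front design of the end product.

import random

TARGET = [1] * 40   # a "rare" configuration: odds 1 in 2**40 by blind luck

def random_search(tries):
    """Independent random draws: essentially never hits the target."""
    return any([random.randint(0, 1) for _ in range(40)] == TARGET
               for _ in range(tries))

def cumulative_selection(mutation_rate=0.02):
    """Accept any mutation that is no worse; no foresight required."""
    genome, steps = [random.randint(0, 1) for _ in range(40)], 0
    while genome != TARGET:
        child = [1 - b if random.random() < mutation_rate else b
                 for b in genome]
        if sum(child) >= sum(genome):   # fitness = number of matching bits
            genome = child
        steps += 1
    return steps

print(random_search(100_000))     # almost certainly False
print(cumulative_selection())     # typically a few hundred to a few thousand steps

The numbers are toys, but the asymmetry is the point: "tiny fraction of configuration space" only rules out blind sampling, not iterative selection.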


In addition, for the vast majority of
goals, it is useful to get additional
matter/energy/computing power, and so unless there's
something in the goal system that forbids it, turning
us into raw materials/fusion fuel/computronium is the
default action.


For a rather stupid, unlimited optimization process this might be the case, but that is a pretty weak notion of an AGI.
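
A rough sketch of that distinction (the doubling "resources" score and the threshold are made up purely for illustration): an optimizer whose score grows without bound in resources acquires more by default, while a goal system with an explicit "enough" condition simply stops.

def open_ended_maximizer(steps=10):
    """Score rises monotonically with resources, so 'acquire more' is
    always the preferred action; this is the unlimited optimizer case."""
    resources = 1.0
    for _ in range(steps):
        resources *= 2          # grabbing more matter/energy always helps
    return resources

def satisficing_agent(goal=100.0):
    """A goal system with a stopping condition quits acquiring once the
    goal is met; nothing compels converting everything into fuel."""
    resources = 1.0
    while resources < goal:
        resources *= 2
    return resources

print(open_ended_maximizer())   # 1024.0, and growing with more steps
print(satisficing_agent())      # 128.0; stops once the threshold is met

Whether a real AGI's goal system has such a bound is of course exactly what is in dispute; the sketch only shows that "convert everything to raw materials" is the default of one particular class of goal systems, not of optimization as such.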


I also disagree with his previously stated
assessment of the viability of

A) coming to a thorough, rigorous formal
understanding of AI
Friendliness prior to actually building some AGI's
and experimenting
with them

or

B) creating an AGI that will ascend to superhuman
intelligence via
ongoing self-modification, but in such a way that we
humans can be
highly confident of its continued Friendliness
through its successive
self-modifications

He seems to think both of these are viable (though
he hasn't given a
probability estimate, that I've seen).

My intuition is that A is extremely unlikely to
happen.

As for B, I'd have to give it fairly low odds of
success, though not
as low as A.

So, er, do you have an alternative proposal? Even if
the probability of A or B is low, if there are no
alternatives other than doom by old
age/nanowar/asteroid strike/virus/whatever, it is
still worthwhile to pursue them.

If A and B are very unlikely, then major effort toward them is unlikely to bear fruit in time to halt the existential risks outside AGI that we are already prone to, including the risk of having too little effective intelligence without AGI. MNT by itself would end old age, physical scarcity, and most diseases relatively quickly. It would also give us the means, if we have sufficient intelligence, to combat many other existential risks. But ultimately we are limited by available intelligence. In a faster, more complex world wielding greater and greater powers, having our intelligence capped by the absence of AGI is a very serious existential threat, serious enough that I believe it is very suboptimal for high-powered, brilliant researchers to be chasing an impossible or very highly unlikely goal.

- samantha

