When the purse-strings
open and the money flows, it will flow like tax dollars, bequests, and
donations do -- toward politically tenable projects. Yudkowsky's
Friendliness theory, whether or not you agree with its technical
feasibility, is very effectively positioning the Singularity Institute's
future AGI projects to be Politically Friendly.

Hmmm.... I agree that the SIAI's line has some very powerful marketing
appeal on its side....  And so far, as far as I can tell, SIAI has
been a big success at marketing itself, though only a modest success
at advancing AGI theory or Friendliness theory (and it hasn't really
tried much to advance AGI practice).  [Of course, successful
self-marketing, while not easy at all, is a lot easier than AGI theory
or Friendliness theory!]

However, I am not sure that the SIAI's redoubtable marketing appeal is
the kind that will appeal to the AI funders in the government.  These
guys will likely throw any future major AI funding to the same big
university and corporate labs that have gotten most of the AI funding
in the past, IMO.

As an exercise, and remembering that you're really, really
smart, and the rest of us aren't, how do you debate against the
following statement?

"We should ensure, in fact guarantee, that AGI doesn't wipe out
humanity."

Well, if you want to talk politics, here is another story someone may
tell the government:

"There are no guarantees in real life, everyone knows that.  There
are all sorts of dangers out there.  There are natural dangers: Earth
could be pulverized by a comet, or the sun could flare up and consume
the Earth in flames.  A plague could sweep the Earth tomorrow and wipe
out humanity.  All these things are possible, none of them are very
likely.  But, most frightening of all, there are human dangers: crazy
people and dogmatic people out there who might like to kill a lot of
people, and might kill everyone instead of just a lot of people in a
fit of lunacy or by a technical mistake.  Having an AGI vastly smarter
and more capable than these hostile people are, on our side, is worth
a lot.   We're a lot safer with such an AGI on our side than we are
with such an AGI on their side.  Remember, if they get one first, they
may prevent us from  building one of our own.  We need to get a
powerful AGI first before they do.  We'll be vastly safer with one
than without one.  No, there's no guarantee that such an AGI couldn't
possibly be dangerous with us -- any more than there are absolute,
provable guarantees with any other powerful technology.  But it's just
plain commonsense that we're better off with the superhuman AGI on our
side than theirs."

;=p

I believe this kind of story will ultimately sway the Powers that Be
more strongly than the "let's delay building AI till we can prove it's
Friendly" story (which, I agree, also has some powerful marketing
appeal).

But I also believe AGI may get created well before the Powers that Be
start to take the idea seriously ;-)

-- Ben G
