Hi all,

There is a possibility that at some point in the future, government
agencies, wealthy foundations, and non-profits will significantly
increase their expenditures on AGI development. When the purse-strings
open and the money flows, it will flow the way tax dollars, bequests, and
donations always do -- toward politically tenable projects. Yudkowsky's
Friendliness theory, whether you agree with its technical feasibility
or not, is very effectively positioning the Singularity Institute's
future AGI projects to be Politically Friendly.

In the summer of 2003, the US media reported on an attempt by DARPA to
put a futures market in place that would ostensibly be able to forecast
certain undesirable events such as terrorist attacks, assassinations, and
the like. The idea was to find a way to elevate our awareness before a
threat materialized, and so DARPA was studying these prediction methods.
Now, while the idea itself has theoretical and practical merit, some
members of Congress sensed an easy victory against an injured opponent
and piled on. There was no debate; there was no thoughtful consideration
of the project's chances for success; there was no collective desire to
learn more about the idea. The project was incinerated by fiery
political soundbites, and no opposing voice was willing to be
incinerated along with it. The incident caused Adm. John Poindexter to
resign in disgrace, and we will never hear of the US government playing
with prediction markets again in our lifetimes.

There is a lesson here for everyone working on an AGI project that "just
needs funding to get there." You must be politically tenable. Funding
your project must be justifiable in a soundbite. And it should take a
Ph.D. droning on and on for pages in technical jargon to present an
argument against you, your theory, your design, and your most likely
outcome. As an exercise, and remembering that you're really, really
smart, and the rest of us aren't, how do you debate against the
following statement?

"We should ensure, in fact guarantee, that AGI doesn't wipe out
humanity."

Do you not see that lining up *against* this statement for whatever
technical mumbo-jumbo reason is suicidal? But forget funding for a
moment. Think about what happens when Congress gets involved in
regulating this field, and guys in jackboots come knocking. Is it
smarter to have publicly stated 'Friendly AI is bunk' or to have said
'It's the only kind of AI worth building'?

Please, everyone working on a real AGI project that might hasten a
Singularity, you must learn this lesson from Yudkowsky. He's not wrong.

Keith

