Kaj,

Another solid post.

I think you, Don Detrich, and many others on this list believe that, for
at least a couple of years, it's still pretty safe to go full speed ahead
on AGI research and development.  It appears from the post below that
both you and Don agree AGI can potentially present grave problems (which
distinguishes Don from some on this list who make fun of anyone who even
considers such dangers).  It appears the major difference between the two
of you is whether, and how much, we should talk and think about the
potential dangers of AGI in the next few years.

I believe AGI is so potentially promising that it would be irresponsible
not to fund it.  I also believe it is so potentially threatening that it
would be irresponsible not to fund efforts to understand such threats and
how they can best be controlled.  That work should start now, so that by
the time we start making and deploying powerful AGIs there will be a good
chance they are relatively safe.

At this point much more effort and funding should go into learning how to
increase the power of AGI than into how to make it safe.  But even now
there should be some funding for initial thinking and research (by
multiple people using multiple different approaches) on how to create
machines that provide maximal power with reasonable safety.  AGI could
actually happen very soon.  If the right team, or teams, were funded by
Google, Microsoft, IBM, Intel, Samsung, Honda, Toshiba, Matsushita, DOD,
Japan, China, Russia, the EU, or Israel (to name just a few), at a cost
of, say, 50 million dollars per team over five years, it is not totally
unrealistic to think one of them could have a system of the general type
envisioned by Goertzel, providing powerful initial AGI (although not
necessarily human-level in many ways), within five years.  The only
systems likely to get there soon are those that rely heavily on automatic
learning and self-organization, both of which are widely considered more
difficult to understand and control than other, less promising
approaches.

It would be inefficient to spend too much money on how to make AGI safe
at this early stage because, as Don points out, there is much about it we
still don't understand.  But I think it is foolish to say there is no
valuable research or theoretical thinking that can be done at this time
without at least first having a serious discussion of the subject within
the AGI field.

If AGIRI's purpose is, as stated in its mission statement, truly to
"Foster the creation of powerful and ethically positive Artificial General
Intelligence [emphasis added]," it would seem AGIRI's mailing list
would be an appropriate place to have a reasoned discussion about what
sorts of things can and should be done now to better understand how to
make AGI safe.

I for one would welcome such a discussion, on subjects such as: what are
the currently recognized major problems in getting automatic learning and
control algorithms of the type most likely to be used in AGI to operate
as desired; what are the major techniques for dealing with those
problems; and how effective have those techniques been?  (The toy sketch
below is meant to give a flavor of the kind of issue I have in mind.)
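
Here is that sketch: a toy Python example of my own (not drawn from any
actual AGI design) in which an evolutionary search is rewarded only on
the inputs it is actually tested on, so the behavior it settles on can
differ sharply from the intended behavior on inputs that were never
tested.

# Toy illustration only: a (1+1) evolutionary search over polynomial
# coefficients is rewarded solely on the inputs we happen to test.
# The evolved candidate matches the desired behavior on those inputs,
# but its behavior on untested inputs is left unconstrained and will
# typically diverge.

import random

random.seed(0)

TESTED_INPUTS = [0, 1, 2, 3]       # the only circumstances we evaluate

def desired(x):
    # The behavior we actually want on ALL inputs.
    return 2 * x

def evaluate(coeffs, x):
    # Value of the candidate polynomial at x.
    return sum(c * x ** i for i, c in enumerate(coeffs))

def fitness(coeffs):
    # Negative total error, measured ONLY on the tested inputs.
    return -sum(abs(evaluate(coeffs, x) - desired(x)) for x in TESTED_INPUTS)

def mutate(coeffs):
    # Perturb one randomly chosen coefficient.
    child = list(coeffs)
    child[random.randrange(len(child))] += random.uniform(-1.0, 1.0)
    return child

# Simple (1+1) evolutionary loop over a degree-4 polynomial.
best = [random.uniform(-1.0, 1.0) for _ in range(5)]
for _ in range(20000):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

for x in TESTED_INPUTS + [10, 25]:  # 10 and 25 were never tested
    print(x, round(desired(x), 2), round(evaluate(best, x), 2))

On the tested inputs the evolved polynomial looks fine; on the untested
inputs it will generally do something else entirely.  That gap between
what was optimized for and what was wanted is exactly the sort of problem
I think is worth discussing now.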

I would like to know how many other people on this list would welcome
such a discussion as well.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 10:11 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> So, let's look at this from a technical point of view. AGI has the
> potential of becoming a very powerful technology and misused or out of
> control could possibly be dangerous. However, at this point we have
> little idea of how these kinds of potential dangers may become
> manifest. AGI may or may not want to take over the world or harm
> humanity. We may or may not find some effective way of limiting its
> power to do harm. AGI may or may not even work. At this point there is
> no AGI. Give me one concrete technical example where AGI is currently
> a threat to humanity or anything else.
>
> I do not see how at this time promoting investment in AGI research is
> "dangerously irresponsible" or "fosters an atmosphere that could lead
> to humanity's demise". It us up to the researchers to devise a safe
> way of implementing this technology not the public or the investors.
> The public and the investors DO want to know that researchers are
> aware of these potential dangers and are working on ways to mitigate
> them, but it serves nobody's interest to dwell on dangers we as yet
> know little about and therefore can't control. Besides, it's a stupid
> way to promote the AGI industry or get investment to further
> responsible research.

It's not dangerously irresponsible to promote investment in AGI research,
in itself. What is irresponsible is to purposefully only talk about the
promising business opportunities, while leaving out discussion about the
potential risks. It's a human tendency to engage in wishful thinking and
ignore the bad sides (just as much as it, admittedly, is a human tendency
to concentrate on the bad sides and ignore the good). The more that we
talk about only the promising sides, the more likely people are to ignore
the bad sides entirely, since the good sides seem so promising.

The "it is too early to worry about the dangers of AGI" argument has some
merit, but as Yudkowsky notes, there was very little discussion about the
dangers of AGI even back when researchers thought it was just around the
corner. What is needed when AGI finally does start to emerge is a
/mindset/ of caution - a way of thinking that makes safety issues the
first priority, and which is shared by all researchers working on AGI. A
mindset like that does not spontaneously appear - it takes either decades
of careful cultivation, or sudden catastrophes that shock people into
realizing the dangers. Environmental activists have been talking about the
dangers of climate change for decades now, but they are only now starting
to get taken seriously. Soviet engineers obviously did not have a mindset
of caution when they designed the Chernobyl power plant, nor did its
operators when they started the fateful experiment. Most current AI/AGI
researchers do not have a mindset of caution that makes them consider
thrice every detail of their system architectures - or that would even
make them realize there /are/ dangers. If active discussion is postponed
to the moment when AGI is starting to become a real threat - if
advertisement campaigns for AGI are started without mentioning all of the
potential risks - then it will be too late to foster that mindset.

There is also the issue of our current awareness of risks influencing the
methods we use in order to create AGI. Investors who have only been told
of the good sides are likely to pressure the researchers to pursue
progress by any means available - or if the original researchers are aware
of the risks and refuse to do so, the investors will hire other
researchers who are less aware of them. To quote
Yudkowsky:

"The field of AI has techniques, such as neural networks and evolutionary
programming, which have grown in power with the slow tweaking of decades.
But neural networks are opaque - the user has no idea how the neural net
is making its decisions - and cannot easily be rendered unopaque; the
people who invented and polished neural networks were not thinking about
the long-term problems of Friendly AI. Evolutionary programming (EP) is
stochastic, and does not precisely preserve the optimization target in the
generated code; EP gives you code that does what you ask, most of the
time, under the tested circumstances, but the code may also do something
else on the side. EP is a powerful, still maturing technique that is
intrinsically unsuited to the demands of Friendly AI. Friendly AI, as I
have proposed it, requires repeated cycles of recursive self-improvement
that precisely preserve a stable optimization target.

The most powerful current AI techniques, as they were developed and then
polished and improved over time, have basic incompatibilities with the
requirements of Friendly AI as I currently see them. The Y2K problem -
which proved very expensive to fix, though not global-catastrophic -
analogously arose from failing to foresee tomorrow's design requirements.
The nightmare scenario is that we find ourselves stuck with a catalog of
mature, powerful, publicly available AI techniques which combine to yield
non-Friendly AI, but which cannot be used to build Friendly AI without
redoing the last three decades of AI work from scratch."


--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

