On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> So, let's look at this from a technical point of view. AGI has the potential
> of becoming a very powerful technology and misused or out of control could
> possibly be dangerous. However, at this point we have little idea of how
> these kinds of potential dangers may become manifest. AGI may or may not
> want to take over the world or harm humanity. We may or may not find some
> effective way of limiting its power to do harm. AGI may or may not even
> work. At this point there is no AGI. Give me one concrete technical example
> where AGI is currently a threat to humanity or anything else.
>
> I do not see how at this time promoting investment in AGI research is
> "dangerously irresponsible" or "fosters an atmosphere that could lead to
> humanity's demise". It is up to the researchers to devise a safe way of
> implementing this technology, not the public or the investors. The public and
> the investors DO want to know that researchers are aware of these potential
> dangers and are working on ways to mitigate them, but it serves nobody's
> interest to dwell on dangers we as yet know little about and therefore can't
> control. Besides, it's a stupid way to promote the AGI industry or get
> investment to further responsible research.

It's not dangerously irresponsible to promote investment in AGI
research in itself. What is irresponsible is to deliberately talk only
about the promising business opportunities while leaving out any
discussion of the potential risks. It is a human tendency to engage
in wishful thinking and ignore the bad sides (just as much as it,
admittedly, is a human tendency to concentrate on the bad sides and
ignore the good). The more we talk only about the promising sides,
the more likely people are to ignore the bad sides entirely, since
the good sides seem so promising.

The "it is too early to worry about the dangers of AGI" argument has
some merit, but as Yudkowsky notes, there was very little discussion
about the dangers of AGI even back when researchers thought it was
just around the corner. What is needed when AGI finally does start to
emerge is a /mindset/ of caution - a way of thinking that makes safety
issues the first priority, and which is shared by all researchers
working on AGI. A mindset like that does not spontaneously appear - it
takes either decades of careful cultivation, or sudden catastrophes
that shock people into realizing the dangers. Environmental activists
have been talking about the dangers of climate change for decades now,
but they are only now starting to be taken seriously. Soviet
engineers obviously did not have a mindset of caution when they
designed the Chernobyl power plant, nor did its operators when they
started the fateful experiment. Most current AI/AGI researchers do not
have a mindset of caution that makes them consider every detail of
their system architectures thrice - or that would even make them
realize there /are/ dangers. If active discussion is postponed until
the moment when AGI is starting to become a real threat - if
advertising campaigns for AGI are run without any mention of the
potential risks - then it will be too late to foster that mindset.

There is also the issue of how our current awareness of the risks
influences the methods we use to create AGI. Investors who have only
been told of the good sides are likely to pressure the researchers to
pursue progress by any means available - or, if the original
researchers are aware of the risks and refuse to do so, the investors
will hire other researchers who are less aware of them. To quote
Yudkowsky:

"The field of AI has techniques, such as neural networks and
evolutionary programming, which have grown in power with the slow
tweaking of decades. But neural networks are opaque - the user has no
idea how the neural net is making its decisions - and cannot easily be
rendered unopaque; the people who invented and polished neural
networks were not thinking about the long-term problems of Friendly
AI. Evolutionary programming (EP) is stochastic, and does not
precisely preserve the optimization target in the generated code; EP
gives you code that does what you ask, most of the time, under the
tested circumstances, but the code may also do something else on the
side. EP is a powerful, still maturing technique that is intrinsically
unsuited to the demands of Friendly AI. Friendly AI, as I have
proposed it, requires repeated cycles of recursive self-improvement
that precisely preserve a stable optimization target.

The most powerful current AI techniques, as they were developed and
then polished and improved over time, have basic incompatibilities
with the requirements of Friendly AI as I currently see them. The Y2K
problem - which proved very expensive to fix, though not
global-catastrophic - analogously arose from failing to foresee
tomorrow's design requirements. The nightmare scenario is that we find
ourselves stuck with a catalog of mature, powerful, publicly available
AI techniques which combine to yield non-Friendly AI, but which cannot
be used to build Friendly AI without redoing the last three decades of
AI work from scratch."
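
To make the evolutionary programming point more concrete, here is a
minimal, hypothetical sketch in Python (my own illustration, not from
Yudkowsky's text, and all names and numbers in it are made up): the
search is rewarded only for matching a target function on a handful of
tested inputs, so the fittest candidate can match those inputs closely
while doing "something else on the side" on inputs that were never part
of the fitness test.

import random

# The behaviour we "ask for": match f(x) = x**2, but fitness is only
# ever measured on these few tested inputs.
TESTED_INPUTS = [0.0, 1.0, 2.0]

def target(x):
    return x * x

def candidate_output(coeffs, x):
    # A candidate "program": the cubic a + b*x + c*x**2 + d*x**3.
    a, b, c, d = coeffs
    return a + b * x + c * x ** 2 + d * x ** 3

def fitness(coeffs):
    # Negative squared error, measured on the tested inputs only.
    return -sum((candidate_output(coeffs, x) - target(x)) ** 2
                for x in TESTED_INPUTS)

def mutate(coeffs, sigma=0.1):
    # Stochastic variation: perturb one randomly chosen coefficient.
    i = random.randrange(len(coeffs))
    new = list(coeffs)
    new[i] += random.gauss(0.0, sigma)
    return tuple(new)

def evolve(generations=5000, population_size=30):
    population = [tuple(random.uniform(-1.0, 1.0) for _ in range(4))
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("evolved coefficients:", [round(c, 3) for c in best])
    for x in TESTED_INPUTS + [10.0]:  # 10.0 was never part of the fitness test
        print(f"x={x}: evolved={candidate_output(best, x):.2f}"
              f" target={target(x):.2f}")

The evolved cubic will typically track x**2 closely at the tested
inputs 0, 1 and 2, yet wander far from it at the untested input 10 -
which is the sense in which the optimization target is preserved only
"under the tested circumstances", and why such techniques make it hard
to guarantee anything about behaviour outside the tested regime.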


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
