On Mon, 2 Dec 2002, Bill Hibbard wrote:

> Hi Stephen,
>
> On Sun, 1 Dec 2002, Stephen Reed wrote:
>
> > When one of the research groups in this forum, or elsewhere, begins
> > to demonstrate AGI, its management will have to decide how the US
> > military will use it.  So the issue you raise is a general one.
>
> First, I have no objection to current Cyc technology being used
> to fight terrorism, because neither Cyc nor any other current
> system is anywhere close to achieving real intelligence. Of
> course, I do want to see any technology applied consistently
> with the U.S. Constitution.

Agreed.

> However, when technology does develop real intelligence, then
> I think that resistance to military applications is necessary
> and in fact presents the best opportunity for educating the
> public to the dangers of machine intelligence. I will explain.

The recent press flap over the DARPA Total Information Awareness program
is a good model for the debate that should occur.

> In my view, intelligent behavior cannot be described by any set
> of rules explicitly written down by any programmers. Rather, the
> behavior must be learned. Of course, the learning behavior will
> be defined by rules written down by programmers, but that is
> different from the learned behavior. By analogy, the DNA for
> human brains is a set of rules for a learning architecture, but
> not a set of rules for language or other intelligent behaviors,
> which must be learned.

In general, agreed.
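
To make the distinction concrete, here is a minimal toy sketch (my own
illustration, not Cyc code or your proposal): the programmer writes
down only the learning rule, and the behavior - a preference for one
action over another - is learned from experience rather than coded.

    import random

    values = {"A": 0.0, "B": 0.0}   # the learned behavior starts empty
    ALPHA = 0.1                      # the fixed "DNA": a learning-rate rule

    def reward(action):
        # hidden environment: action "B" pays off more often than "A"
        return 1.0 if random.random() < (0.8 if action == "B" else 0.2) else 0.0

    for _ in range(10000):
        action = random.choice(["A", "B"])  # explore both actions
        # the only rule the programmer wrote: nudge the value
        # estimate toward the observed reward
        values[action] += ALPHA * (reward(action) - values[action])

    print(values)  # the preference for "B" emerges; no one wrote it down

No rule anywhere in that program says "prefer B"; that regularity is
extracted from the environment, which is the sense in which learned
behavior differs from the rules that define the learner.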

> Learning is reinforced by a set of values, generally called
> emotions in humans. Some behaviors are positively reinforced and
> others are negatively reinforced. Human emotional values are
> mostly for self-interest, though not entirely. This is nicely
> described in Steven Pinker's How the Mind Works.

I have the book and generally agree.

> When we develop intelligent machines, the key to human safety
> will be that their behaviors are positively reinforced by
> human happiness and negatively reinforced by human unhappiness.
> Of course there are lots of conflicts among humans. So machines
> will learn intelligent behaviors for resolving those conflicts
> equitably, just as legal systems require judges who can render
> intelligent judgements. The best model is the love of a mother
> for her children: she balances the interests of all her children
> and focuses her energy where it is needed most.

Agreed. I plan on following the approach of "Friendly AI".
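
As a concrete (and entirely hypothetical) sketch of what such a value
system might look like as a reward signal, one could reinforce the
machine on an equitable aggregate of human happiness, weighting the
worst-off person most heavily - roughly the mother balancing her
children:

    def equitable_reward(happiness):
        """happiness: per-person happiness scores in [0.0, 1.0]."""
        # rewarding the minimum, rather than the average, pushes the
        # learner to focus its energy where it is needed most
        return min(happiness)

    # an action that delights two people but harms a third scores
    # worse than one that leaves everyone moderately content
    print(equitable_reward([0.9, 0.9, 0.1]))  # -> 0.1
    print(equitable_reward([0.6, 0.6, 0.6]))  # -> 0.6

The aggregation function is of course where all the hard conflict
resolution questions hide; min() is just one choice that encodes the
"focus energy where it is needed most" intuition.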

> The greatest danger in the development of intelligent machines
> is that they will be built by corporations with learning values
> focused narrowly on corporate profits (this corresponds very
> closely with current applications of machine learning to financial
> investing). Or they will be built by militaries with learning
> values focused on killing enemies and preserving lives of
> friendly soldiers.

The latter is much more likely in my opinion. A separate discussion is
whether the US military has sufficient ethics to train and use an AGI on
behalf of US citizens; I believe that it does.

> It is important to generate public resistance before wealthy
> organizations build intelligent machines with learning values
> focused on narrow interests, rather than the happiness of all
> humans. Military applications provide an opportunity to make
> a clear analogy with nuclear, chemical and especially biological
> weapons, where the public and responsible leaders already
> understand the importance of banning such technologies.

The analogy between AGI and weapons of mass destruction/impact holds to
the degree that both are dangerous in the hands of our enemies, but it
fails in that AGI is potentially the greatest, most beneficial
technology - so it will likely be regulated rather than banned.
Government regulation of AGI is thus another issue to discuss; I favor
it, though many others distrust the US government.

> There will eventually be a terrific political battle over the
> values of intelligent machines. Powerful corporations will
> want machines that serve their narrow interests, and national
> security will motivate many to argue for unrestricted military
> applications. On the other hand, democracy, education and the
> free flow of information are increasing (although there are
> certainly challenges). Hopefully as the technology matures, a
> "Ralph Nader" of machine intelligence will raise the general
> public awareness.

I entirely agree. Although I do not share Ralph Nader's Green political
beliefs, I find many of his arguments persuasive.  One can imagine an
AGI with no party line or dogma, whose reasoning powers are objective.
It will be interesting to see what an evolving AGI contributes to
political debate (or to military ethics).

-Steve

-- 
===========================================================
Stephen L. Reed                  phone:  512.342.4036
Cycorp, Suite 100                  fax:  512.342.4040
3721 Executive Center Drive      email:  [EMAIL PROTECTED]
Austin, TX 78731                   web:  http://www.cyc.com
         download OpenCyc at http://www.opencyc.org
===========================================================
