Ben,

On 9th Jan you said:

> All I have to offer now are:
> 1) good parenting principles (i.e. a plan for teaching the system to
> be benevolent) 
> 2) an AGI framework that seems, theoretically, to be quite capable of
> supporting "learning to be benevolent" 
> 3) an intention to implement a careful "AGI sandbox" that we won't
> release our AGI from until we're convinced it is genuinely benevolent 

I think your three-level approach sounds good, although I think the idea 
of some basic hard wiring needs to be reconsidered in the context of 
your item 2).

My guess is that there is at least a partial hard-wired difference 
between animals that are social and those that are not.

There are probably lots of processes at work that make specific species 
'social' and not all of these are necessarily good candidates for 
attributes to build into an AGI.

Some creatures may well be social because they just feel rotten outside 
the company of others.  I'm not sure that this is a great basis for a 
holistic empathetic benevolent sociability.  It's more like a crude form of 
binding.

I think another dimension of sociability is the capacity for affiliation or 
identification - a sense of what's in the clan (what's in is a moral 
subject; what's out is an object that does not qualify for moral/ethical 
consideration).

Most humans affiliate with some other humans, but they are capable of 
feeling affiliation to some degree with the rest of humanity, and they 
clearly can (depending on their own personality and the prevailing 
culture) affiliate well beyond humans -- reaching out to many other 
life forms and beyond.  Our affiliation capacity is pretty malleable when 
you consider that some people love objects (their cars, etc.) with almost 
as much passion as they devote to other people.

I think that what we need in an AGI is some hard wiring that allows for this 
possibility of widespread moral affiliation.

In practice I think what this capacity does is to provide the basis for 
reconciling the needs/wants of self with the needs/wants of the wider 
community or even the universal whole.

My own work on ecological sustainability is driven by very much this 
dual affiliation - how can we care about specific humans/other living 
things/other species etc. while also caring for the whole 'community' of 
people/life.

It means that one ends up trying to develop strategies for "no major 
trade-offs" and for win-win solutions (these are not identical concepts).

I think that powerful organisms are very dangerous to living things at 
large when they apply their capacity for affiliation very narrowly and 
have no wider sense of affiliation, or when this wider sense is 
focussed/displaced in an other-worldly sense on a greater whole (a god 
or gods) but does not include other life on Earth.

I think it would be worth considering carefully the value of hard wiring a 
drive to affiliate with the widest possible whole that includes the 
tangible reality of life on Earth (20 million species etc.) and possible life 
elsewhere in the universe.

This might include an urge to comprehend and understand life (and 
ecology??) on the widest possible basis (universal/meta-universal!!) 
and a desire to contribute to opportunities/solutions that involve no 
major trade-offs / win-win outcomes for the benefit of humans, other 
species including the AGIs, and everything that can be cared about in the 
universe.

This doesn't involve hard wiring solutions, and it doesn't involve locking 
onto serving the interests of just one species (e.g. humans), but it does 
(hopefully) overcome or moderate the risk that AGIs might become 
obsessively self-serving individuals or cliques - with all the risks that 
that entails for others (human and non-human).

The other aspect of building in a capacity for affiliation is that it is 
probably the basis upon which altruism rests - i.e. either or both of a drive 
to serve and a capacity for self-sacrifice for a greater good.

I think Frans de Waal's work shows that the capacity for altruism in a 
number of species is real - it may have evolved because it benefited 
the group, but once the capacity for altruism was there it has been reified 
- it is now real.

By the way, it's clear that the Novamente team is deeply considering the 
ethical aspect of nurturing AGI minds.

But what proportion of AGI development projects are doing this?  And 
what if just a small percentage of projects do not develop a human-
friendly/life-friendly approach?

My guess is that at least some AGIs will emerge that are as nasty as the 
humans that create them and the organisations that support and fund the 
work.

How can we give the friendly AGIs the edge in the inevitable 
competition to come between AGI agents dedicated to serving narrow 
interests and those serving wider or notionally universal interests?  I 
think this edge needs to be not only in terms of intellect but also in 
terms of the resources needed to support and proliferate benevolent 
AGIs.

Cheers, Philip

Philip Sutton
Director, Strategy
Green Innovations Inc.
195 Wingrove Street
Fairfield (Melbourne) VIC 3078
AUSTRALIA

Tel & fax: +61 3 9486-4799
Email: <[EMAIL PROTECTED]>
http://www.green-innovations.asn.au/

Victorian Registered Association Number: A0026828M
