David Noziglia wrote:
> In fact, ethical systems of cooperation are really, on a very simplistic
> level, ways of improving the lives of individuals.  And this is not true
> because of strictures from on high, but for reasons of real-world
> self-interest.  Thus, the Nash Equilibrium, or the results of Axelrod's
> Tit-for-Tat tournaments, shows that an individual's life is better in an
> environment where players cooperate.  Being nice is smart, not just moral.
> Other experiments have shown that much hard-wired human and animal
> behavior is aimed at enforcing cooperation by punishing "cheaters," and
> that cooperation has survival value!
>
> I reference here, quickly, Darwin's Blind Spot, by Frank Ryan, which
> argues that symbiotic cooperation is a major creative force in evolution
> and biodiversity.
>
> Thus, simply giving AGI entities a deep understanding of game theory and
> the benefits of cooperative society would have far greater impact on
> their ability to interact productively with the human race than
> hard-wired instructions to follow the Three Laws, which could someday be
> overwritten.

Hmmm.  Well, there is a lot of truth to this approach.

But it has one major shortcoming: the "ethical behavior is valuable for
self-interest" conclusion only holds in certain situations, and these
situations may NOT be the ones that obtain in a future consisting of humans
and AGIs.

In particular, your conclusion seems most valid in the case of a population
of *roughly equally powerful* entities.

Rational self-interest does not stop us from knocking down forests to build
cities, in spite of all the ants and squirrels that are rendered homeless or
dead as a consequence.

If an AGI is sufficiently more powerful than humans, then taking a "tit for
tat" approach with us is no longer significantly valuable to its
self-interest.  In this scenario, we need to rely on its non-self-interested
benevolence.
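
An aside, to make this concrete: here is a toy iterated Prisoner's
Dilemma in Python.  The payoff numbers and the two strategies are my own
illustrative assumptions, not anything from David's post.  Tit-for-tat
enforces cooperation only because retaliation hurts the defector; make
one player's payoffs insensitive to the other's moves, and that leverage
vanishes.

def iterated_game(rounds, strat_a, strat_b, payoff):
    # Each strategy sees only the opponent's previous move.
    last_a = last_b = "C"              # assume cooperative openings
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        pa, pb = payoff[(a, b)]
        total_a += pa
        total_b += pb
        last_a, last_b = a, b
    return total_a, total_b

tit_for_tat   = lambda opp_last: opp_last   # copy opponent's last move
always_defect = lambda opp_last: "D"        # exploit unconditionally

# Rough equals: conventional payoffs (T=5, R=3, P=1, S=0).
SYMMETRIC  = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
# Power asymmetry: player A's payoff depends only on A's own move.
ASYMMETRIC = {("C", "C"): (3, 3), ("C", "D"): (3, 5),
              ("D", "C"): (5, 0), ("D", "D"): (5, 1)}

print(iterated_game(100, always_defect, tit_for_tat, SYMMETRIC))
# -> (104, 99): against a peer, A's defection earns 104, far below the
#    300 it would earn through mutual tit-for-tat cooperation.
print(iterated_game(100, always_defect, tit_for_tat, ASYMMETRIC))
# -> (500, 99): B's retaliation costs A nothing, so defection (500) now
#    beats cooperation (300) for A.

The asymmetric case is the ant-and-squirrel situation: nothing the
weaker party does changes the payoffs the more powerful agent faces, so
cooperation drops out of the powerful agent's rational self-interest.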

If it finds that its desires/goals lead it to want to do something that
would indirectly be harmful to us (i.e., the analogue of us razing a forest
to build a city), will it

a) resist that desire and spend its time finding a way to achieve its goals
without harming us, or
b) at least act to minimize the harm?

If we cared about ants and squirrels that much, then before creating a new
city on top of a forest, we would undertake a massive Ant and Squirrel
Relocation Program.  We don't.  We take care of endangered species to some
extent, but we ignore the individual lives of organisms of other species.

So, I agree, we need to teach an AGI about rational self-interest and the
way that cooperation harmonizes with it.

But I think there is an aspect of benevolence that goes beyond rationality.
Rationality is about what one does to fulfill one's goals -- morals and
ethics are about what the goals are in the first place.  Benevolence and
respect for all forms of life need to be there *in the goal system*.  Not
hardwired in, in any sense -- rather, taught and fully internalized.

-- Ben Goertzel


