Eliezer,
> I think most of us here take that point for granted, actually - can we
> accept it and move on? Is there anyone here who thinks AI morality
> can or should be a matter of source code?
I am not deeply experienced in issues of AI morality or the origin of
morality in biological life... so for me to offer a comment on your
question is either brave or foolish or both! But here's my intuition
for what it's worth.
I think that (a) architecture/coding and (b) learning are both essential
in developing moral behaviour in AGIs. I strongly feel that relying
wholly, or even largely, on one or the other will not work.
I agree with both Ben and yourself that morality in any advanced
general intelligence (biological or not) will depend mightily on good
training and that, even assuming that coding is important, the emergent
moral behaviour will not bear a simplistic, direct relationship to any
coding that may be involved.
But I think the coding will be critical
to giving any advanced general
intelligence a high aptitude for moral learning and for the effective,
adaptive application of morality.
For example, in the day-to-day work
I do on environmental
sustainability, I notice that people seem to find it terribly hard to
model multidimensional problems operating over large areas and long
time horizons - that is part of the reason why we find it hard to avoid global
warming or to create a robust state of global peace. Humans in general
have a tendency to grab their favourite bits of multidimensional
problems and elevate them above the other parts of the problem.
So I think it would help boost the aptitude of artificial general
intelligences if, coupled to a moral drive to seek outcomes with no
major trade-offs and win-win results for all life, and a motivational
pragmatic/aesthetic drive to retain valuable patterns, we also worked
to build in the capacity for complex whole-system modelling. I think it
would also be desirable to make sure that AGIs are given, at the
outset, in built-in form, well-developed tools for the easy and rapid
identification of at least some initial critical examples of 'life'.
It might also be worth building in
a curiosity to explore moral beliefs
among AGIs and other sentient beings - to seek the goodness in others'
moral beliefs/behaviours and to identify wrongness as well (in both the
AGI's own moral beliefs and the beliefs of others).
I know someone is going to say: but how do you code these abstract
ideas into programs? But I think this is ultimately doable through the
extension of high-level computer languages to encompass moral
concepts and, as a complementary measure, through the development of
specialist pattern recognition systems that are attuned to seeking out
patterns in the behaviour of advanced lifeforms that reflect moral
responses.
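
To make that a bit more concrete, here is a minimal sketch in Python of
the kind of thing I mean - moral concepts as first-class objects in a
high-level language, plus a crude pattern recogniser over observed
behaviour. Every name, field and rule in it (BehaviourEvent,
MoralConcept, the 'empathy' predicate) is an invented toy for
illustration, not a proposed design:

from dataclasses import dataclass
from typing import Callable

@dataclass
class BehaviourEvent:
    actor: str                # who acted
    target: str               # who/what was acted upon
    action: str               # e.g. "share", "rescue", "strike"
    cost_to_actor: float      # resources/risk the actor gave up
    benefit_to_target: float  # how much better off the target is

@dataclass
class MoralConcept:
    """A moral concept as a first-class value: a named predicate
    over observed behaviour."""
    name: str
    matches: Callable[[BehaviourEvent], bool]

# Toy rule: a costly action that benefits another looks empathic.
empathy = MoralConcept(
    name="empathy",
    matches=lambda e: e.cost_to_actor > 0 and e.benefit_to_target > 0,
)

def moral_patterns(events, concepts):
    """Crude pattern recogniser: tag each event with the moral
    concepts it appears to instantiate."""
    return [(e.action, [c.name for c in concepts if c.matches(e)])
            for e in events]

food_share = BehaviourEvent("adult", "juvenile", "share",
                            cost_to_actor=0.3, benefit_to_target=1.0)
print(moral_patterns([food_share], [empathy]))
# [('share', ['empathy'])]

The point is only that once moral concepts are ordinary values in the
language, a learning system can create, test and refine them like any
other data.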
For example, Frans de Waal (a very respected animal behaviourist)
tells a wonderfully instructive true story of an older bonobo (a
species somewhat like a chimpanzee, but much more peaceful in its
basic behaviours) that removed a captured bird from the clutches of a
juvenile and then climbed a tree, opened the bird's wings and threw it
into the air. De Waal, using human skills at sensing moral behaviour,
believes that the most probable explanation for this behaviour is that
the older bonobo felt empathy for the captured bird and that it
deliberately rescued it from probable death at the hands of the less
empathetic youngster.
De Waal also points out that bonobos especially, and all other apes
and many monkey species, devote a great deal of their time to studying
and memorising the relationships between members of their clan - even
keeping tabs on kinship relationships and hierarchies (all this is
backed up with observational data). This suggests (a) that these
creatures (humans included) have a drive to pay attention to clan
members and (b) that they have a large part of their brain devoted to
keeping track of all the social dimensions of the clan.
This is one argument for why big-brained primates emerged - that they
gained in evolutionary terms from social, cooperative behaviour, that
operating socially required a lot of brain grunt to keep tabs on the
group, and that possibly a large amount of human brain power that
could be used for other things 'came free' with the growth of the
brain to handle social interactions.
I guess what I'm thinking is that developing moral sensibility might
be analogous to developing a vision system. Images are analysed for
spatial regularities by the retina and the brain. I think we need to
think about what regularities there are in moral behaviour so that a
high-performance system can be built that lets AGIs 'see' the moral
aspects of what goes on around them.
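
As a sketch of that analogy - with invented detector names and
thresholds, assuming nothing about how a real system would work - the
toy 'moral retina' below runs fixed regularity detectors over a raw
stream of interactions, just as early vision runs edge and motion
detectors over raw pixels:

def detect_sharing(event):
    # Crude regularity: giving something that benefits the target.
    return event["action"] == "give" and event["benefit_to_target"] > 0

def detect_harm(event):
    # Crude regularity: any action that leaves the target worse off.
    return event["benefit_to_target"] < 0

MORAL_DETECTORS = {"sharing": detect_sharing, "harm": detect_harm}

def moral_retina(event_stream):
    """First-stage moral 'perception': map each raw event to a feature
    vector for a later learning system to interpret, much as the
    cortex interprets retinal output."""
    for event in event_stream:
        yield event, {name: f(event)
                      for name, f in MORAL_DETECTORS.items()}

events = [
    {"action": "give", "benefit_to_target": 1.0},
    {"action": "strike", "benefit_to_target": -1.0},
]
for event, features in moral_retina(events):
    print(event["action"], features)
# give {'sharing': True, 'harm': False}
# strike {'sharing': False, 'harm': True}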
All this is an intuition at this stage rather than a well-researched
idea. But I think there is something here worth exploring before we
dismiss 'hard wiring' as a minor or irrelevant part of the picture.
Cheers, Philip
Philip Sutton
Director, Strategy
Green Innovations Inc.
195 Wingrove Street
Fairfield (Melbourne) VIC 3078
AUSTRALIA
Tel & fax: +61 3 9486-4799
Email: <[EMAIL PROTECTED]>
http://www.green-innovations.asn.au/
Victorian Registered Association Number:
A0026828M