Stathis Papaioannou wrote:


Jef Allbright writes:

[Stathis Papaioannou]
If slavery could be scientifically shown to promote the well-being of the species as a whole, does that mean we
should have slavery? Does it mean that slavery is good?

Teaching that slavery is "bad" is similar to teaching that lying is
"bad".  In each case it's a narrow over-simplification of a more general
principle of what works. Children are taught simplified modes of moral
reasoning to match their smaller context of understanding. At one end of
a moral scale are the moral instincts (experienced as pride, disgust,
etc.) that are an even more condensed form of "knowledge" of what worked
in the environment of evolutionary adaptation. Further up the scale are
cultural--including religious--laws and even the patterns of our
language that further codify and reinforce patterns of interaction that
worked well enough and broadly enough to be taken as principles of
"right" action.
Relatively few of us take the leap beyond the morality that was
inherited or given to us, to grasp the broader and more extensible
understanding of morality as patterns of behavior assessed as promoting
increasingly shared values over increasing scope. Society discourages
individual thinking about what is and what is not moral; indeed, it is a
defining characteristic of moral principles that they subsume both narrow
self-interest and narrow situational awareness.  For this reason, one cannot
assess the absolute morality of an action in isolation, but we can
legitimately speak of the relative morality of a class of behavior
within context.

Just as lying can clearly be the right action within a specific context
(imagine having one's home invaded and being unable, on moral grounds,
to lie to the invaders about where the children are hiding!), the moral
issue of slavery can be effectively understood only within a larger
context.
The practice of slavery (within a specific context) can be beneficial to
society; numerous examples exist of slavery contributing to the economic
good of a locale and, on a grander scale, to the development of western
philosophy (including democracy!) by freeing some from the drudgery of
manual labor and creating an environment conducive to deeper thought.
And as we seek to elucidate a general principle regarding slavery, we
come face-to-face with other instances of this class of problem,
including the right of women to vote, the moral standing of sentient
beings of varying degrees of awareness (farm animals, the great apes,
artificial intelligences), and even the idea that all "men", of
disparate mental or emotional capability, are "created equal".  Could
there be a principle constituting a coherent positive-sum stance toward
issues of moral interaction between agents of inherently different
awareness and capabilities?

Are we as a society yet ready to adopt a higher level of social
decision-making, "moral" to the extent that it effectively promotes
increasingly shared values over increasing scope, one that provides an
increasingly clear vision of effective interaction between agents of
diverse and varying capabilities, or are we going to hold tightly to the
previous best model, one that comfortingly but childishly insists on the
fiction of some form of strict equality between agents?  Are we mature
enough to see that just at the point in human progress where
technological development (biotech, nanotech, AI) threatens to
drastically disrupt that which we value, we are gaining the necessary
tools to organize at a higher level--effectively a higher level of
wisdom?

Well, I think slavery is bad, even if it does help society - unless we were actually in danger of extinction without it or something.

Slavery is bad almost by definition. It consists in treating beings we empathize with as though we had no empathy.
So yes, the moral rules must bend in the face of changing circumstances, but the point at which they bend will be different for each individual, and there is no objective way to define what this point would or should be.

Slightly off topic, I don't see why we would design AIs to experience emotions such as resentment, anger, fear, pain, etc.

John McCarthy says in his essay, "Making Robots Conscious of their Mental States":
http://www-formal.stanford.edu/jmc/consciousness/consciousness.html

In fact, if we could reprogram our own minds at will, it would be a very different world.

Better living through chemistry!

Suppose you were upset because you lost your job. You might decide to stay upset to the degree that it remains a motivating factor in looking for other work, but not let it affect your sleep, your ability to experience pleasure, and so on. If you can't find work, you might decide to downgrade your expectations, so that you are just as content having less money or a menial job; or you might decide to be just as content for the next six months, but then have the motivation to look for interesting work kick in again, without the confidence- and enthusiasm-sapping disappointment that comes from repeated failure to find work.

I think that's called a cocaine habit. :-)

Brent Meeker
