Stathis Papaioannou wrote:

Jef Allbright writes:

[Stathis Papaioannou]
>> If slavery could be scientifically shown to promote the well-being of
>> the species as a whole does that mean we should have slavery? Does it
>> mean that slavery is good?
Teaching that slavery is "bad" is similar to teaching that lying is
"bad".  In each case it's a narrow over-simplification of a more
general principle of what works. Children are taught simplified modes
of moral reasoning to match their smaller context of understanding.
At one end of a moral scale are the moral instincts (experienced as
pride, disgust, etc.) that are an even more condensed form of
"knowledge" of what worked in the environment of evolutionary
adaptation. Further up the scale are cultural--including
religious--laws and even the patterns of our language that further
codify and reinforce patterns of interaction that worked well enough
and broadly enough to be taken as principles of "right" action.

Relatively few of us take the leap beyond the morality that was
inherited or given to us, to grasp the broader and more extensible
understanding of morality as patterns of behavior assessed as
promoting increasingly shared values over increasing scope. Society
discourages individual thinking about what is and what is not moral;
indeed, it is a defining characteristic of moral principles that they
subsume both narrow self-interest and narrow situational awareness.
For this reason, one cannot assess the absolute morality of an action
in isolation, but we can legitimately speak of the relative morality
of a class of behavior within context.

Just as lying can clearly be the right action within a
specific context (imagine having one's home invaded and
being unable, on moral grounds, to lie to the invaders about
where the children are hiding!), the moral issue of slavery
can be effectively understood only within a larger context.

The practice of slavery (within a specific context) can be beneficial
to society; numerous examples exist of slavery contributing to the
economic good of a locale, and on a grander scale, to the development
of western philosophy (including democracy!) by freeing some from the
drudgery of manual labor and creating an environment conducive to
deeper thought. And as we seek to elucidate a general principle
regarding slavery, we come face-to-face with other instances of this
class of problem, including the right of women to vote, the moral
standing of sentient beings of various degrees of awareness (farm
animals, the great apes, artificial intelligences), and even the idea
that all "men", of disparate mental and emotional capability, are
"created equal".  Could there be a principle constituting a coherent
positive-sum stance toward issues of moral interaction between agents
of inherently different awareness and capabilities?

Are we as a society yet ready to adopt a higher level of social
decision-making, "moral" to the extent that it effectively promotes
increasingly shared values over increasing scope, one that provides
an increasingly clear vision of effective interaction between agents
of diverse and varying capabilities?  Or are we going to hold tightly
to the previous best model, one that comfortingly but childishly
insists on the fiction of some form of strict equality between
agents?  Are we mature enough to see that just at the point in human
progress where technological development (biotech, nanotech, AI)
threatens to drastically disrupt that which we value, we are gaining
the necessary tools to organize at a higher level--effectively a
higher level of wisdom?

Well, I think slavery is bad, even if it does help society - unless
we were actually in danger of extinction without it, or something. So
yes, the moral rules must bend in the face of changing circumstances,
but the point at which they bend will be different for each
individual, and there is no objective way to define what this point
would or should be.

I thought you and I had already clearly agreed that there can be no
absolute or objective morality, since moral judgments are based on
subjective values.  And I thought we had already moved on to discussion
of how agents do in fact hold a good portion of their subjective values
in common, due to common environment, culture, and evolutionary
heritage.  In my opinion, the discussion begins to get interesting from
this point, because the population tends to converge on agreement as to
general principles of effective interaction, while tending to diverge on
matters of individual interests and preferences.

Please notice that I don't say that slavery *is* immoral, because as you
well know there's no objective basis for that claim. But I do say that
people will increasingly agree in their assessment that it is highly
immoral.  Their *statements* are objective facts, and measurements of
the degree of agreement are objective facts, and on this basis I claim
that we can implement an improved form of social decision-making as
described earlier.

People will increasingly agree that slavery is immoral because IT
DOESN'T WORK TO PROMOTE INCREASINGLY SHARED VALUES OVER INCREASING
SCOPE.  It's nowhere near being a positive-sum game within any
reasonable context.  In terms of first-order consequences it's a
terribly wasteful, terribly inefficient use of human intelligence and
ability within the structure of a modern society.  In terms of
second-order consequences it perpetuates patterns of fear, hate,
ignorance and conflict, all of which have objective, measurable
consequences of their own.

But it's not for the childishly simplistic fictional reason which most
people have been taught, that all "men" are somehow perfectly equal in
some moral aspect.  People are beginning to struggle with this
simplistic rule, adding women to the equation, and more recently trying
to reckon where non-human agents can fit into this scheme, without
realizing yet that it's the scheme itself that's been broken, never
possessing the kind of generality that it claimed. The idea that "all
men are created equal" was a great improvement over the widespread
imbalances of the feudal system, given that the structure of society was
changing so as to decentralize the means of production with all its
attendant implications, but this shining principle was still only a
rough approximation of a general principle of effective interaction of
social agents.

Please note that we actually do have many real-world examples of
combined subjective assessments determining objective decision points.
The stock market may be the most apt analogy at this point in our
discussion.
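
To make the analogy concrete, here is a minimal sketch in Python (the
agents, their ratings, and the aggregation rule are my own hypothetical
choices for illustration, not anything established above): each agent
reports a subjective assessment, and the aggregate statistic, like a
market clearing price, is an objective, measurable fact.

    from statistics import median

    def decision_point(assessments):
        # Each assessment is one agent's subjective valuation (e.g. a
        # bid price, or a 0-1 rating of how immoral a practice is).
        # The individual numbers are subjective; the aggregate --
        # like a market clearing price -- is an objective fact.
        return median(assessments)

    def degree_of_agreement(assessments):
        # An objective measure of convergence on 0-1 ratings: 1.0 when
        # all agents agree, falling toward 0.0 as ratings spread apart.
        return 1.0 - (max(assessments) - min(assessments))

    # A hypothetical population rating "slavery is immoral" on a 0-1 scale.
    ratings = [0.95, 0.90, 0.99, 0.85, 0.92]
    print(decision_point(ratings))       # 0.92
    print(degree_of_agreement(ratings))  # ~0.86

Nothing hinges on the median or the min-max spread in particular; any
aggregation rule applied uniformly would do for the point being made.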


Slightly off topic: I don't see why we would design AIs to experience
emotions such as resentment, anger, fear, pain, etc.

I agree.  Such add-ons would tend to interfere with their primary value
system.

In fact, if we could reprogram our own minds at will, it would be a
very different world. Suppose you were upset because you lost your
job. You might decide to stay upset to the degree that it remains a
motivating factor to look for other work, but not so much that it
affects your sleep, ability to experience pleasure, etc. If you can't
find work you might decide to downgrade your expectations, so that
you are just as content having less money or a menial job; or just as
content for the next six months, but then have the motivation to look
for interesting work kick in again, without the confidence- and
enthusiasm-sapping disappointment that comes from repeated failure to
find work. Or any variation on the above you can imagine.

I agree that would seem to be a rational improvement.
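
As a purely illustrative sketch (my own, with every name and field
hypothetical), one could picture such a reprogrammable emotional
response as a handful of explicit, independently adjustable
parameters, rather than one fixed reaction:

    from dataclasses import dataclass, replace

    @dataclass
    class EmotionalResponse:
        motivation_boost: float   # how strongly it drives the job search
        sleep_impact: float       # how much it disturbs sleep
        pleasure_impact: float    # how much it dulls enjoyment

    # Stay upset enough to motivate the search, but decouple the upset
    # from sleep and from the ability to experience pleasure.
    upset_about_job_loss = EmotionalResponse(
        motivation_boost=0.8, sleep_impact=0.0, pleasure_impact=0.0)

    # "Content for the next six months, but then have the motivation
    # to look for interesting work kick in again."
    resigned_for_now = replace(upset_about_job_loss, motivation_boost=0.0)
    after_six_months = replace(resigned_for_now, motivation_boost=1.0)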

Merry Newtonmas to all.
- Jef
