On 5/29/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:


On 29/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote:

> I. Any instance of rational choice is about an agent acting so as to
> promote its own present values into the future.  The agent has a model
> of its reality, and this model will contain representations of the
> perceived values of other agents, but it is always only the agent's
> own values that are subjectively promoted.  A choice is considered
> "good" to the extent that it is expected (by the agent) to promote its
> present values into the future.

So whether I'm being selfish or altruistic from an external perspective, I
can really only be selfish from my own perspective, since I am promoting my
own values, wherever that leads me. Does this mean that my own values are by
definition "good", but not necessarily "moral", going by what you say below?

I think you understand this point, but the words remain slippery and
subject to the pull of customary usage.  To be clear, this meta-ethics
says nothing about whether your values are good, but only that you
must see them (more precisely, actions resulting from their coherent
expression) as leading to greater good.


> II. A choice is considered increasingly "moral" (or "right") to the
> extent that it is assessed as promoting an increasingly shared context
> of decision-making (e.g. involving more values, the values of more
> agents) over increasing scope of consequences (e.g. over more time,
> more agents, more types of interactions.)  In other words, a choice is
> considered "moral" to the extent that it is seen as "good" over
> increasing context of decision-making and increasing scope of
> consequences.

That has a utilitarian ring to it.

Utilitarian moral philosophy fails because it neglects the
evolutionary dynamics (it assumes a fixed context.)

This "Arrow of Morality" does not point to what is "good", but only
points out the direction of increasing subjective "good."  The utility
here is the meta-utility of knowing the direction (essentially the
direction of growth of positive-sum configurations promoting
subjective values), rather than the utility of any presumed goal.


> III.  Due to our inherent subjectivity with regard to anticipating the
> extended consequences of our actions, "increasing scope of
> consequences" refers to the power and general applicability of the
> *principles* we apply to promoting our values, rather than any
> anticipated *ends.*

But the principle, however broad it is, assumes some end, doesn't it?

No, that would lead to inconsistency.

The key point here is that the agent who values "bridgeness" exploits
best-known principles to express those values, and the particular
future bridge configuration emerges (it may not even be a bridge, if
something revolutionarily unbridgelike emerges).  This is in contrast to
the popular (mis)conception that we start with a complex goal in mind
and then find a way to make it work.  That approach is fine, but only
within a well-defined context, which is rare in the domain of human
affairs.

Increasing awareness of effective bridge-building principles leads to
increasingly "good" bridges, bridges that an increasing context of
decision-makers would agree are not only "good" but "right."

Significantly, this increasing convergence on principles of "right"
bridge-building supports increasing divergence of actual bridge
implementations "that work."


> IV. Due to our inherent subjectivity with regard to our role in the
> larger system, our values lead to choices that lead to actions that
> affect our environment, feeding back to us and thus modifying our values.
> This feedback process thrives on increasingly divergent expressions of
> increasingly convergent subjective values.  This implies a higher
> level dynamic similar to our ideas of cooperation, synergy, or
> positive-sumness.

I think I see. Is this a description of how ethics actually functions, or a
prescription for how it ought to function? It would seem that this feedback
mechanism will in the long run find the "optimal" ethics, although this
process could be sped up by starting from a better base.

This is a descriptive meta-ethics rather than a prescriptive ethics
(which is necessarily context-dependent.)  Although it is "only"
descriptive (as the laws of physics are only descriptive), we as
subjective agents can apply this understanding toward improving our
subjective progress.


> It's not that lying to others is "bad" because one doesn't like being
> lied to, but rather, lying is bad in principle because it's
> anti-cooperative over many scales of interaction, and therefore in a
> very powerful but indirect way leads to diminishment, rather than
> promotion of one's values (those that work) into the future.  Or
> conversely, one acts to promote one's values, and in the bigger picture
> this is best achieved via principles of cooperation (entailing not
> lying) with others who have similar models of the world.
>
> It's not that eating meat is "bad" because one certainly wouldn't want
> to be eaten oneself, but rather, that eating others is
> anti-cooperative to the extent that others are similar to oneself,
> leading in principle to diminishment, rather than promotion, of the
> values that one would like to see in the future created by one's
> choices.

Sure, but an essential part of the badness of lying to and eating people is
that they are in fact people. It wouldn't be the same if we were talking
about lying to and eating vegetables, for example.

Please note that I clearly said "to the extent that others are similar
to oneself..."  But this has nothing to do with some imagined
intrinsic "personhood", and everything to do with potential for
cooperation.
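
To make the positive-sum point concrete, here is a minimal sketch in
Python of repeated interactions in which "defect" stands in for lying.
The payoff numbers and the two strategies are illustrative assumptions,
not part of the argument; the point is only that over many rounds
mutual cooperation out-produces mutual defection, which is the sense in
which anti-cooperative choices diminish, rather than promote, one's
values into the future.

# Illustrative payoffs (standard prisoner's-dilemma ordering); "cooperate"
# stands in for honest dealing, "defect" for lying.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def tit_for_tat(history):
    # Cooperate first, then mirror the other agent's previous move.
    return "cooperate" if not history else history[-1][1]

def always_defect(history):
    # The persistently "lying" strategy: defect regardless of context.
    return "defect"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []   # each agent records (own move, other's move)
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): positive-sum
print(play(always_defect, always_defect))  # (100, 100): mutual diminishment
print(play(tit_for_tat, always_defect))    # defector gains once, then both lose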


If AIs were more like
vegetables than people in their reaction to being lied to or eaten, then, all
else being equal, it wouldn't be so bad to lie to them or eat them.

And at this point the extended understanding collapses into the black
hole of a heuristic attractor.

Yes Stathis, lying and eating people (and entities like people) is bad.

- Jef
