On 28/05/07, Jef Allbright <[EMAIL PROTECTED]> wrote:

> Before you consider whether killing the machine would be bad, you have to
> consider whether the machine minds being killed, and how much it minds being
> killed. You can't actually prove that death is bad as a mathematical
> theorem; it is something that has to be specifically programmed, in the
> case of living things by evolution.

You're perpetuating a popular and pervasive moral fallacy here.

The assumption that the moral rightness of a decision is tied to
another's "personhood" or preferences is only an evolved heuristic,
selected for its effectiveness in promoting positive-sum
interactions between similar agents.

Any decision is ultimately a function of the decider alone, made in
terms of promoting its own values.

The morality of terminating a machine intelligence (or a person)
depends not on the preferences, or intensity of preferences, of the
object entity, but on the decision-making context and the expected
scope of consequences of the principle(s) behind such a choice.

To the extent that terminating the object entity would be expected to
promote the decider's values, the decision will be considered
"good."

To the extent that such a "good" decision commands agreement over a
larger context of social decision-making, and the desired values are
expected to be promoted over a larger scope, the decision will be
considered "moral."



Could you give an example of how this reasoning would apply, say, in the case
of humans eating meat?


--
Stathis Papaioannou
