--- Stefan Pernar <[EMAIL PROTECTED]> wrote:
> The question of whether a future AI is going to be moral is the same as asking
> whether it is rational to be moral. In a recent paper of mine I proved that it is.
> 
> Abstract. These arguments demonstrate the a priori moral nature of reality
> and develop the basic understanding necessary for realizing the logical
> maxim in Kant's categorical imperative[1] based on the implied goal of
> evolution[2]. The maxim is used to prove moral behavior to be an obligatory
> emergent phenomenon among evolving, interacting, goal-driven agents.
> 
> You can find it at:
> 
> Practical Benevolence - a Rational Philosophy of Morality - A4 PDF, 11
> pages, 456kb
> <http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-12-01_iostemp.pdf>

I disagree.  The proof depends on the axiom that existing is preferable to not
existing, where existence is defined as the ability to be perceived.
What is the justification for this?  Not evolution.  Evolution selects for
existence, whether or not that existence can be perceived.

There is selective pressure favoring groups whose members cooperate with each
other, e.g. the cells in your body.  At the same time there is selective
pressure on individuals to compete, e.g. cancerous cells.  Likewise, we see
cultural evolutionary pressure for both cooperation and competition among
humans, e.g. groups that practice nationalism and internal law enforcement are
more successful than either anarchists or lovers of world peace.
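The tension between those two pressures can be made concrete with a toy public-goods game.  The payoff parameters below (a benefit b shared with the group, a personal cost c) are illustrative assumptions of mine, not anything from Pernar's paper:

```python
# Toy public-goods game illustrating the two opposing selection pressures.
# b and c are illustrative assumptions, not values from the cited paper.
b, c = 0.5, 0.1

def payoffs(n_coop, n_total):
    """Return (cooperator payoff, defector payoff) in a group where
    n_coop of n_total members each contribute b at personal cost c
    and the pooled benefit is shared equally."""
    share = b * n_coop / n_total
    return 1 + share - c, 1 + share

# Within a mixed group, defectors always out-earn cooperators
# (individual-level pressure to compete, like the cancerous cell):
coop_pay, defect_pay = payoffs(5, 10)
assert defect_pay > coop_pay

# Yet an all-cooperator group out-earns an all-defector group
# (group-level pressure to cooperate, like the cells of a body):
assert payoffs(10, 10)[0] > payoffs(0, 10)[1]
```

Both assertions hold for any b > c > 0, which is the point: neither pressure eliminates the other, so evolution alone does not single out benevolence.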


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
