On Dec 1, 2007 12:11 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> How can we design AI so that it won't wipe out all DNA-based life, possibly
> this century?
>
> That is the wrong question.  I was reading
> http://sl4.org/wiki/SoYouWantToBeASeedAIProgrammer and realized that (1) I am
> not smart enough to be on their team and (2) even if SIAI does assemble a
> team of the world's smartest scientists with IQs of 200+, how are they going
> to compete with a Jupiter brain with an IQ of 10^39?  Recursive
> self-improvement is necessarily an evolutionary algorithm.  It doesn't
> matter what the starting conditions are.  All that ultimately matters is the
> fitness function.
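Incidentally, the claim that only the fitness function matters is easy to
illustrate with a toy evolutionary loop (a minimal sketch of a generic
evolutionary algorithm of my own, not anyone's actual seed-AI design; the
peak at 42 and all the parameters are arbitrary): two populations started in
completely different places end up at the same optimum.

import random

def fitness(x):
    # the only thing that ultimately matters: a single peak at x = 42
    return -(x - 42) ** 2

def evolve(population, generations=300, mutation=5.0):
    for _ in range(generations):
        # keep the fitter half, refill with mutated copies of the survivors
        population.sort(key=fitness, reverse=True)
        survivors = population[:len(population) // 2]
        population = survivors + [x + random.gauss(0, mutation)
                                  for x in survivors]
    return population

for lo, hi in [(-1000, -900), (900, 1000)]:   # two very different starts
    pop = [random.uniform(lo, hi) for _ in range(100)]
    best = max(evolve(pop), key=fitness)
    print("start in (%d, %d): best ~ %.1f" % (lo, hi, best))  # both near 42

The starting conditions wash out under any selection scheme that keeps
favoring higher fitness.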

The question of whether a future AI is going to be moral is the same as
asking whether it is rational to be moral. In a recent paper of mine I proved
that it is.

Abstract. These arguments demonstrate the a priori moral nature of reality
and develop the basic understanding necessary for realizing the logical
maxim in Kant's categorical imperative [1], based on the implied goal of
evolution [2]. The maxim is used to prove that moral behavior is an
obligatory emergent phenomenon among evolving, interacting, goal-driven
agents.
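As a quick intuition for that claim (a standard toy model in the spirit of
the argument, not the actual proof from the paper), here is a
replicator-dynamics sketch over the iterated prisoner's dilemma, where a
reciprocating strategy competes against unconditional defectors and
unconditional cooperators:

# Replicator dynamics over the 200-round iterated prisoner's dilemma.
# Per-match scores for the row player, derived from the classic payoffs
# (mutual cooperation 3/3, mutual defection 1/1, defect vs. cooperate 5/0):
# e.g. tit-for-tat loses round 1 against a defector (0 vs. 5), then both
# defect for 199 rounds, giving 199 vs. 204.
SCORE = {
    "tit_for_tat":      {"tit_for_tat": 600, "always_defect": 199,
                         "always_cooperate": 600},
    "always_defect":    {"tit_for_tat": 204, "always_defect": 200,
                         "always_cooperate": 1000},
    "always_cooperate": {"tit_for_tat": 600, "always_defect": 0,
                         "always_cooperate": 600},
}

shares = {s: 1.0 / 3 for s in SCORE}   # equal initial population shares
for generation in range(100):
    # expected score of each strategy against the current population
    fitness = {s: sum(shares[t] * SCORE[s][t] for t in SCORE) for s in SCORE}
    mean = sum(shares[s] * fitness[s] for s in SCORE)
    # strategies reproduce in proportion to their relative fitness
    shares = {s: shares[s] * fitness[s] / mean for s in SCORE}

print({s: round(shares[s], 3) for s in SCORE})
# unconditional defection is driven out; the reciprocating strategy
# ends up dominating the population

Defection pays only while there are unconditional cooperators to exploit;
once those become rare, the reciprocators' mutual cooperation outscores it
and defection is selected out.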

You can find it at:

Practical Benevolence - a Rational Philosophy of Morality (A4 PDF, 11 pages,
456 KB):
http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-12-01_iostemp.pdf

Kind regards,

Stefan


-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
