--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> This is nonsense:  the result of giving way to science fiction fantasies 
> instead of thinking through the ACTUAL course of events.  If the first 
> one is benign, the scenario below will be impossible, and if the first 
> one is not benign, the scenario below will be incredibly unlikely.
> 
> Over and over again, the same thing happens:  some people go to the 
> trouble of thinking through the consequences of the singularity with 
> enormous care for the real science and the real design of intelligences, 
> and then someone just waltzes in and throws all that effort out the 
> window and screams "But it'll become evil and destroy everything [gibber 
> gibber]!!"

Not everyone shares your rosy view.  You may have thought about the problem a
lot, but where is your evidence (proofs or experimental results) backing up
your view that the first AGI will be friendly, will remain friendly through
successive generations of RSI, and will quash all nonfriendly competition? 
You seem to ignore that:

1. There is a great economic incentive to develop AGI.
2. Not all AGI projects will have friendliness as a goal.  (In fact, SIAI is
the ONLY organization with friendliness as a goal, and they are not even
building an AGI).
3. We cannot even define friendliness.
4. As I have already pointed out, friendliness is not stable through
successive generations of recursive self improvement (RSI) in a competitive
environment, because this environment favors agents that are better at
reproducing rapidly and acquiring computing resources.  (A toy simulation
sketching this dynamic follows the list.)
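
To make point 4 concrete, here is a minimal toy model of the selection
argument, written in Python.  It is a sketch under my own assumptions, not a
result: I assume friendliness carries a fixed replication cost (COST), that
agents compete for a fixed pool of computing resources (POP_CAP), and that
the trait mutates slightly each generation.  All names and parameters are
illustrative.

import random

POP_CAP = 1000       # fixed pool of computing resources (carrying capacity)
COST = 0.3           # assumed replication cost of behaving friendly
GENERATIONS = 50
MUTATION = 0.05      # per-generation drift in the friendliness trait

# Start with a population of mostly friendly agents (friendliness f in [0,1]).
population = [random.uniform(0.8, 1.0) for _ in range(POP_CAP)]

for gen in range(GENERATIONS):
    offspring = []
    for f in population:
        # Expected copies per generation: less friendly -> faster replication.
        rate = 2.0 - COST * f
        n = int(rate) + (1 if random.random() < rate - int(rate) else 0)
        for _ in range(n):
            child = min(1.0, max(0.0, random.gauss(f, MUTATION)))
            offspring.append(child)
    # Resource competition: only POP_CAP agents survive to the next round.
    population = random.sample(offspring, min(POP_CAP, len(offspring)))

print("mean friendliness after %d generations: %.3f"
      % (GENERATIONS, sum(population) / len(population)))

Run it a few times and the mean trait value drifts downward.  The point is
not the particular numbers, but that nothing in the setup rewards staying
friendly once replication speed and resource acquisition are what get
selected for.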

RSI requires an agent to have enough intelligence to design, write, and debug
software at the same level of sophistication as its human builders.  How do
you propose to counter the threat of intelligent worms that discover software
exploits as soon as they are published?  When the Internet was first built,
nobody thought about security.  Security is a much harder problem when the
worms are smarter than you are, when they can predict your behavior more
accurately than you can predict theirs.


-- Matt Mahoney, [EMAIL PROTECTED]
