--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > I did also look at http://susaro.com/archives/category/general but there
> > is no design here either, just a list of unfounded assertions. Perhaps
> > you can explain why you believe point #6 in particular to be true.
> 
> Perhaps you can explain why you described these as "unfounded 
> assertions" when I clearly stated in the post that the arguments to back 
up this list will come later, and that this list was intended just as a
> declaration?

You say, "The problem with this assumption is that there is not the slightest
reason why there should be more than one type of AI, or any competition
between individual AIs, or any evolution of their design."

Which is completely false. There are many competing AI proposals right now.
Why will this change? I believe your argument is that the first AI to achieve
recursive self-improvement will overwhelm all competition. Why should it be
friendly when the only goal it needs to succeed is acquiring resources? We
already have examples of self-replicating agents: the Code Red, SQL Slammer,
and Storm worms. A worm that can write and debug its own code and discover
new vulnerabilities will be unstoppable. Do you really think your AI will win
that race when you have the extra burden of making it safe?

Also, RSI is an experimental process, and therefore evolutionary. We have
already gone through the information-theoretic argument for why this must be
the case: an agent cannot fully predict the behavior of a design more
intelligent than itself, so candidate improvements can only be proposed
blindly and then tested, and blind variation plus selective retention is
evolution.
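
To make the point concrete, here is a minimal sketch in Python (my own
illustration, not anything from your post) of what self-improvement looks
like when the improver has no theory of which changes help. The names
run_experiment, mutate, and the parameter-vector "design" are all
hypothetical stand-ins: the improver can only propose blind variations and
keep the ones that measure better.

import random

# Hypothetical stand-in: the agent's "design" is a parameter vector,
# and the only way to evaluate a design is to run it and measure.

def run_experiment(design):
    # Hidden objective the improver cannot inspect analytically;
    # it observes only a noisy measured score.
    target = [0.3, -1.2, 0.8, 2.0]
    error = sum((d - t) ** 2 for d, t in zip(design, target))
    return -error + random.gauss(0, 0.05)

def mutate(design, scale=0.1):
    # Variation: a blind random change, since the improver cannot
    # predict in advance which changes are improvements.
    return [d + random.gauss(0, scale) for d in design]

def rsi_loop(design, generations=1000):
    best = run_experiment(design)
    for _ in range(generations):
        candidate = mutate(design)
        score = run_experiment(candidate)
        if score > best:  # selection: keep what tests better
            design, best = candidate, score
    return design, best

design, score = rsi_loop([0.0] * 4)
print("final design:", design, "measured score: %.3f" % score)

Making mutate() smarter does not change the structure; as long as the
outcome of a change can only be known by testing it, the process is
generate-and-test, which is evolution.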



-- Matt Mahoney, [EMAIL PROTECTED]
