On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations.  Seems hard to believe, but for various technical
reasons I think we can make a very powerful case that this is exactly
what will happen.  After that, every other AGI will be the same way
(again, there is an argument behind that).  Furthermore, there will not
be any "evolutionary" pressures going on, so we will not find that (say)
the first few million AGIs are built with perfect motivations, and then
some rogue ones start to develop.

Matt Mahoney wrote:

In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html, this scenario would require the first
AGI to take the form of a worm.

That scenario is deeply implausible, and you can only continue to advertise
it because you ignore all of the arguments that I and others have given, on
many occasions, against it.

You repeat this line of black propaganda at every opportunity, yet you
refuse to directly address the many reasons why it is nonsense.

Why?




Richard Loosemore



Matt Mahoney replied:

It may indeed be peaceful if it depends on human cooperation to survive and
spread, as opposed to exploiting a security flaw. So it seems a positive
outcome depends on solving the security problem. If a worm is smart enough
to debug software and discover vulnerabilities faster than humans can (with
millions of copies working in parallel), the problem becomes more
difficult. (And this *is* an evolutionary process.) I guess I don't share
Richard's optimism.
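
To make the evolutionary point concrete, here is a toy simulation in Python
(every number in it is arbitrary, and "skill" is just a stand-in for how
quickly a copy finds exploitable bugs; nothing here is part of any real
design):

import random

# Toy model of selection among copies of a self-replicating program.
# Each copy has a "skill" score standing in for how quickly it finds
# exploitable bugs. Each generation, the most successful tenth of the
# population replicates with small random mutations and replaces the rest.

random.seed(0)
POP, GENERATIONS, MUTATION = 1000, 50, 0.05

population = [random.random() for _ in range(POP)]  # initial skills

for gen in range(GENERATIONS):
    # The top 10% by skill spread; each leaves 10 mutated children.
    survivors = sorted(population, reverse=True)[:POP // 10]
    population = [max(0.0, s + random.gauss(0, MUTATION))
                  for s in survivors for _ in range(10)]
    if gen % 10 == 0:
        mean = sum(population) / len(population)
        print("generation %2d: mean skill %.3f" % (gen, mean))

No copy has to be designed to be more dangerous; replication plus variation
plus selection is enough to push the population toward better
exploit-finders.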

I suppose a safer approach would be a centralized one, like most of the
projects run by people on this list. But I don't see how such systems could
compete with the vastly greater resources (human and computer) already
available on the internet. A distributed system with, say, Novamente and
Google as two of its millions of peers is certainly going to be more
intelligent than either system alone.
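
The core mechanism is message routing: each peer advertises what it knows,
and messages propagate to whichever peers appear most competent to handle
them. Here is a minimal sketch in Python (the peer names and the crude
word-overlap scoring are invented for illustration; the actual proposal is
on the page above):

# Sketch of routing in a distributed AGI: each peer advertises a profile
# of what it knows, and a message is forwarded to the peer whose profile
# best matches it. Word overlap stands in for a real relevance model.

def similarity(a, b):
    """Crude relevance score: fraction of shared words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / float(len(wa | wb) or 1)

# Hypothetical peer profiles; a real network would have millions.
peers = {
    "weather-bot": "weather forecast temperature rain storm",
    "math-solver": "algebra equation integral proof theorem",
    "search-peer": "web page document index query search",
}

def route(message):
    """Forward the message to the best-matching peer."""
    return max(peers, key=lambda p: similarity(message, peers[p]))

print(route("what is the forecast for rain tomorrow"))  # -> weather-bot

The point is that the intelligence lives in the network as a whole; no
single peer, Novamente or Google included, has to understand everything,
only to know which neighbor is likely to know more.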

You may wonder why I would design a dangerous system. First, I am not
building it. (I am busy with other projects.) But I believe that for
practical reasons something like this will eventually be built anyway, and
we need to study the design to make it safer.
