Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
Because recursive self improvement is a competitive evolutionary process even
if all agents have a common ancestor.
As explained in a parallel post: this is a non sequitur.

OK, consider a network of agents, such as my proposal,
http://www.mattmahoney.net/agi.html
The design is an internet-wide system of narrow, specialized agents and an
infrastructure that routes (natural language) messages to the right experts. Cooperation with humans and other agents is motivated by an economy that
places negative value on information.  Agents that provide useful services and
useful information (in the opinion of other agents) gain storage space and
network bandwidth by having their messages stored and forwarded.  Although
agents compete for resources, the network is cooperative in the sense of
sharing knowledge.
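
Roughly, the routing and resource accounting might work like the sketch below.
This is only an illustration: the names (Agent, route_message), the
keyword-overlap scoring, and the credit bookkeeping are assumptions for the
example, not a fixed part of the proposal.

    # Toy sketch of routing a natural-language message to the most relevant
    # specialist and rewarding useful senders with storage/bandwidth credit.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        interests: set              # keywords describing this agent's specialty
        storage_credit: int = 100   # stands in for storage space / bandwidth earned

    def relevance(message: str, agent: Agent) -> int:
        """Crude keyword overlap between a message and an agent's interests."""
        return len(set(message.lower().split()) & agent.interests)

    def route_message(message: str, agents: list, sender: Agent) -> Agent:
        """Forward the message to the most relevant expert; the recipient
        rewards the sender if the information looks useful (in its opinion)."""
        best = max(agents, key=lambda a: relevance(message, a))
        if relevance(message, best) > 0:
            best.storage_credit -= 1    # recipient spends resources storing the message
            sender.storage_credit += 1  # useful senders gain storage and bandwidth
        return best

    # Example: a weather expert and a chess expert
    weather = Agent("weather", {"rain", "forecast", "temperature"})
    chess = Agent("chess", {"opening", "endgame", "checkmate"})
    alice = Agent("alice", set())
    print(route_message("what is the forecast for rain tomorrow", [weather, chess], alice).name)
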

Security is a problem in any open network.  I addressed some of these issues
in my proposal.  To prevent DoS attacks and vandalism, the protocol does not
provide a means to delete or modify messages once they are posted.  Agents
will be administered by humans who independently establish policies on which
messages to accept or ignore.  A likely policy is to ignore messages from
agents whose return address can't be verified, or messages unrelated to the
interests of the owner (as determined by keyword matching).  There is an
economic incentive to not send spam, viruses, false information, etc., because
malicious agents will tend to be blocked and isolated.  Agents will share
knowledge about other agents and gain a reputation by consensus.
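
A local accept/ignore policy along those lines might look something like the
sketch below; the field names, the keyword test, and the reputation threshold
are assumptions for the example rather than part of the protocol.

    # Toy accept/ignore policy for an agent's owner: require a verifiable
    # return address, relevance to the owner's interests, and a non-negative
    # consensus reputation for the sender.
    def accept_message(msg: dict, owner_interests: set, reputation: dict) -> bool:
        if not msg.get("verified_return_address", False):
            return False
        words = set(msg["body"].lower().split())
        if not words & owner_interests:                     # simple keyword matching
            return False
        return reputation.get(msg["sender"], 0.0) >= 0.0    # reputation shared by consensus

    # Example use
    rep = {"spam-bot-7": -5.0, "math-expert": 3.2}
    msg = {"sender": "math-expert",
           "body": "proof of the theorem you asked about",
           "verified_return_address": True}
    print(accept_message(msg, {"theorem", "proof", "algebra"}, rep))   # True
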

I foresee a problem when the collective computing power of the network exceeds
the collective computing power of the humans that administer it.  Humans will
no longer be able to keep up with the complexity of the system.  When your
computer says "please run this program to protect your computer from the
Singularity worm", how do you know you aren't actually installing the worm?

I would be interested in alternative AGI proposals that solve this problem of
humans being left behind, but I am not hopeful that there is a solution.  When
machines achieve superhuman intelligence, humans will lack the cognitive power
to communicate with them effectively.  An AGI talking to you would be like you
talking to your dog.  I suppose that uploading and brain augmentation would be
solutions, but then we wouldn't really be human anymore.

This whole scenario is filled with unjustified, unexamined assumptions.

For example, you suddenly say "I foresee a problem when the collective computing power of the network exceeds the collective computing power of the humans that administer it. Humans will no longer be able to keep up with the complexity of the system..."

Do you mean "collective intelligence"? If you mean collective computing power, I cannot see what measure you are using: my laptop already has greater computing power than I do, because it can do more arithmetic sums in one second than I have done in my life so far (rough figures below).

And either way, this comes right after a great big AND THEN A MIRACLE HAPPENS step! You were talking about lots of dumb, specialized agents distributed around the world, and then all of a sudden you start talking as if they could be intelligent. Why should anyone believe they would spontaneously do that? First they are agents, then all of a sudden they are AGIs and they leave us behind: I see no reason to allow that step in the argument.
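To spell out the laptop comparison with rough, assumed order-of-magnitude figures (guesses for illustration, not measurements):

    # Back-of-envelope comparison; all numbers are rough assumptions.
    laptop_ops_per_second = 1e9     # arithmetic rate of an ordinary laptop
    human_sums_per_day = 100        # generous estimate of sums done by hand
    days_lived = 50 * 365           # roughly fifty years

    lifetime_human_sums = human_sums_per_day * days_lived   # ~1.8 million
    print(laptop_ops_per_second > lifetime_human_sums)      # True: one second beats a lifetime
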

In short, it looks like an even bigger non sequitur than before.




Richard Loosemore
