Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
We already have examples of reproducing agents: Code Red, SQL Slammer, Storm,
etc. A worm that can write and debug code and discover new vulnerabilities
will be unstoppable. Do you really think your AI will win the race when you
have the extra burden of making it safe?
Yes, because these "reproducing agents" you refer to are the most
laughably small computer viruses that have no hope whatsoever of
becoming generally intelligent. At every turn, you completely
underestimate what it means for a system to be "intelligent".
There are no intelligent or self-improving worms... yet. Are you confident
that none will ever be created even after we have automated human-level
understanding of code, which I presume will be one of the capabilities of AGI?
This is the Everything Just The Same, But With Robots fallacy.
You want me to imagine a scenario in which we have AGI, but in your
scenario these AGI systems are somehow not being used to produce
superintelligent systems, and these superintelligent systems are, for
some reason, not taking the elementary steps necessary to solve one of
the world's simplest problems (computer viruses).
These two suppositions are crazy. If AGI exists, how could it not be
used for building superintelligent AGI systems? Why assume that AGI
*will* have an impact on the production of viruses, making them
extremely dangerous, but at the same time the AGIs are not used for
anything else?
This is what I mean by scenarios that are just not thought through. You
have assumed AGI, but then placed it in a context in which everything
else is just the same as it is now.
Also, RSI (recursive self-improvement) is an experimental process, and
therefore evolutionary. We have already gone through the information
theoretic argument for why this must be the case.
No, you have not: I know of no "information theoretic argument" that
even remotely applies to the type of system that is needed to achieve
real intelligence. Furthermore, the statement that "RSI is an
experimental process, and therefore evolutionary" is just another
example of you declaring something to be true when, in fact, it is
loaded down with spurious assumptions. Your statement is a complete
non sequitur.
(sigh) To repeat, the argument is that an agent cannot deterministically
create an agent of greater intelligence than itself, because if it could it
would already be that smart.
You know what? I was going to reply to this argument YET AGAIN, but I
give up. I have finally had enough.
I am no longer going to correspond with you. I may occasionally reply
to your posts when you make dumb claims in front of others, thus
confusing the issues, but I will not engage with you.
You are so confused in your own mind that you do not know what you are
talking about, and it is a waste of time correcting you over and over again.
From now on, you are in my killfile.
Richard Loosemore
The best it can do is make educated guesses as
to what will increase intelligence. I don't argue that we can't do better
than evolution. (Adding more hardware is probably a safe bet.) But an agent
cannot even test whether another is more intelligent. In order for me to give
a formal argument, you would have to accept a formal definition of
intelligence, such as Hutter and Legg's universal intelligence, which is
bounded by algorithmic complexity. But you dismiss such definitions as
irrelevant. So I can only give examples, such as the ability to measure an IQ
of 200 in children but not adults, and the historical persecution of
intelligence (Socrates, Galileo, the Holocaust, the Khmer Rouge, etc.).
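For concreteness, a sketch of that definition as I understand it (the
notation below follows Legg and Hutter's paper; nothing in it is quoted from
earlier in this thread). The universal intelligence of an agent \pi is

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu, and V_\mu^\pi is the expected cumulative
reward that \pi earns in \mu. The 2^{-K(\mu)} weighting is where algorithmic
complexity enters: simple environments dominate the sum, and because K is
uncomputable, the measure itself cannot be computed or directly tested.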
A self-improving agent will have to produce experimental variations and let
them be tested in a competitive environment that it doesn't control or fully
understand, one that weeds out the weak. If it could model the environment or
test for intelligence, then it could reliably improve its intelligence,
contradicting our original assumption.
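To make the shape of that argument concrete, here is a minimal sketch (my own
toy illustration with made-up names such as environment_score; it is not a
model of any real RSI system): the agent can only propose blind variants of
itself and keep whichever ones an external, opaque environment rewards.

    import random

    def environment_score(genome):
        # Stand-in for the competitive environment the agent does not
        # control or fully understand; here it is just a hidden target
        # string that the agent never gets to inspect.
        target = "improve"
        return sum(a == b for a, b in zip(genome, target))

    def mutate(genome):
        # Blind variation: change one character at random.
        i = random.randrange(len(genome))
        return genome[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + genome[i + 1:]

    current = "xxxxxxx"
    for _ in range(2000):
        variant = mutate(current)
        # Selection is purely empirical: keep whichever variant the
        # environment rewards at least as much. The agent has no way to
        # certify "improvement" from the inside.
        if environment_score(variant) >= environment_score(current):
            current = variant
    print(current)

Nothing in the loop verifies that the surviving variant is smarter; the only
arbiter is the environment, which is the evolutionary situation described
above.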
This is an evolutionary process. Unfortunately, evolution is not stable. It
resides on the boundary between stability and chaos, like all incrementally
updated or adaptive algorithmically complex systems. By this I mean it tends
to a Lyapunov exponent of 0. A small perturbation in its initial state might
decay or it might grow. Critically balanced systems like this have a Zipf
distribution of catastrophes: an inverse relation between probability and
severity. We find this property in randomly connected logic gates (frequency
vs. magnitude of state transitions), software systems (frequency vs. severity
of failures), gene regulatory systems (frequency vs. severity of mutations),
and evolution (frequency vs. severity of plagues, population explosions, mass
extinctions, and other ecological disasters).
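As a toy illustration of the "randomly connected logic gates" case, a minimal
sketch (my own; the network size, K = 2 wiring, and run lengths are arbitrary
choices): flip one bit of a random Boolean network and watch how far the
damage spreads. Near the critical point the cascade sizes should be broadly
spread, with most perturbations dying out and a few growing large, rather
than clustering around a single typical scale.

    import numpy as np

    rng = np.random.default_rng(0)

    N, K = 500, 2                                  # nodes; K = 2 inputs per node is near criticality
    inputs = rng.integers(0, N, size=(N, K))       # random wiring
    tables = rng.integers(0, 2, size=(N, 2 ** K))  # random Boolean functions

    def step(state):
        # Each node reads its K inputs and looks up its next value.
        idx = np.zeros(N, dtype=int)
        for k in range(K):
            idx = (idx << 1) | state[inputs[:, k]]
        return tables[np.arange(N), idx]

    def damage(steps=50):
        # Run two copies that differ in a single bit and count how far they diverge.
        a = rng.integers(0, 2, size=N)
        b = a.copy()
        b[rng.integers(N)] ^= 1
        for _ in range(steps):
            a, b = step(a), step(b)
        return int((a != b).sum())

    sizes = sorted((damage() for _ in range(200)), reverse=True)
    print("ten largest cascades:", sizes[:10])
    print("median cascade size: ", sizes[len(sizes) // 2])

Whether the tail follows a strict Zipf law is an empirical question; the
point of the sketch is only that in such systems small causes sometimes have
very large effects.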
The latter should be evident in the hierarchical organization of geologic
eras. And a singularity is a catastrophe of unprecedented scale. It could
result in the extinction of DNA-based life and its replacement with
nanotechnology. Or it could result in the extinction of all intelligence.
The only stable attractor in evolution is a dead planet. (You knew this,
right?) Finally, I should note that intelligence and friendliness are not the
same as fitness. Roaches, malaria, and HIV are all formidable competitors to
Homo sapiens.
-- Matt Mahoney, [EMAIL PROTECTED]