Matt Mahoney wrote:
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Matt Mahoney wrote:
>>> I did also look at http://susaro.com/archives/category/general but
>>> there is no design here either, just a list of unfounded assertions.
>>> Perhaps you can explain why you believe point #6 in particular to be
>>> true.
>>
>> Perhaps you can explain why you described these as "unfounded
>> assertions" when I clearly stated in the post that the arguments to
>> back up this list will come later, and that this list was intended
>> just as a declaration?
You say, "The problem with this assumption is that there is not the slightest
reason why there should be more than one type of AI, or any competition
between individual AIs, or any evolution of their design."
Which is completely false. There are many competing AI proposals right now.
Why will this change? I believe your argument is that the first AI to achieve
recursive self improvement will overwhelm all competition. Why should it be
friendly when the only goal it needs to succeed is acquiring resources?
Because you have failed to look into this in enough depth to realize
that you cannot build an AGI that will actually work if its goal is to
do nothing but "acquire resources".

Your claim that "[this] is completely false" rests on exactly such
assumptions.
My point, though, is that people like you make wild assumptions that you
have not thought through, and then go around making irresponsible
declarations that AGI *will* be like this or that, when in fact the
assumptions on which you base these assertions are deeply flawed.
My list of nine misunderstandings was an attempt to redress the balance
by giving what I believe to be a summary (NOTE: it was JUST a summary,
at this stage) of the more accurate picture that emerges when you start
from more accurate assumptions.
Now, I am sure that there will be elements of my (later) arguments that
are challengeable, but at this stage I wanted to draw a line in the
sand, and also make it clear to newcomers that there is at least one
body of thought that says that everything being assumed right now is
completely and utterly misleading.
> We already have examples of reproducing agents: Code Red, SQL Slammer,
> Storm, etc. A worm that can write and debug code and discover new
> vulnerabilities will be unstoppable. Do you really think your AI will
> win the race when you have the extra burden of making it safe?
Yes, because these "reproducing agents" you refer to are the most
laughably small computer viruses, which have no hope whatsoever of
becoming generally intelligent. At every turn, you completely
underestimate what it means for a system to be "intelligent".
> Also, RSI is an experimental process, and therefore evolutionary. We
> have already gone through the information theoretic argument why this
> must be the case.
No, you have not: I know of no "information theoretic argument" that
even remotely applies to the type of system that is needed to achieve
real intelligence. Furthermore, the statement that "RSI is an
experimental process, and therefore evolutionary" is just another
example of you declaring something to be true when, in fact, it is
loaded down with spurious assumptions. Your statement is a complete
non sequitur.
Richard Loosemore