BTW, for those who are newbies to this list, Matt's argument attempting to
refute RSI was extensively discussed on this list a few months ago.

In my view, I refuted his argument pretty clearly, although he does not
agree.

His mathematics is correct, but it seemed to me irrelevant to real-life RSI
for two reasons:

a) assuming a system isolated from the environment, which won't actually be
the case

b) using an intelligence measure focused solely on description length rather
than incorporating runtime
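
To make (b) concrete, here is a sketch of the distinction (my gloss, using
standard algorithmic-information notation rather than anything from Matt's
paper): a pure description-length measure like Kolmogorov complexity charges
only for program size, while Levin's Kt also charges for runtime:

```latex
K(x)  = \min_p \{\, |p| \;:\; U(p) = x \,\}
        % description length only: size of the shortest program
Kt(x) = \min_p \{\, |p| + \log t(p) \;:\; U(p) = x \,\}
        % Levin complexity: size plus the log of the running time t(p)
```

An agent that is compact but astronomically slow can look highly intelligent
under the first kind of measure and not under the second, which is the gap
I'm pointing at.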

ben g

On Wed, Nov 19, 2008 at 10:21 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:
>
> > On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney
> > <[EMAIL PROTECTED]> wrote:
> > > Seed AI is a myth.
> > > http://www.mattmahoney.net/agi2.html (section 2).
> >
> > (I'm assuming you meant the section "5.1.
> > Recursive Self Improvement")
>
> That too, but mainly in the argument for the singularity:
>
> "If humans can produce smarter than human AI, then so can they, and faster"
>
> I am questioning the antecedent, not the consequent.
>
> RSI is not a matter of an agent with IQ of 180 creating an agent with an IQ
> of 190. Individual humans can't produce much of anything beyond spears
> and clubs without the global economy in which we live. To count as self
> improvement, the global economy has to produce a smarter global economy.
> This is already happening.
>
> My paper on RSI referenced in section 5.1 (and submitted to JAGI) only
> applies to systems without external input. It would apply to the unlikely
> scenario of a program that could understand its own source code and rewrite
> itself until it achieved vast intelligence while being kept in isolation for
> safety reasons. This scenario often came up on the SL4 list, where it was
> referred to as "AI boxing." It was argued that a superhuman AI could easily
> trick its
> relatively stupid human guards into releasing it, and there were some
> experiments where people played the role of the AI and proved just that,
> even without vastly superior intelligence.
>
> I think that the boxed AI approach has by now been discredited as
> impractical to develop, for reasons independent of both its inherent danger
> and my proof that it is impossible. All of the serious projects in AI are taking
> place in open environments, often with data collected from the internet, for
> simple reasons of expediency. My argument against seed AI is in this type of
> environment. It is extremely expensive to produce a better global economy.
> The current economy is worth about US$ 1 quadrillion. No small group is
> going to control any significant part of it.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein


