On Wed, Oct 03, 2007 at 02:09:05PM -0400, Richard Loosemore wrote:
> 
> RSI is only what happens after you get an AGI up to the human level:  it 
> could then be used [sic] to build a more intelligent version of itself, 
> and so on up to some unknown plateau.  That plateau is often referred to 
> as "superintelligence".

Perhaps I was insufficiently clear in an earlier email.  In that
email, I sketched what RSI would look like for humans: I suggested
that, with appropriate neurosurgery, I could increase the capacity of
my short-term (working) memory, and I sketched what the effects on my
thought patterns might be.

I proposed that this kind of neurosurgery was "RSI for humans".

Now, increasing the capacity of short-term memory in humans is
impossible without literally growing the brain, so that seems like
a natural place to "stand pat": we're sort of stuck here.

However, there is no such limitation for an AGI. If humans could be
made vastly smarter simply by enlarging their short-term memory,
then it seems an AGI could be made vastly smarter simply by
enlarging its short-term memory. And for an AGI this can be done at
compile time, or even at run time, by tweaking a few parameters.
It does not require some kind of magic re-engineering of its own
algorithms; it just requires installing more RAM, and maybe a
faster CPU. In other words, the lack of this kind of RSI is not a
strong barrier for an AGI, in the way that it is for humans.
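
To make the "tweaking a few parameters" point concrete, here is a toy
sketch in Python. It is purely illustrative and assumes a working
memory modelled as a bounded buffer; the names ToyAgent,
resize_working_memory, and the size-7 default are my own inventions,
not code from any actual AGI system. The point is only that enlarging
working memory is a constructor argument or a run-time call, with no
change at all to the reasoning code.

    from collections import deque

    class ToyAgent:
        """Hypothetical agent whose working memory is a bounded buffer."""

        def __init__(self, working_memory_size=7):
            # The capacity bound is the only "architectural" commitment here.
            self.working_memory = deque(maxlen=working_memory_size)

        def perceive(self, item):
            # Oldest items silently fall out once the buffer is full.
            self.working_memory.append(item)

        def resize_working_memory(self, new_size):
            # The "run-time tweak": same reasoning code, wider buffer.
            self.working_memory = deque(self.working_memory, maxlen=new_size)

    agent = ToyAgent(working_memory_size=7)    # roughly human-like capacity
    for i in range(20):
        agent.perceive("chunk-%d" % i)
    print(len(agent.working_memory))           # 7: older chunks were dropped

    agent.resize_working_memory(1000)          # the parameter tweak
    for i in range(20, 1020):
        agent.perceive("chunk-%d" % i)
    print(len(agent.working_memory))           # 1000: vastly more held at once

Nothing about the agent's algorithms changed between the two runs;
only the buffer size did.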

--linas

