On 10/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

> What's that got to do with superAGIs? This: the whole idea of a superAGI
> "taking off" rests on the assumption that the problems we face in life are
> soluble if only we, or superAGIs, have more brainpower.

<snip>


> That doesn't mean that a superior brain wouldn't have advantages, but rather
> that there would be considerable limits to its powers. Even a vast brain will
> have trouble dealing with ill-defined, effectively infinite problems. (And
> even mighty America, with all its collective natural and artificial
> brainpower, still has problems dealing with dumb peasants.)
>
> What is rather disappointing to me, given that there is an awful lot of
> mathematical brainpower around here, is that there seems to be no interest
> in giving mathematical expression to the ideas I have just outlined.

Mike, you raise a valid point seldom appreciated in this forum, but
there's plenty of interest elsewhere, for example at the Santa Fe Institute.

Another way of looking at this is that however vast the cognitive
capacities of an advanced machine intelligence, their incremental
relevance would diminish rapidly within the much vaster space of
possibilities: the system would be starved for sources of *relevant*
novel interaction, since what it can usefully absorb at any moment is
bounded by its current knowledge plus the latent nth-order connections
within that knowledge.
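
As a gesture toward the mathematical expression Mike asks for, here is
a minimal toy model of that bottleneck. It is my own illustrative
sketch, not an established result; the function name, the logarithmic
form of the novelty supply, and all parameters are arbitrary
assumptions. The premise is only that usable novelty per step grows
sublinearly with existing knowledge, so capacity beyond that supply
sits idle:

    import math

    # Toy model (illustrative assumptions only): per step, the usable
    # ("relevant") novelty grows just sublinearly with what is already
    # known, while cognitive capacity beyond that supply goes unused.
    def knowledge_after(capacity, steps=100):
        k = 1.0
        for _ in range(steps):
            relevant = math.log(1.0 + k)   # bounded supply of usable novelty
            k += min(capacity, relevant)   # growth is input-limited
        return k

    # A hundredfold capacity difference yields identical trajectories,
    # because the bottleneck is relevant input, not brainpower:
    print(knowledge_after(10.0))
    print(knowledge_after(1000.0))

On these assumptions the growth rate is set entirely by how fast the
frontier of relevant novelty expands, which is the "rapidly diminishing
incremental relevance" described above.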

Given this practical ceiling, it is reasonable to consider the
dynamics of an asymmetric system consisting of a highly advanced
singleton and many diverse cooperating agents.
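
Continuing the same toy model (again only a sketch, with the per-agent
contribution an assumption I have made up for illustration): if each of
N diverse cooperating agents explores a different region and feeds the
singleton an independent increment of relevant novelty, the input
ceiling itself rises with N, which is what makes the asymmetric
arrangement interesting:

    import math

    # Same toy model, extended (assumed parameters): each cooperating
    # agent contributes an independent stream of relevant novelty,
    # raising the singleton's input ceiling.
    def singleton_with_agents(capacity, n_agents, agent_novelty=1.0, steps=100):
        k = 1.0
        for _ in range(steps):
            relevant = math.log(1.0 + k) + n_agents * agent_novelty
            k += min(capacity, relevant)
        return k

    print(singleton_with_agents(1000.0, 0))    # singleton alone
    print(singleton_with_agents(1000.0, 20))   # plus 20 diverse agents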

- Jef
