I find that I agree with nearly all of Loosemore's comments in his reply...

I certainly agree with Pei that, in terms of spreading the AGI meme
among researchers in academia and industry, focusing on the
Singularity aspect is not good marketing.

And, as a matter of pragmatic time-management, I am spending most of
my "AGI R&D time" working on actually getting to the point of
achieving advanced artificial cognition, rather than thinking about
how to make an advanced AGI yet more advanced.  (Though I do agree
with Eliezer Yudkowsky and others that it is important to think about
the ethics of advanced AGIs now, in advance of constructing them; and
that one wants to think very deeply before creating an AGI that has
significant potential to rapidly accelerate its own intelligence
beyond the human level.)

But, all these issues aside, I am close to certain that once we have a
near-human-level AGI, then -- if we choose to effect a transition to
superhuman-level AI -- it won't be a huge step to do so.

And, I am close to certain that once we have a superhuman-level AGI, a
host of other technologies like strong nanotech, genetic engineering,
quantum computing, and so on will follow.

Of course, this is all speculation and plenty of unknown things could
go wrong.  But, to me, the logic in favor of the above conclusions
seems pretty solid.

-- Ben G

On 4/15/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Eric B. Ramsay wrote:
> There is an easy assumption of most writers on this board that once the
> AGI exists, its route to becoming a singularity is a sure thing. Why is
> that? In humans there is a wide range of "smartness" in the population.
> People face intellectual thresholds that they cannot cross because they
> just do not have enough of this smartness thing. Although as a
> physicist I understand General Relativity, I really doubt that, if it had
> been left up to me, it would ever have been discovered - no matter
> how much time I was given. Do neuroscientists know where this talent
> difference comes from in terms of brain structure? Where in the designs
> for other AGIs (Ben's for example) is the smartness of the AGI designed
> in? I can see how an awareness may bubble up from a design but this
> doesn't mean a system smart enough to move itself towards being a
> singularity. Even if you feed the system all the information in the
> world, it would know a lot but not be any smarter or even know how to
> make itself smarter. How many years of training will we give a brand new
> AGI before we decide it's retarded?

Eric,

I am going to address your question, as well as Pei's response that
there should not really be a direct relationship between AGI and the
Singularity.

In the course of building an AGI, we (the designers of the AGI) will
have to understand a great deal about what makes an intelligence tick.
By the time we get anything working at all, we will know a lot more
about the workings of intelligence than we do now.

Now, our first attempts to build a full intelligence will very probably
result in many test systems that have a "low IQ" -- systems that are not
capable of being as smart as their designers.

If we were standing in front of a human with that kind of low IQ, we
would face a long, hard job (and in some cases, an impossible job) to
improve their intelligence.  But that is most emphatically not the case
with a low-IQ AGI prototype.  At the very least, we would be able to
inspect the system during actual thinking episodes, in order to get
clues about what goes right and what goes wrong.

So, combining the knowledge we will have acquired during the design
phase with the vast amount of performance data available during
the prototype phase, there are ample opportunities for us to improve the
design.  Specifically, we will try to find out what ingredients are
needed to make the system extremely creative.  (As well as extremely
balanced and friendly, of course).

By this means, I believe there would be no substantial obstacles to our
getting the system up to the average human level of performance.  I
cannot guarantee this, of course, but there are no in-principle reasons
why not.  In fact, there are no reasons why we should not be able to get
it up to a superhuman level of performance just by our own R&D efforts
(some people seem to think that there is something inherently impossible
about humans being able to design something smarter than themselves, but
that idea is really just science-fiction hearsay, not grounded in any
real limitations).

Okay, so if we assume that we can build a roughly-human-level
intelligence, what next?  The next phase is again very different to the
case of having a human genius hanging around.  [Aside.  By 'genius' I
just mean 'very bright compared with average' - I don't mean 'person
with magically superhuman powers of intelligence and creativity'].  This
system will be capable of being augmented in a number of ways that are
simply not possible with humans.  Advances in physical technology alone
promise the uploading of the original system onto faster hardware ... so
even if we and it NEVER did another stroke of work to improve its
intelligence, we might find that it would get faster every time an
electronic hardware upgrade became available.  After a few years, it
might be able to operate a thousand times faster than humans purely
because of this factor.
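
As a rough back-of-envelope illustration of how that hardware factor
compounds (the doubling period below is purely an assumed parameter for
illustration, not a claim about real hardware roadmaps):

    # Sketch: speed multiple gained by re-hosting the same AGI on each new
    # generation of hardware.  The doubling period is assumed, not known.
    def speed_multiple(years_elapsed, doubling_period_years=1.5):
        # Speed relative to the original hardware, assuming raw
        # performance doubles every doubling_period_years.
        return 2 ** (years_elapsed / doubling_period_years)

    for years in (3, 5, 10, 15):
        print(f"after {years:2d} years: ~{speed_multiple(years):,.0f}x")

With that assumed 1.5-year doubling, the multiple works out to roughly 4x,
10x, 100x and 1000x at those points; a shorter doubling period reaches
1000x correspondingly sooner.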

Second factor:  Duplication.  The original AGI (with full adult
intelligence) could be duplicated in such a way that, for every genius
machine we produce, we could build a thousand copies and get them
(persuade them) to work together as a team.  That is significant:  human
geniuses are rare, so what would happen if we could take an adult
Einstein and quickly make a thousand similar brains?  Never possible
with humans:  entirely feasible with a smart AGI.

Third factor:  Communication bandwidth.  This huge team of genius AGIs
would be able to talk to each other at rates that we can hardly even
imagine.  Human teams tend to suffer from problems when they become too
large:  some of those problems could be overcome because the AGI team
would all (effectively) be in 'telepathic' contact with one another ...
able to exchange ideas and inspiration without having to go through
managers and committee meetings.  Result:  the AGI team of a thousand
geniuses would be able to work at a thousand times (or whatever) the speed of
a comparable human team.

If you combine all these factors, and even if you allow for the fact
that I have pulled the 1000x numbers out of a hat just for illustration
purposes, it seems quite likely that AGI development efforts would
rapidly lead to a situation in which the state of the art in AGI design
was improving *very* much more quickly than it would with only humans in
the loop.

Finally, I would point out that all we need to do to achieve the
Singularity is to get AGI systems that can operate at about 1000x the
speed of humans, with no qualitative improvements over the 'bright'
level of human intelligence.  With such systems available, and with
facilities to duplicate them, we could easily find ourselves with an
amount of inventiveness that was (a) greater in quantity of brain-units
than all the human scientists and engineers now on the planet,
and (b) working at a speed that would enable them to produce in one year
what that number of human scientists/engineers would have produced in
the next thousand years.
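
To make the arithmetic explicit, here is a rough sketch in which every
number is an assumed placeholder, in the same out-of-a-hat spirit as the
figures above:

    # Sketch of the combined factors: speed, duplication, coordination.
    # All three numbers are assumed placeholders, not predictions.
    speed_multiple = 1000   # assumed per-AGI speed advantage over a human
    team_size = 1000        # assumed number of duplicated AGI researchers
    coordination = 1.0      # assumed fraction of ideal teamwork retained

    # Human-researcher-years of output produced per calendar year:
    output = speed_multiple * team_size * coordination
    print(f"~{output:,.0f} researcher-years per calendar year")
    # With these placeholders: about a million researcher-years per year,
    # i.e. a thousand-strong team covering roughly a thousand years of the
    # equivalent human team's work annually.

The particular numbers do not matter; the point is that the factors
multiply.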

Somewhere in that next thousand years is a full, safe form of nanotech.

THAT combination is the Singularity.

The path that I have described above does not contain any outrageous
assumptions, just moderate advances and hard work.


Coming back now to Pei's point:  yes, I thoroughly agree that when
people talk about the Singularity as being a result of just a gradual
quickening of the pace of technology, I shake my head and wonder what
they must be thinking:  the *general* curves of technological
improvement could be exponential all the way up to the point where
they hit some limitations that completely invalidate the projections.
There is no earthly reason - distinct from the reasons I have given,
purely derived from projected progress in AGI research - why the general
improvement in technology should go on ramping up and send the world
into the Singularity.

I find that 'look at the curves!' argument unconvincing.

But I find the argument from AGI progress thoroughly compelling.


There are various qualifications that I know I should have made, but I
will leave them for another time (persuading the AGIs to do what we wish
is a big issue, and persuading today's AI researchers to wake up and
understand that they are beating their head against a glass ceiling
caused by the Complex Systems Problem is an even bigger issue).




Richard Loosemore