In response to Bob Mottram’s Thu 10/18/2007 3:38 AM post.

With regard to the fact that many people who promised to produce AI in the
past have failed -- I repeat what I have said on this list many times --
you can’t do the type of computation the human brain does without hardware
within at least several orders of magnitude of the computational,
representational, and (importantly) interconnect capacity of the human
brain.  And to the best of my knowledge, most AI projects until very
recently have been run on hardware with roughly one 100 millionth to one
100,000th of that capacity.

So it is no surprise they failed.  What is surprising is that they were so
blind to the importance of hardware.

But the hardware barrier to the creation of human-level AGI is being
removed.  If we design hardware that is better optimized for AI, even at
today’s emerging 45nm semiconductor node, we could manufacture roughly
brain-level hardware at mainframe prices or below.  And people in the
semiconductor industry are confident we can at least get to the 22nm node
by 2012 to 2014, which should cut prices roughly fourfold (or more, if we
go to 450mm wafers).
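
As a back-of-the-envelope check on that fourfold figure (my own rough
arithmetic, not an industry roadmap number): transistor density scales
roughly with the inverse square of the feature size, so a 45nm-to-22nm
shrink gives about a 4x density gain, and a comparable cut in cost per
device if wafer costs stay roughly constant.

    # Rough scaling estimate; the constant-wafer-cost assumption is
    # illustrative, not a semiconductor-industry figure.
    old_node_nm = 45.0
    new_node_nm = 22.0
    density_gain = (old_node_nm / new_node_nm) ** 2
    print(f"approx. density gain: {density_gain:.1f}x")  # ~4.2x
    # At roughly constant wafer cost, cost per transistor falls by the
    # same factor -- consistent with the fourfold price cut above.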

Yes, there are reasons for caution.  There could be a nuclear war.  There
could be a major economic collapse.  The earth could be hit by an
asteroid.  One can’t be 100% sure of anything.

More specific to the particular problem, we don't know how big human-level
world knowledge is.  Most predictions suggest it is well within the range
that we can represent and compute over with hardware that could be
profitably sold for less than several hundred thousand to several million
dollars in seven to ten years, but we aren’t sure.  We don’t know how hard
it will be to focus attention and use contextual influences in ways that
yield human-level results when dealing with world knowledge.  We don’t
know what the optimal parameters are for thresholding and for determining
the amount of spreading activation from nodes at differing levels of
activation (and/or consciousness).  There is a lot of such tuning that
will be required to make such a system learn and think as well as a
human, and we don’t know how big a problem such tuning will be.  There
are a lot of architectural choices to be made in each of many different
aspects of the system, and getting them all to work together well could
be a major headache.  There is always the possibility of problems no one
has even imagined.
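
To make the tuning problem concrete, here is a minimal sketch of
spreading activation over a node network.  The graph, threshold, and
decay values are purely illustrative assumptions of mine, not parameters
from any actual system; the firing threshold and decay factor are exactly
the sort of knobs that would have to be tuned.

    # Minimal spreading-activation sketch.  All values are illustrative.
    def spread_activation(graph, activation, threshold=0.5, decay=0.8,
                          steps=3):
        """graph: dict node -> list of (neighbor, weight) pairs."""
        for _ in range(steps):
            updated = dict(activation)
            for node, level in activation.items():
                if level < threshold:     # only active-enough nodes fire
                    continue
                for neighbor, weight in graph.get(node, []):
                    # pass a decayed, weighted share of activation onward
                    updated[neighbor] = (updated.get(neighbor, 0.0)
                                         + level * weight * decay)
            activation = updated
        return activation

    graph = {"dog": [("animal", 0.9), ("bark", 0.7)],
             "animal": [("alive", 0.8)]}
    print(spread_activation(graph, {"dog": 1.0}))

Set the threshold too low or the decay too high and activation floods the
network; set them the other way and nothing spreads.  That is the kind of
tuning problem described above.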

So, yes, it’s not yet a slam dunk.

But consider the surprisingly high level of intelligence the AI community
has been able to squeeze out of machines with one 100 millionth to one
100,000th the power of the human brain, and then imagine what it could do
with machines 100,000 to 100 million times more powerful computing over
world-knowledge-level representations.

The more enlightened people in the field have some very exciting ideas
about how to use such powerful hardware to create powerful intelligences.
These include exploiting the trend toward automatic learning of
compositional and generalization pattern hierarchies, and of
probabilistic rules of inference between them.  They also include what we
have learned from reinforcement learning, and the explosion in
understanding we are deriving from the human brain, such as theories of
how the brain dynamically focuses and tunes attention.
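
For the reinforcement-learning piece, the core mechanism fits in a few
lines.  This is the standard tabular Q-learning update (textbook
material, not any particular AGI project's design); the learning rate and
discount are the usual tunable parameters.

    # Standard tabular Q-learning update; alpha and gamma are
    # illustrative defaults, not tuned settings.
    def q_update(Q, state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.9):
        """Q: dict mapping (state, action) -> estimated long-run value."""
        best_next = max(Q.get((next_state, a), 0.0) for a in actions)
        old = Q.get((state, action), 0.0)
        # nudge the old estimate toward reward + discounted future value
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    Q = {}
    q_update(Q, "s0", "go", reward=1.0, next_state="s1",
             actions=["go", "stop"])
    print(Q)  # {('s0', 'go'): 0.1}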

Add all these things together and I think it is clear that if a
well-funded AGI initiative gave the money to the right people (not just
spread it throughout academic AI based on seniority or somebody’s buddy
system), stunning strides in the power of artificial intelligence would
be almost certain within 5 to 10 years.

Would we reach human-level intelligence?

Barring major setbacks to technological society, I would guess the
chances that a two-billion-dollar, multi-team, ten-year project that
funded the right people (starting within, say, half a year) could achieve
substantially human-level performance -- in natural-language (NL)
understanding and generation, vision understanding and generation,
general cognition, computer programming skills, and scientific
understanding and creativity -- would be at least 66.6%.  And the chances
would be much higher within the subsequent five to ten years.

But the chance that such a project would create dramatic and extremely
valuable advances in the power of artificial intelligence in all of these
areas in 10 years -- advances that would be worth many times the $2
billion investment -- would be at least 99%.

Remember, even machines that are substantially less intelligent than we
are at the things humans currently do best could be millions of times
faster than us at things computers already do faster or more reliably
than us.  So, for example, producing machines with significant, though
still sub-human, levels of NL or programming capability could be the
basis of extremely valuable commercial products.

Some hype is bull**** and some hype is spreading the truth.  To say that
now is the time to start making rapid strides in AGI is spreading the
truth.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Bob Mottram [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 18, 2007 3:38 AM
To: agi@v2.listbox.com
Subject: Re: [agi] More public awarenesss that AGI is coming fast


Despite these arguments there are good reasons for caution.  When you look
at the history of AI research one thing tends to stand out - some people
never seem to learn of the dangers of hype.  Having been around for a
while I've heard many individuals make a "ten years to SAI" type of
prediction, and ten years later they were proved wrong.

Being optimistic is a good quality, but if hopes are raised too far
inevitably a backlash ensues (investors become frustrated/disillusioned
and funds get withdrawn).  I think people who are seriously interested in
the technology should be more measured in their statements, and be honest
about the degrees of uncertainty involved.




On 17/10/2007, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> THERE IS A REAL DIFFERENCE BETWEEN NOW AND 20 YEARS AGO.
>
> FIRST, WE WILL HAVE ROUGHLY BRAIN-LEVEL HARDWARE AT COMMERCIALLY
> VIABLE PRICES IN A FEW YEARS.
>
> SECOND, MANY OF US HAVE A MUCH MORE DETAILED IDEA OF HOW TO ATTACK
> ALMOST ALL OF THE HARD PROBLEMS IN AGI.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>
>
> -----Original Message-----
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 17, 2007 4:45 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] More public awarenesss that AGI is coming fast
>
>
> Edward W. Porter wrote:
> > In today's KurzweilAI.net mailing list is a link to an article in
> > which British Telecom's futurologist is predicting conscious
> > machines by 2015 and ones brighter than people by 2020.
> >
> > I think these predictions are very reasonable, and the fact that a
> > futurologist for a major company is making this statement to the
> > public
> > in his capacity as an employee of such a major company indicates the
> > extent to which the tide is turning.  As I have said before on this
> > list: "The race has begun."
> >
> > (The article isn't really that valuable in terms of explaining
> > things those on this list have not already heard or thought of, but
> > it is evidence of the changing human collective consciousness on
> > subjects relating to the singularity.  Its link is
> > http://www.computerworld.com.au/index.php/id;1028029695;fp;;fpid;;pf;1
> > )
>
> I think the same guy made the same prediction when I met him at a
> workshop 20 years ago.
>
>
>
> Richard Loosemore
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55002167-f83028
