----- Original Message -----
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Friday, October 20, 2006 10:14:09 AM
Subject: Re: [agi] SOTA

>"We have been searching for decades to find shortcuts to fit our 
>machines"?  When you send a child into her bedroom to search for a 
>missing library book, she will often come back 15 seconds after entering 
>the room with the statement "I searched for it and it's not there." 
>Drawing definite conclusions is about equally reliable, in both these cases.

If you have figured out how to implement AI on a PC, please share it
with us.  Until then, you will need a more convincing argument that we
aren't limited by hardware.


A lot of people smarter than you or me have been working on this problem for a 
lot longer than 15 seconds.  James first proposed association models of thought 
in 1890, about 90 years before connectionist neural models were popular.  Hebb 
proposed a model of classical conditioning in which memory is stored in the 
synapse in 1949, decades before the phenomenon was actually observed in living 
organisms.  By the early 1960s we had programs that could answer natural 
language queries (the 1959 BASEBALL program), translate Russian to English, 
prove theorems in geometry, solve arithmetic word problems, and recognize 
handwritten digits.

It is not that we can't come up with the right algorithms.  It's that we don't 
have the computing power to implement them.  The most successful AI 
applications today, like Google, require vast computing power.

>If the brain used its hardware in such a way that (say) a million 
>neurons were  required to implement a function that, on a computer, 
>required a few hundred gates, your comparisons would be meaningless.

I doubt the brain is that inefficient.  There are lower animals that crawl with 
just a few hundred neurons.  In higher animals, neural processing is 
metabolically expensive, so there is evolutionary pressure to compute 
efficiently.  A substantial fraction of the energy you burn at rest (roughly 
20%) is used by your brain.  Humans had to evolve larger 
bodies than other primates to support our larger brains.  In most neural 
models, it takes only one neuron to implement a logic gate and only one synapse 
to store a bit of memory.
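
For illustration, here is a tiny Python sketch of that idea: a single 
threshold ("McCulloch-Pitts" style) neuron computing AND or OR.  The weights 
and thresholds are arbitrary illustrative choices, not a claim about actual 
biological parameters.

    # A single threshold neuron: fire (1) if the weighted input sum
    # reaches the threshold, otherwise stay silent (0).
    def neuron(inputs, weights, threshold):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    def AND(a, b):
        return neuron([a, b], [1, 1], threshold=2)  # fires only on (1, 1)

    def OR(a, b):
        return neuron([a, b], [1, 1], threshold=1)  # fires on any active input

    assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
    assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]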

>It used to be a standing joke in AI that researchers would claim there 
>was nothing wrong with their basic approach, they just needed more 
>computing power to make it work.  That was two decades ago:  has this 
>lesson been forgotten already?

I don't see why this should not still be true.  The problem is that we still do 
not know how much computing power is needed.  There is no good estimate of the 
number of synapses in the human brain; we only know it is probably between 
10^12 and 10^15, and we aren't even sure of that.  So when AI is solved, it 
will probably be a surprise.
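
To put numbers on that uncertainty, here is a back-of-envelope Python sketch, 
assuming (as in the neural models above) roughly one bit of memory per 
synapse.  The figures only illustrate the range, not how the brain actually 
stores information.

    # Memory needed at one bit per synapse, across the quoted range of
    # synapse-count estimates (10^12 to 10^15).
    for synapses in (1e12, 1e15):
        bits = synapses                # assume one bit per synapse
        terabytes = bits / 8 / 1e12    # 8 bits per byte, 10^12 bytes per TB
        print(f"{synapses:.0e} synapses -> {terabytes:g} TB")
    # Prints roughly 0.125 TB for 10^12 synapses and 125 TB for 10^15:
    # a factor-of-1000 spread in the hardware requirement.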


-- Matt Mahoney, [EMAIL PROTECTED]

