--- Alan Grimes <[EMAIL PROTECTED]> wrote:

> om
> 
> Today, I'm going to attempt to present an argument in favor of a theory
> that has resulted from my studies relating to AI. While this is one of
> the only things I have to show for my time spent on AI, I am reasonably
> confident in its validity and hope to show why that is the case here.
> 
> Unfortunately, the implications of this theory are quite dramatic, making
> the saying "extraordinary claims require extraordinary proof" central to
> the meditations leading to this posting. I will take this theory and
> then apply it to recent news articles and make the even bolder claim
> that AI has been SOLVED, and that the only thing that remains to be done
> is to create a complete AI agent from the available components.

When can we expect a demo?

> But humans incapable of symbolic thought, most
> notably autistic patients, are not really intelligent.

People with autism lack the ability to recognize faces, which leads to delayed
language and social development during childhood.  However, they do not lack
symbolic thought.  From http://en.wikipedia.org/wiki/Autism:

"In a pair of studies, high-functioning autistic children aged 8–15 performed
equally well, and adults better, than individually matched controls at basic
language tasks like vocabulary and spelling. Both autistic groups performed
worse than controls at complex language tasks like figurative language,
comprehension, and making inferences. As people are often sized up initially
from their basic language skills, these studies suggest that people speaking
to autistic individuals are more likely to overestimate what their audience
comprehends.[28]"

> We can sum the total of this new information over all perceptions from
> the first onwards, and find that it is on the order of 1 + log(X), or
> simply O(log X). If we were to present the AI with random
> information and force it to remember all of it, the *WORST* case for
> the AI is O(N). For constant input, the AI will remain static, at O(1).
> (These are space complexities.)

The relationship is a little more complex.  I believe it has to do with human
brain size, which stops increasing around adolescence.  Vocabulary development
during childhood is fairly constant at about 5000 words per year.  I had
looked at the relationship between training set size and information content
as part of my original dissertation proposal, which suggests a space
complexity more like O(N/log N).  http://cs.fit.edu/~mmahoney/dissertation/
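As a rough illustration only (the constants are arbitrary and the functions are just the asymptotic forms under discussion, not anything fitted to data), the three growth rates can be compared numerically:

```python
import math

# Hypothetical comparison of the three space-complexity claims above.
# Only the growth rates matter; the constants are arbitrary.

def log_growth(n):
    """Grimes's claim: stored knowledge grows like O(log N)."""
    return 1 + math.log(n)

def n_over_log_n(n):
    """Estimate suggested by the dissertation proposal: O(N / log N)."""
    return n / math.log(n)

def linear(n):
    """Worst case, rote memorization of random input: O(N)."""
    return n

for n in (10**3, 10**6, 10**9):
    print(f"N={n:>13,}  log N: {log_growth(n):8.1f}  "
          f"N/log N: {n_over_log_n(n):16,.0f}  N: {n:>13,}")
```

The point of the comparison: O(N/log N) is only mildly sublinear, so it sits far closer to rote memorization than to the O(log N) claim.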

> This discovery can be used as a razor for evaluating AI projects. For
> example, anyone demanding a supercomputer to run their AI, obviously is
> barking up the wrong tree. Similarly, anyone trying to simulate a
> billion-node neural network is effectively praying for pixie dust to
> emerge from the machine and rescue them from their own lack of
> understanding. We have others who have their heads rammed up their own
> friendly asses but they aren't worth mentioning. Truly, when one
> finishes this massacre, the field of AI is left decimated and nearly
> extinct. -- nearly...

I realize there is a large gap between the algorithmic complexity of language
(10^9 bits) and the number of synapses in the human brain (about 10^15).  I
don't know why.  Some guesses:
- The brain does a lot more than process symbolic language.
- The brain has a lot of redundancy for fault tolerance.
- The brain uses inefficient brute-force algorithms for many problems where
more efficient solutions exist, such as pattern recognition, mentally rotating
3D objects, or playing chess.  Perhaps AI has failed because there are still a
lot of things that the brain does for which there is no shortcut.
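The size of that gap is simple arithmetic; a quick sketch (using only the two order-of-magnitude figures quoted above):

```python
import math

# Rough arithmetic behind the gap discussed above: the algorithmic
# complexity of language (~10^9 bits) versus the number of synapses
# in the human brain (~10^15).
language_bits = 1e9
synapses = 1e15

ratio = synapses / language_bits
print(f"synapses per bit of language model: {ratio:.0e}")
print(f"orders of magnitude unexplained:    {math.log10(ratio):.0f}")
```

Any explanation (non-linguistic processing, redundancy, brute-force algorithms) has to account for roughly six orders of magnitude.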

If Turing and Landauer are right, then a PC has enough computational power to
pass the Turing test.  What we lack is training data, which can only come from
the experience of growing up in a human body.

> On the other hand, when you use this razor to evaluate projects which
> ostensibly have nothing to do with AI, things become extremely interesting.
> 
> http://techon.nikkeibp.co.jp/english/NEWS_EN/20070725/136751/

The article does not say whether, if I click on a picture of a dog running
across a lawn, the system will retrieve pictures of dogs or pictures of brown
objects on a green background.



-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email