om

Today, I'm going to attempt to present an argument in favor of a
theory that has resulted from my studies relating to AI. While this
is one of the only things I have to show for my time spent on AI, I
am reasonably confident in its validity and hope to show why that is
the case here.

Unfortunately, the implications of this theory are quite dramatic,
making the saying "extraordinary claims require extraordinary proof"
central to the meditations leading to this posting. I will take this
theory, apply it to recent news articles, and make the even bolder
claim that AI has been SOLVED, and that the only thing that remains
to be done is to create a complete AI agent from the available
components.

om

It can be agreed upon that for something to be intelligent, it must
be able to process information symbolically; it must be able to
speak. This may be confusing, because computers are already thought
of as symbolic machines which process formal symbolic languages. An
AI system, however, processes *informal* symbolic languages. While it
can be argued that this is a requirement for all intelligent systems,
it is inescapably true when one talks about superhuman artificial
intelligence, because without it, such a system would necessarily be
sub-human. It should also be noted that normal humans, through
meditation techniques, can achieve an asymbolic state of
consciousness; this can be quite enjoyable, especially while driving.
But humans whose capacity for symbolic thought is severely impaired
are correspondingly limited in general intelligence.

Having established the necessity of symbolic thought, it can then be
inferred that an AI system is a system which must support symbolic
thought: that is, the ability to see a duck and call it a duck. A
duck can be photographed in a practically infinite number of ways;
cross the number of angles with the number of distances, the number
of lighting environments, the number of backgrounds, and the number
of species of duck, and you get many, many patterns which all satisfy
the category "duck". Any system which can achieve reasonably good
performance at this task must have a concept of "duck" which is
separate from any specific instance.

When we look at the information flow through an AI which thinks
symbolically, we find that it is not even useful to record every
pixel of every frame it has ever been shown. In fact, such designs
invariably lead to a combinatorial explosion which is computationally
intractable. Moreover, such information is not even useful, because
it will never perfectly match any future perception. Instead, the AI
must always be searching for information which can be captured by
symbols (though not necessarily words!). Each new perception will
almost certainly contain many elements which have previously been
assigned symbols by the AI, and some that it hasn't seen before which
may or may not be useful to remember. A sketch of this split appears
below.
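
To make the old/new split concrete, here is a minimal sketch in
Python. Everything in it (the SymbolLibrary class, modeling a
perception as a set of feature strings) is my own illustrative
assumption, not part of the theory itself:

    # Minimal sketch: split each perception into old information
    # (matching existing symbols) and new information (kept for
    # further processing). All names here are hypothetical.

    class SymbolLibrary:
        def __init__(self):
            self.symbols = set()  # features already assigned symbols

        def parse(self, perception):
            old = perception & self.symbols  # satisfies existing symbols
            new = perception - self.symbols  # requires further processing
            self.symbols |= new              # remember the new elements
            return old, new

    library = SymbolLibrary()
    old, new = library.parse({"beak", "feathers", "webbed feet"})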

A tabula rasa symbolic intelligence, such as a human baby, for the
most part cannot assign symbols to things in its environment, because
it doesn't have any symbols yet. Instead, it must isolate elements
from each perception and attempt to use them to explain the next
perception. Through trial and error, it will acquire a set of symbols
which "just work". By the age of four or so, a child can deal with,
intellectually at least, just about any environment. The bottom line,
though, is that because we don't have anything else, our entire
intellect is based on relating everything we encounter to things we
have previously encountered.

Okay, we have a system which extracts symbolic information from
perceptions and analyzes new perceptions with previously acquired
symbols. A somewhat less important point is that symbols don't
necessarily need to relate to raw perceptions; they can also relate
to other symbols, or to a mixture of symbols and perceptions, in a
somewhat hierarchical manner.
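
As a small sketch of that hierarchy (again, the names and structure
are my own illustrative assumptions): a symbol can ground out in raw
perceptual features, be composed of other symbols, or both.

    # Hypothetical illustration: a symbol grounds out in perceptual
    # features, is defined over other symbols, or mixes the two.

    from dataclasses import dataclass, field

    @dataclass
    class Symbol:
        name: str
        features: set = field(default_factory=set)   # raw perception
        parts: list = field(default_factory=list)    # other symbols

    beak = Symbol("beak", features={"hard", "pointed"})
    feathers = Symbol("feathers", features={"soft", "layered"})
    duck = Symbol("duck", parts=[beak, feathers])    # purely symbolic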

These observations may seem to be a bit simplistic, but they lead to
some startling conclusions. Once again, whenever a perception is
presented to an AI, it is parsed and split up into old information
(satisfying existing symbols) and new information (not satisfying any
symbols and therefore requiring further processing). Over time, as an
AI builds up an ever increasing library of symbols, it becomes ever
more efficient at analyzing new scenes. There is extensive anatomical
and physiological evidence to support this. We can also conclude that
there exists a growth function which determines the approximate
quantity of new information in any given perception, and that it is
on the order of 1/x, where x is the number of perceptions seen so
far.
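
A toy simulation shows the shape of that decay. Note that the
Zipf-like world model below is entirely my own assumption, chosen
only to make the qualitative point visible:

    # Toy simulation: as the symbol library grows, each perception
    # contributes fewer never-before-seen features.

    import random

    world = list(range(10_000))                  # possible features
    weights = [1 / (r + 1) for r in world]       # Zipf-like frequencies

    library = set()
    for n in range(1, 1001):
        perception = set(random.choices(world, weights=weights, k=20))
        new = perception - library               # not yet symbolized
        library |= new
        if n % 200 == 0:
            print(f"perception {n}: {len(new)} new, "
                  f"library size {len(library)}")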

We can sum the total of this new information over all perceptions
from the first onwards, and find that it is on the order of
1 + log(N); that is, the memory requirement grows as O(log N). If we
were to present the AI with random information and force it to
remember all of it, we get the *WORST* case for AI, O(N). For
constant input, the AI will remain static at O(1). (These are space
complexities.)
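
Writing out the summation step (this is just the standard
harmonic-series bound, added here for completeness):

    H_N = \sum_{k=1}^{N} \frac{1}{k}
        = 1 + \frac{1}{2} + \cdots + \frac{1}{N}
        \approx 1 + \ln N
        = O(\log N)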

Okay, what about the algorithmic complexity? Let's give the AI a
perception; it breaks the perception down into symbols, and it must
now match the incoming symbols against its existing library. We know
its existing library is on the order of log N for complex, non-random
input. If we were to do a basic linear search on that, the time
complexity would be M (the number of incoming symbols) times the
library size, i.e. O(M log N). Now, if we had a way to sort this
library, we could do even better: the search time would be
O(M log log N), that is, M times the logarithm of the logarithm of N.
If the information is only partially orderable, then the performance
will be somewhere in between those upper and lower bounds.
Alternatively, if memory weren't an issue, we could attempt a hashing
algorithm and achieve a performance close to O(1) per lookup, which
is approximately what the human brain does, hence its renowned
performance in this area.
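
The three lookup strategies, sketched in Python (the symbol names are
made up; the point is only the complexity of each approach):

    # Linear scan, binary search over a sorted library, and hashing.

    import bisect

    library = ["beak", "duck", "feathers", "pond", "webbed feet"]

    def linear_lookup(symbol):      # O(L) in the library size L
        return symbol in library

    def binary_lookup(symbol):      # O(log L); with L ~ log N this
        i = bisect.bisect_left(library, symbol)   # is the log log N case
        return i < len(library) and library[i] == symbol

    hashed = set(library)           # O(1) expected per lookup
    def hash_lookup(symbol):
        return symbol in hashed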

My final claim for this part of the meditation is that every possible
strong AI architecture will match these characteristics. We could
attach a TiVo to the AI and make a few other compromises and see the
AI's performance approximate N + log(N/2), but the point still
stands: every strong AI will be a symbolic AI, otherwise it wouldn't
be able to think in any useful manner; every symbolic AI will encode
new perceptions in terms of symbols derived from previous
perceptions; and the average quantity of new information in each new
perception will fall off as the reciprocal of the number of previous
perceptions. So, therefore, every possible implementation of a
successful strong AI will also have the property of logarithmic
memory growth with respect to the number of perceptions. The scope of
algorithmic complexity also ranges from slow yet practical to ideal
(O(1)).

An argument that motor-modalities are equivalent to sensory modalities
is beyond the scope of what I have the time or space to discuss here.

This discovery can be used as a razor for evaluating AI projects. For
example, anyone demanding a supercomputer to run their AI is
obviously barking up the wrong tree. Similarly, anyone trying to
simulate a billion-node neural network is effectively praying for
pixie dust to emerge from the machine and rescue them from their own
lack of understanding. We have others who have their heads rammed up
their own friendly asses, but they aren't worth mentioning. Truly,
when one finishes this massacre, the field of AI is left decimated
and nearly extinct. Nearly...

On the other hand, when you use this razor to evaluate projects which
ostensibly have nothing to do with AI, things become extremely interesting.

http://techon.nikkeibp.co.jp/english/NEWS_EN/20070725/136751/

om!

I'm out of thyme for the nite; I desperately need my sleap (where I
leap into bed and start snoring). It took me all weekend to complete
my meditations in preparation for writing this...

I plan to follow this up with a discussion of recent developments in
AI and the crisis the singularitarian community has sleep-walked its
way into.

-- 
Opera: Sing it loud! :o(  )>-<
