--- Alan Grimes <[EMAIL PROTECTED]> wrote:

> om
> 
> Today, I'm going to attempt to present an argument in favor of a
> theory that has resulted from my studies relating to AI. While this
> is one of the only things I have to show for my time spent on AI,
> I am reasonably confident in its validity and hope to show why that
> is the case here.
> 
> Unfortunately, the implications of this theory are quite dramatic,
> making the saying "extraordinary claims require extraordinary proof"
> central to the meditations leading to this posting. I will take this
> theory and then apply it to recent news articles and make the even
> bolder claim that AI has been SOLVED, and that the only thing that
> remains to be done is to create a complete AI agent from the
> available components.

This is a *very* strong claim, considering how complex intelligence is
and how much has yet to be fully understood.

> om
> 
> It can be agreed upon that for something to be intelligent, it must
> be able to process information symbolically; it must be able to
> speak. This may be confusing because computers are already thought
> of as symbolic machines which process formal symbolic languages. An
> AI system, however, processes *informal* symbolic languages. While
> it can be argued that this is a requirement for all intelligent
> systems, it is inescapably true when one talks about superhuman
> artificial intelligence because without it, it would necessarily be
> sub-human. It should also be noted that normal humans, through
> meditation techniques, can achieve an asymbolic state of
> consciousness; this can be quite enjoyable, especially while
> driving. But humans incapable of symbolic thought, most notably
> autistic patients, are not really intelligent.
> 
> Having established the necessity of symbolic thought, it can then be
> inferred that an AI system is a system which must support symbolic
> thought, that is, the ability to see a duck and call it a duck. A
> duck can be photographed in a practically infinite number of ways:
> cross the number of angles with the number of distances, the number
> of lighting environments, the number of backgrounds, and the number
> of species of duck, and you get many, many patterns which all
> satisfy the category "duck". Any system which can achieve reasonably
> good performance at this task must have a concept of "duck" which is
> separate from any specific instance.
> 
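
To make the "concept separate from any specific instance" point
concrete, here is a toy sketch of my own (the feature names and the
match threshold are invented for illustration, not taken from Alan's
design): a concept stored as a small set of characteristic features
that many wildly different instances can satisfy.

    # A minimal sketch: a "concept" stored as a set of characteristic
    # features, separate from any one instance of it.

    DUCK = {"bill", "feathers", "webbed feet"}  # the concept "duck"

    def matches(concept, instance_features, threshold=0.8):
        """An instance satisfies the concept if it shows enough of its features."""
        overlap = len(concept & instance_features) / len(concept)
        return overlap >= threshold

    # Wildly different "photographs" of ducks, reduced to extracted features:
    mallard_closeup = {"bill", "feathers", "webbed feet", "green head", "pond"}
    duckling_far = {"bill", "feathers", "webbed feet", "yellow down", "grass"}

    print(matches(DUCK, mallard_closeup))  # True
    print(matches(DUCK, duckling_far))     # True
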
> When we look at the information flow through an AI which thinks
> symbolically, we find that it is not even useful to record every
> pixel of every frame it has ever been shown. In fact, such designs
> invariably lead to a combinatorial explosion which is computationally
> intractable. Moreover, such information is not even useful because it
> will never perfectly match any future perception. Instead, the AI
> must always be searching for information which can be captured by
> symbols (though not necessarily words!). Each new perception will
> almost certainly contain many elements which have previously been
> assigned symbols by the AI, and some that it hasn't seen before,
> which may or may not be useful to remember.
> 
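
If I'm reading this right, the core loop is: match what you can
against existing symbols and keep only the residue. A minimal sketch
of that loop, entirely my own construction (the class and the feature
sets are hypothetical):

    # A minimal sketch: keep a library of symbols and store only the
    # *novel* part of each perception, discarding the raw "pixels".

    class SymbolLibrary:
        def __init__(self):
            self.symbols = set()

        def parse(self, perception):
            """Split a perception (a set of features) into old and new information."""
            old = perception & self.symbols   # already covered by existing symbols
            new = perception - self.symbols   # needs further processing
            self.symbols |= new               # remember only the novelty
            return old, new

    lib = SymbolLibrary()
    print(lib.parse({"bill", "feathers", "pond"}))   # everything is new at first
    print(lib.parse({"bill", "feathers", "grass"}))  # only "grass" is new now
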
> A tabula rasa symbolic intelligence, such as a human
> baby,

The "tabula rasa" hypothesis has been debunked every
which way from Sunday. See
http://207.210.67.162/~striz/docs/tooby-1992-pfc.pdf.
In short, there are many evolutionary adaptations
which are universal to the human species, and yet only
come into play well after childhood. For example, a
baby has no sex drive, and yet we all agree that
puberty is built into the baby from birth.

> for the most part, cannot assign symbols to things in its
> environment because it doesn't have any symbols yet. Instead, it
> must isolate elements from each perception and attempt to use them
> to explain the next perception. Through trial and error, it will
> acquire a set of perceptions which "just work". By the age of four
> or so, a child can deal with, intellectually at least, just about
> any environment. The bottom line, though, is that because we don't
> have anything else,

Our brains are *much* more complex than a simple
concept-abstraction-and-pattern-recognition device.
Such a device, for instance, would be unable to add
two and two, or create a whole new category of objects
it has never seen.

> our entire intellect is based on relating everything we encounter
> to things we have previously encountered.

Then how do you explain the human capacity for
*invention*, for creating things that have never been
previously encountered? If I were suddenly dumped into
the Aperture Science Enrichment Center, I would have
no first-hand experience with jumping through holes in
the wall. Such holes, after all, do not actually
exist. Yet our brains can map out rather complex
patterns of how we would act in situations which have
no real-world analogues.

> Okay, we have a system which extracts symbolic information from
> perceptions, and analyzes new perceptions with previously acquired
> symbols. A somewhat less important point is that symbols don't
> necessarily need to relate to raw perceptions; they can also relate
> to other symbols or a mixture of symbols and perceptions in a
> somewhat-hierarchical manner.
> 
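
The hierarchical point is easy to make concrete. A rough sketch,
again my own invention rather than anything from the post, in which a
symbol either grounds out in perceptual features or is composed of
other symbols:

    # A minimal sketch: a symbol may be grounded directly in perceptual
    # features or composed of other symbols, giving a somewhat-hierarchical
    # structure.

    class Symbol:
        def __init__(self, name, features=(), parts=()):
            self.name = name
            self.features = set(features)  # links to raw perception
            self.parts = list(parts)       # links to other symbols

        def grounded_features(self):
            """Collect every perceptual feature reachable through the hierarchy."""
            out = set(self.features)
            for part in self.parts:
                out |= part.grounded_features()
            return out

    bill = Symbol("bill", features={"flat", "orange"})
    foot = Symbol("webbed foot", features={"webbed", "orange"})
    duck = Symbol("duck", parts=[bill, foot])  # built from other symbols

    print(duck.grounded_features())  # {'flat', 'orange', 'webbed'} (in some order)
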
> These observations may seem to be a bit simplistic, but they lead to
> some startling conclusions. Once again, whenever a perception is
> presented to an AI, it is parsed and split up into old information
> (satisfying existing symbols) and new information (not satisfying
> any symbols and therefore requiring further processing). Over time,
> as an AI builds up an ever-increasing library of symbols, it becomes
> ever more efficient at analyzing new scenes. There is extensive
> anatomical and physiological evidence to support this. We can also
> conclude that there exists a growth function which determines the
> approximate quantity of new information in any given perception,
> which is on the order of 1/x.

This assumes that the pattern-matcher saturates with repetition of
stuff it has seen before. But the universe contains *much more*
information than any human could possibly pattern-match; it's simply
so darn big.

> We can sum the total of this new information over all perceptions
> from the first onwards and find that it is on the order of
> 1 + log(X), or simply O(log X). If we were to present the AI with
> random information and force it to remember all of it, the *WORST*
> case for an AI is O(N). For constant input, the AI will remain
> static at O(1). (These are space complexities.)

If you constantly present the AI with new information,
which bears no resemblance to previous information (no
compressibility), then wouldn't the space complexity
of the information grow linearly with time, not remain
constant?
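
On the log claim itself: *granting* the 1/x premise for the moment,
the summation step does check out numerically. A quick sketch (mine,
not anything from an actual AI system):

    # If the novelty in perception number k really behaves like 1/k, the
    # total stored after X perceptions is the harmonic number H_X, which
    # grows like ln(X).

    import math

    for X in (10, 100, 1000, 10000):
        harmonic = sum(1.0 / k for k in range(1, X + 1))
        print(X, round(harmonic, 3), round(1 + math.log(X), 3))

    # The two columns stay within about half a unit of each other, so the
    # O(log X) space claim follows *if* you grant the 1/x premise. Whether
    # real perceptual streams actually have 1/x novelty is the part that
    # still needs evidence.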

> Okay, what about the algorithmic complexity? Let's give the AI a
> perception: it breaks the perception down into symbols, and it must
> now match the incoming symbols against its existing library. We know
> its existing library is on the order of log N in size for complex,
> non-random input. If we were to do a basic search on that, the time
> complexity will be M (the number of incoming symbols) times the size
> of its library, on the order of M log N. Now, if we had a way to
> sort this library, we could do even better: the search time would be
> M * log(log N), that is, the logarithm of the logarithm of N. If the
> information is only partially orderable, then the performance will
> be somewhere in between those upper and lower bounds. Alternatively,
> if memory weren't an issue, we could attempt a hashing algorithm and
> achieve a performance close to O(1), which is approximately what the
> human brain does, hence its renowned performance in this area.
> 
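
For what it's worth, the three lookup strategies compared here are
standard ones and easy to sketch (my own illustration; the library
contents are made up). Linear scan over a library of size L costs
O(L) per incoming symbol, binary search over a sorted library costs
O(log L), and a hash table is roughly O(1) expected; if L is itself
on the order of log N, the binary-search case is the "log of the log
of N" mentioned above.

    # A minimal sketch of the three lookup strategies mentioned above.

    import bisect

    library_list = ["pond", "duck", "wing", "beak", "feather"]  # linear scan: O(L)
    library_sorted = sorted(library_list)                       # binary search: O(log L)
    library_hash = set(library_list)                            # hash lookup: ~O(1) expected

    def linear_lookup(sym):
        return sym in library_list             # walks the whole list

    def sorted_lookup(sym):
        i = bisect.bisect_left(library_sorted, sym)
        return i < len(library_sorted) and library_sorted[i] == sym

    def hash_lookup(sym):
        return sym in library_hash             # average constant time

    for sym in ("duck", "goose"):
        print(sym, linear_lookup(sym), sorted_lookup(sym), hash_lookup(sym))
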
> My final claim for this part of the meditation is that every
> possible strong AI architecture will match these characteristics.

Every possible strong AI architecture may be *capable*
of absorbing new information and matching it to old
patterns, but that does not mean that that is *all* it
does.

> We could attach a TiVo to the AI and make a few other compromises
> and see the AI's performance approximate N + log(N/2), but the point
> still stands: every strong AI will be a symbolic AI, otherwise it
> wouldn't be able to think in any useful manner; every symbolic AI
> will encode new perceptions in terms of symbols derived from
> previous perceptions; and the average quantity of new information in
> each new perception will approximate the logarithm of the number of
> previous perceptions.

You keep repeating this, but you have not attempted to
provide evidence.

 - Tom


       