>> John said: The human brain, the only
high-level intelligent system currently known, uses language and logic for
abstract reasoning, but these are based on, and owe their existence to, a more
fundamental level of intelligence -- that of pattern-recognition,
pattern-matching, and pattern manipulation.
I agree with this wholeheartedly. But on the next
point we diverge in our thinking.
>> John said: In evolution on earth,
sensory-motor-based intelligence came first, and the use of language and
logic only later. It seems to me that the right path to true AI will
also use sensory-motor patterns as the basic building blocks of knowledge
representation. A typical human being's knowledge of the letter "A"
involves recognition of graphical representations of the
symbol, memories of its sound when spoken, procedural or muscle memory of
how to speak and write it, and memories of where it is commonly
found in its linguistic context. A system should be capable of
recognizing symbols visually or auditorially (and possibly
of generating them through motor outputs) before it should be expected
to comprehend them.
Any thoughts or
arguments? Or am I just repeating something everyone already knows?
(I honestly don't know.)
>>
Speech recognition and visual recognition are separate problems from knowledge representation/pattern recognition.
Helen Keller was blind and deaf, but with some help she was able to achieve knowledge representation and pattern recognition without the use of either hearing or sight.
Think of the senses as input/output devices. And yes, an infant's brain must first learn to control those input/output devices before it is able to learn and communicate with the world outside itself.
But an artificially intelligent entity already has access to an ASCII data stream with which it can do input/output to communicate outside itself.
Of course, because a picture is worth a thousand words, a program that can also do visual recognition has access to a larger data store than one that cannot.
My opinion on the most probable route to a true AI Entity is:
1. Build a better fuzzy pattern representation language with an inference mechanism for extracting inducible information from user inputs. Fuzziness allows the language to understand utterances with misspellings, words run together, etc.
2. Build a bot based on said language.
3. Build a large knowledge base which captures a large enough percentage of real-world knowledge to allow the bot to learn from natural-language data sources, i.e., the web.
4. Build a pattern generator which allows the bot to learn the information it has read and to build new patterns itself to represent that knowledge.
5. Build a reasoning module based on Bayesian logic to allow simple reasoning to be conducted.
6. Build a conflict resolution module to allow the bot to resolve/correct conflicting information, or to ask for help with clarification, in order to build a correct mental model.
7. Build a goal and planning module which allows the bot to operate more autonomously in aid of the goals we give it, e.g., achieving the Singularity.
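As one concrete illustration of the simple Bayesian reasoning that step 5 calls for, a single Bayes-rule update might look like the sketch below. (This is only my own minimal example; the hypothesis, evidence, and every probability in it are invented for illustration.)

```python
# Minimal sketch of a single Bayesian reasoning step.
# All facts and probabilities here are invented for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothesis H: "the user is asking about birds"
prior = 0.2           # P(H) before seeing the word "wings"
likelihood = 0.9      # P("wings" appears | asking about birds)
evidence_prob = 0.25  # P("wings" appears in any input)

posterior = bayes_update(prior, likelihood, evidence_prob)
print(round(posterior, 2))  # 0.72
```

A real reasoning module would chain many such updates over the bot's patterns, but each individual inference can stay this simple.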
Steps 1 and 2 took me a couple of years.
Step 3 is an ongoing effort. I'm into my fourth year now, with 28,000 patterns.
Hint: if the pattern recognition language is good, a single pattern should be able to express all the ways of phrasing a single thought.
This makes the patterns longer and more complex, but it reduces overall work by not forcing the botmaster to write thousands of patterns to account for all possible ways to express a single thought.
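To make the "one pattern per thought" idea concrete, here is a small sketch of my own (ordinary regex standing in for the richer fuzzy pattern language described above, with an invented example thought) in which a single pattern, via alternation and optional tokens, matches several phrasings of the same question:

```python
import re

# One pattern covering many phrasings of the thought "what is your name".
# Regex alternation here is a stand-in for a real fuzzy pattern language.
PATTERN = re.compile(
    r"^(what('?s| is)|tell me|may i know)\s+(your|ur|yr)\s+name\??$",
    re.IGNORECASE,
)

inputs = [
    "What is your name?",
    "whats ur name",
    "Tell me your name",
    "May I know yr name?",
]
for text in inputs:
    print(bool(PATTERN.match(text.strip())))  # True for each phrasing
```

One longer pattern replaces four (or forty) separate ones, which is exactly the trade-off described above: more complexity per pattern, far fewer patterns overall.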
My 28,000 patterns correctly match several orders of magnitude more inputs than competing solutions, including misspellings, ungrammatical inputs, etc.
This transforms step 3 from a totally intractable problem into a doable, but still difficult and work-intensive, one.
Step 4 is keeping me awake at night thinking about it.
Steps 5, 6, and 7 don't sound that difficult to me right now, but that's only because I haven't thought about them in enough detail.
People have challenged the top-down approach, saying that such a bot would lack grounding, or the ability to tie its knowledge to real-world inputs.
But it should not be difficult to use a commercial voice recognition engine to transform voice inputs into ASCII inputs. And the fuzzy recognizer should, in many cases, be able to compensate for the mistakes the voice recognition software makes in recognizing a word or two in the input stream.
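A crude sketch of that compensation step, using Python's standard difflib to snap misrecognized tokens back to a known vocabulary (the vocabulary, cutoff, and the mis-heard utterance are all invented for illustration):

```python
import difflib

# A small slice of the bot's pattern vocabulary (illustrative only).
VOCAB = ["weather", "tomorrow", "forecast", "temperature", "today"]

def repair(tokens, vocab=VOCAB, cutoff=0.7):
    """Replace each token with its closest vocabulary word, if close enough."""
    fixed = []
    for tok in tokens:
        match = difflib.get_close_matches(tok.lower(), vocab, n=1, cutoff=cutoff)
        fixed.append(match[0] if match else tok)
    return fixed

# A hypothetical mis-heard utterance from a speech recognizer.
print(repair(["whether", "forcast", "tomorow"]))
# ['weather', 'forecast', 'tomorrow']
```

The fuzzy matcher in a real pattern language would do this against whole patterns rather than a word list, but the principle is the same: near misses from the speech engine still land on the intended pattern.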
From: John Scanlon [mailto:[EMAIL PROTECTED]]
Sent: Sunday, May 07, 2006 2:40 AM
To: agi@v2.listbox.com
Subject: [agi] Logic and Knowledge Representation

Is anyone interested in discussing the use of
formal logic as the foundation for knowledge representation schemes for
AI? It's a common approach, but I think it's the wrong path.
Even if you add probability or fuzzy logic, it's still insufficient for true
intelligence.
The human brain, the only high-level intelligent
system currently known, uses language and logic for abstract reasoning, but
these are based on, and owe their existence to, a more fundamental level of
intelligence -- that of pattern-recognition, pattern-matching, and pattern
manipulation.
Philosophers have grappled with the question of the
source of knowledge for as long as there have been philosophers, and one of the
best accepted answers in modern philosophy is sensory experience. Sensory
experience, including proprioception and awareness of motor outputs, in addition
to the ordinary five senses, is the material that knowledge is built out
of. The brain constructs its logical formulations out of the basic
building blocks of the sights, sounds, and feels of linguistic symbols.
The symbols themselves (letters of an alphabet, words in a language, etc.)
are built up out of lower-level sensory patterns.
In evolution on earth, sensory-motor-based
intelligence came first, and the use of language and logic only
later. It seems to me that the right path to true AI will also use
sensory-motor patterns as the basic building blocks of knowledge
representation. A typical human being's knowledge of the letter "A"
involves recognition of graphical representations of the
symbol, memories of its sound when spoken, procedural or muscle memory of
how to speak and write it, and memories of where it is commonly
found in its linguistic context. A system should be capable of
recognizing symbols visually or auditorially (and possibly
of generating them through motor outputs) before it should be expected
to comprehend them.
Any thoughts or arguments? Or am I just
repeating something everyone already knows? (I honestly don't
know.)
J.P.
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]