Related obliquely to the discussion about pattern discovery algorithms... What 
is a symbol?
 
I am not sure that I am using the words in this post in exactly the same way 
they are normally used by cognitive scientists; to the extent that causes 
confusion, I'm sorry.  I'd rather use words in their strict conventional sense 
but I do not fully understand what that is.  These thoughts are fuzzier than 
I'd like; if I were better at de-fuzzifying them I might be a pro instead of an 
amateur!
 
Proposition:  a "symbol" is a token with both denotative and model-theoretic 
semantics.
 
The denotative semantics are what make a symbol refer to something or "be 
about" something.  The model-theoretic semantics allow symbol processing 
operations to occur (such as reasoning).
 
I believe this is a somewhat more restrictive use of the word "symbol" than is 
necessarily implied by Newell and Simon in the Physical Symbol System 
Hypothesis, but my aim is engineering rather than philosophy.
 
I'm actually somewhat skeptical that human beings use symbols in this sense for 
much of our cognition.  We appear to be a million times better at it than any 
other animal, and that is the special thing that makes us so great, but we 
still aren't very good at it.  However, most of the things we want to build AGI 
*for* require us to greatly expand the symbol processing capabilities of mere 
humans.  I think we're mostly interested in building artificial scientists and 
engineers rather than artificial musicians.  Since computer programs, 
engineering drawings, and physics theories are explicitly symbolic constructs, 
we're more interested in effectively creating symbols than in the totality of 
the murky "subsymbolic" machinery supporting them.  To what extent can the two 
be separated?  I wish I knew.
 
In this view, "subsymbolic" simply refers to tokens that lack some of the 
features of symbols.  For example, a representation of a pixel from a camera 
has clear denotative semantics, but it is not elaborated as well as a better 
symbol would be ("the light coming from direction A at time B" is not as useful 
as "the light reflecting off of Fred's pinky fingernail").  Similarly, and more 
importantly, subsymbolic products of sensory systems lack useful 
model-theoretic semantics.  The "origin of symbols" problem involves how those 
semantics arise -- and to me it's the most interesting piece of the AGI puzzle.
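To make the distinction concrete, here is a loose sketch in Python.  Everything in it (the class names, the rule format, the little forward-chaining loop) is my own hypothetical illustration, not anyone's standard formalism: a subsymbolic token carries only a denotation, while a symbol additionally carries rules that a reasoning procedure can operate on.

```python
# Hypothetical sketch -- names and structure are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Token:
    """A token with denotative semantics only: it refers to something."""
    denotation: str  # what the token is "about"

@dataclass
class Symbol(Token):
    """A token that also has model-theoretic semantics: it carries rules,
    so symbol-processing operations (reasoning) can use it."""
    rules: list = field(default_factory=list)  # (premises, conclusion) pairs

def infer(facts, symbols):
    """One round of naive forward chaining over the symbols' rules."""
    derived = set(facts)
    for sym in symbols:
        for premises, conclusion in sym.rules:
            if all(p in derived for p in premises):
                derived.add(conclusion)
    return derived

# A subsymbolic token: it denotes, but supports no reasoning operations.
pixel = Token(denotation="light from direction A at time B")

# A symbol: it denotes, and its rules let inference do something with it.
fingernail = Symbol(
    denotation="light reflecting off Fred's pinky fingernail",
    rules=[(("fingernail-visible",), "fred-is-present")],
)

print(infer({"fingernail-visible"}, [fingernail]))
```

The pixel is a perfectly good token, but `infer` can do nothing with it; the symbol is the same kind of object plus the machinery that makes reasoning possible.  The "origin of symbols" problem, in these terms, is where that extra machinery comes from.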
 
Is anybody else interested in this kind of question, or am I simply inventing 
issues that are neither meaningful nor useful?
