To all,

In response to the many postings regarding consciousness, I would like to
make some observations:

1.  Computation is often done best in a "shifted paradigm", where the
internals are NOT one-to-one associated with external entities. A good
example is modern chess-playing programs, which usually play "chess" on an
80-square linear strip with 2 out of every 10 squares being unoccupiable.
Knights can move +21, +19, +12, +8, -8, -12, -19, and -21. The player sees a
2-D space, but the computer works entirely in a 1-D space. I suspect (and
can show neuronal characteristics that strongly suggest) that much the same
is happening with the time dimension. There appears to be little that is
different about this 4th dimension, except for how it is interfaced with
the outside world.
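
For those unfamiliar with this board representation, here is a minimal
sketch in Python of the kind of 1-D layout described above. The exact
arrangement (8 ranks of 10 cells, with the last 2 cells of every rank
serving as permanent off-board "guard" squares) is only one plausible
layout chosen for illustration; real engines differ in such details.

# Minimal 1-D "strip" board: 80 cells, files 0-7 playable, files 8-9 of
# every rank are permanent off-board guards. Layout is illustrative only.
EMPTY, GUARD = ".", "#"

def new_board():
    """Build the 80-cell strip with the guard squares marked."""
    return [GUARD if i % 10 >= 8 else EMPTY for i in range(80)]

# Knight offsets on a 10-wide strip: (+-1 file, +-2 ranks) gives +-19/+-21,
# (+-2 files, +-1 rank) gives +-8/+-12 -- the same numbers quoted above.
KNIGHT_OFFSETS = (+21, +19, +12, +8, -8, -12, -19, -21)

def knight_targets(board, frm):
    """Yield on-board, non-guard destinations for a knight on square `frm`
    (occupancy by other pieces is ignored in this sketch)."""
    for off in KNIGHT_OFFSETS:
        to = frm + off
        # One bounds test catches the vertical edges; the guard squares
        # catch the horizontal wrap-arounds. No 2-D coordinates anywhere.
        if 0 <= to < 80 and board[to] != GUARD:
            yield to

board = new_board()
print(sorted(knight_targets(board, 33)))   # a knight near the middle: 8 moves

The point is that nothing inside the program corresponds square-for-square
to the 8x8 board the player sees.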

2.  Paradigm "mapping" is commonplace in computing, e.g. the common practice
of providing "stream of consciousness" explanations for AI program
operation, to aid in debugging. Are such programs NOT conscious because the
logic they followed was NOT time-sequential?! When asked why I made a
particular move in a chess game, it often takes me half an hour to explain a
decision that I made in seconds. Clearly, my own thought processes are NOT a
time-sequential consciousness, as others' here on this forum apparently are.
I believe that designing for time-sequential "conscious" operation is
starting from a VERY questionable premise.
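
To make the distinction concrete, here is a toy sketch (the moves and
scores are invented placeholders): the decision itself is a single
non-sequential step, while the time-sequential "explanation" is
manufactured afterward, purely for the human reader.

# The decision: one arg-max over candidate moves -- no narrative involved.
def choose(candidates):
    """Pick the best-scoring candidate in a single step."""
    return max(candidates, key=candidates.get)

# The "stream of consciousness": a sequential story reconstructed after
# the fact, which the program never actually followed.
def explain(candidates, chosen):
    lines = []
    for move, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
        verdict = "chosen" if move == chosen else "rejected"
        lines.append(f"Considered {move} (score {score:.2f}): {verdict}.")
    return "\n".join(lines)

scores = {"Nf3": 0.61, "e4": 0.58, "h4": 0.12}   # placeholder evaluations
pick = choose(scores)
print(explain(scores, pick))

The mapping from one form to the other is cheap to produce, but it says
little about how the answer was actually reached.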

3.  Note that dreams can span years of seemingly real experience in the
space of seconds/minutes. Clearly this process is NOT time-sequential.

4.  Note that individual brains can be organized COMPLETELY differently,
especially in multilingual people. Hence, our "wiring" almost certainly
comes from experience and not from genetics. This would seem to throw a
monkey wrench into AGI efforts to manually program such systems.

5.  I have done some thumbnail calculations as to what it would take to
maintain a human-scale AI/AGI system. These come out on the order of needing
the entire population of the earth just for software maintenance, with no
idea of what might be needed to initially create such a working system.
Without "poisoning" the discussion with my own pessimistic estimates, I
would like to see some optimistic estimates for such maintenance, to see
whether a case can be made that such systems might actually be maintainable.
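
To keep any such estimates on a common footing, here is a trivial
parametric sketch of how the maintenance calculation might be structured.
Every number below is a placeholder to be replaced with your own
assumptions; none of them are my figures.

# Back-of-the-envelope maintenance estimate. All inputs are placeholder
# assumptions, NOT actual data -- substitute your own optimistic values.
def maintainers_needed(total_loc, maint_frac_per_year, loc_per_person_year):
    """People required per year just to keep the code base working."""
    return total_loc * maint_frac_per_year / loc_per_person_year

total_loc           = 1e12    # assumed size of a human-scale AGI code base
maint_frac_per_year = 0.15    # assumed fraction of the code reworked yearly
loc_per_person_year = 10_000  # assumed lines maintained per engineer-year

n = maintainers_needed(total_loc, maint_frac_per_year, loc_per_person_year)
print(f"Maintainers needed per year: {n:,.0f}")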

To reinforce my thoughts on other threads: observation of our operation is
probably NOT enough to design a human-scale AGI from, ESPECIALLY when
paradigm shifting effectively hides our actual operation. I believe that
more information is necessary, though hopefully not an entire "readout" of
a brain.

Steve Richfield


