On Sun, Aug 10, 2008 at 5:52 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:

> Will,
>
> Maybe I should have explained the distinction more fully. A totalitarian
> system is one with an integrated system of decision-making and unified
> goals. A "democratic", "conflict" system is one that takes decisions with
> opposed, conflicting philosophies and goals (a la Democratic vs Republican
> parties), fighting it out.


OpenCogPrime has aspects of both these systems, then....   (Of course, so
does the USA, but that's another story ;-)


> Cog sci treats humans as if we are rational, consistent thinkers/
> computers.


No, it just doesn't.  This is an egregious oversimplification and
mis-analysis of the cognitive science community and its research and ideas.
Look at the heuristics and biases literature, for one thing... and the
literature on analogical reasoning ... on the cognitive psychology of
emotion ... etc. etc. etc.



> AGI-ers AFAIK try to build rational, consistent (& therefore
> "totalitarian") computer systems. Actually, humans are very much conflict
> systems and to behave consistently for any extended period in any area of
> your life is a supreme and possibly heroic achievement.  A conflicted,
> non-rational system is paradoxically better psychologically as well as
> socially - and I would argue, absolutely essential for dealing with AGI
> decisions/problems as (most of us will agree) it is for social problems.



I think that non-rationality is often necessary in minds due to resource
limitations, but is best minimized as much as possible ...

It's easy to confuse true rationality with narrow-minded implementations of
rationality, which are actually NOT fully rational.  If your goal is to
create amazing new ideas, the most rational course may be to spend some time
thinking wacky thoughts that at first sight appear non-rational.

By true rationality I simply mean making judgments in accordance with
probability theory based on one's goals and the knowledge at one's
disposal.  Note that rationality does not tell you what goals to have, nor
does it apply to systems except in the context of specific goals (which may
be conceptualized by the system, or just by an observer of the system).
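
To make that definition a bit more concrete, here is a toy sketch (purely
illustrative Python; nothing to do with OpenCogPrime internals, and all the
names and numbers are made up): knowledge is a probability distribution over
world-states, goals are encoded as a utility function, and the rational
judgment is simply the action with the highest expected utility.

    # Toy illustration of rationality as expected-utility choice.
    # Beliefs encode "the knowledge at one's disposal"; the utility
    # function encodes "one's goals". Purely hypothetical example.

    def expected_utility(action, beliefs, utility):
        """Sum of P(state) * U(action, state) over possible world-states."""
        return sum(p * utility(action, state) for state, p in beliefs.items())

    def rational_choice(actions, beliefs, utility):
        """Pick the action that maximizes expected utility."""
        return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

    if __name__ == "__main__":
        # Made-up decision: carry an umbrella or not, given a 30% chance of rain.
        beliefs = {"rain": 0.3, "dry": 0.7}
        payoffs = {("umbrella", "rain"): 5, ("umbrella", "dry"): -1,
                   ("none", "rain"): -10, ("none", "dry"): 2}
        utility = lambda action, state: payoffs[(action, state)]
        print(rational_choice(["umbrella", "none"], beliefs, utility))  # "umbrella"

Note that nothing in this sketch says what the utility function should be;
rationality only governs how you act given the goals and beliefs you have.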

I think it would be both stupid and dangerous to attempt to replicate human
irrationality in our AGI systems.

-- Ben


