Will,

Maybe I should have explained the distinction more fully. A totalitarian system is one with an integrated system of decisionmaking and unified goals. A "democratic", "conflict" system is one that takes decisions with opposed, conflicting philosophies and goals (a la Democratic vs. Republican parties) fighting it out. Cog sci treats humans as if we are rational, consistent thinkers/computers. AGI-ers, AFAIK, try to build rational, consistent (and therefore "totalitarian") computer systems. Actually, humans are very much conflict systems, and to behave consistently for any extended period in any area of your life is a supreme and possibly heroic achievement. A conflicted, non-rational system is paradoxically better psychologically as well as socially - and, I would argue, absolutely essential for dealing with AGI decisions/problems, as (most of us will agree) it is for social problems. But it requires a whole new paradigm. Two minds (and two hearts) (and two cores?) are better than one. (And it's the American way.)
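
To make that concrete, here is a minimal sketch (every name in it is a hypothetical toy of my own, not anyone's actual architecture) of a "conflict" decider: two factions with opposed goals each score the options, and the decision comes out of their vetoes rather than out of one merged utility function. The "totalitarian" version would simply add the two weight vectors into a single utility and take the best-scoring option - exactly the integration being argued against.

from dataclasses import dataclass

@dataclass
class Faction:
    """One 'party' in the conflict system: its own goal, its own scoring."""
    name: str
    weights: dict  # how much this faction values each feature of an option

    def score(self, option: dict) -> float:
        # Each faction judges an option only by its own lights.
        return sum(self.weights.get(k, 0.0) * v for k, v in option.items())

def conflict_decide(factions, options, rounds=3):
    """Decide by adversarial veto, not by a single merged utility.

    Each round, every faction strikes the option it hates most; whatever
    survives the crossfire wins. No faction's goals get integrated away.
    """
    alive = list(options)
    for _ in range(rounds):
        for f in factions:
            if len(alive) <= 1:
                return alive[0] if alive else None
            alive.remove(min(alive, key=f.score))
    return alive[0] if alive else None

# Two 'parties' with conflicting philosophies about the same options.
growth = Faction("growth", {"speed": 1.0, "risk": 0.2})
safety = Faction("safety", {"speed": 0.1, "risk": -1.0})

options = [
    {"speed": 0.9, "risk": 0.8},  # fast but dangerous
    {"speed": 0.5, "risk": 0.1},  # the messy compromise
    {"speed": 0.1, "risk": 0.0},  # safe but inert
]
print(conflict_decide([growth, safety], options))  # -> the compromise

Note that the messy compromise is what survives the crossfire - neither faction's favourite, but the only option neither one vetoes.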


Will/MT >> Just as you, in a rational, specialist way, are picking off isolated
features, so rational, totalitarian thinkers used to object to the crazy,
contradictory complications of the democratic, "conflict" system of
decisionmaking by contrast with their pure ideals. And hey, there *are*
crazy and inefficient features - it's a real, messy system. But, as a
whole, it works better than any rational, totalitarian, non-conflict system.
Cog sci can't yet explain why, though, can it? (You guys, without realising
it, are all rational, totalitarian system builders.)



All? I'm a rational, economically minded system builder, thank you
very much. I can't answer the questions you want answered, like how
my system will reason with imagination, precisely because I am not a
totalitarian. If you wish to be non-totalitarian, you have to set up a
system in a certain way and let the dynamics you set up potentially
transform the system into something that can reason as you want.

Theoretically the system could be set up to reason as you want
straight away. But setting up a baby-level system seems orders of
magnitude easier than expecting it to solve problems straight away.
In this approach, exact knowledge of the inner workings of mature
imagination is not required.
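
As a toy illustration of what this might mean (everything here is a hypothetical stand-in, nothing like a real proposal): the designer writes down only the dynamics - keep what worked, forget what failed - and never writes down the mature behaviour itself. Whatever competence shows up is grown.

import random

class ToyWorld:
    """A stand-in environment: reward for matching the hidden right action."""
    actions = ["left", "right"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = None

    def observe(self):
        self.state = self.rng.choice(["light", "dark"])
        return self.state

    def act(self, action):
        correct = "left" if self.state == "light" else "right"
        return 1 if action == correct else -1

def grow_baby_system(env, steps=500, seed=1):
    """The designer writes only the dynamics, never the mature competence."""
    rng = random.Random(seed)
    rules = {}  # situation -> action; starts empty, a blank 'baby'
    for _ in range(steps):
        situation = env.observe()
        # Act from the current rules, or babble randomly if no rule yet.
        action = rules.get(situation, rng.choice(env.actions))
        if env.act(action) > 0:
            rules[situation] = action   # dynamics: keep what worked
        else:
            rules.pop(situation, None)  # dynamics: forget what failed
    return rules

print(grow_baby_system(ToyWorld()))  # converges on {'light': 'left', 'dark': 'right'}

The point is that grow_baby_system contains no knowledge of ToyWorld's right answers; it converges on them through dynamics set up in advance.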

The more you ask for early results from systems, the more likely you
are to get totalitarians building your machines, because they can get
results quickly.

 Will Pearson

