bob: This seems to me to be the manifestation of a type of cranial
chauvinism in which information halts at the border crossing.

"Cranial chauvinism" - cool term. Entirely yours. I like it. The alternative with only a handful of mentions is "cerebral chauvinism."

It strikes me that not just AI but science generally, and especially neuroscience, is cerebrally chauvinist. Do you know of anyone advancing the idea that the primary unit of mind is the entire distributed nervous system, and not just the concentrated part that is the brain?

All this may well be AGI-relevant if it is indeed necessary to do distributed and not just centralised information processing in order to navigate the real world.

Here is a stimulating application of anti-"cerebral chauvinism" from slashdot:

"
James Lanfear (34124) on Saturday December 04 1999, @07:29AM (#1481605)
Nope, he's right, and you left out everything below the cerebrum.

The retina has some significant processing power of its own--in fact, it is often used as a benchmark for measuring the brain's overall processing capabilities (wrongly, IMO, but that's not relevant). Both edge and movement detection are performed in the rear-most layer of the retina as a pre-processing step before the information is moved up the optic nerve.

The optic nerve feeds into the LGN (the expanded form of which is difficult to spell), which performs a great deal of processing in its own right--it was, after all, the primary visual system for millions of years--including further (much more refined) edge and movement detection, some object recognition, attention, etc.

The result of that is forwarded to the cortex for the final processing we all know and love, which extends the capabilities of the retina and LGN greatly. However, by that stage the input has virtually no resemblance to the output from the retina, having been thoroughly digested by the lower visual systems. (One further interesting tidbit: the descending pathways, from the cortex to the LGN, have 10x the capacity of the ascending pathways. Make of that what you will.)

(Incidentally, there are a number of us who are trying to combat cerebral chauvinism in neuroscience, and you aren't helping ;-) "
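For concreteness, the two retinal pre-processing steps the comment describes can be sketched in a few lines of Python: a centre-surround spatial filter for edges, and a frame difference for movement. This is a toy illustration only, not a model of real retinal circuitry; the function names and the 5x5 test frames are invented for the example.

```python
def edge_map(img):
    """Centre-surround difference: each pixel minus the mean of its
    4 neighbours. Large absolute values mark intensity edges, loosely
    analogous to centre-surround receptive fields in the retina."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            surround = (img[y-1][x] + img[y+1][x] +
                        img[y][x-1] + img[y][x+1]) / 4.0
            out[y][x] = img[y][x] - surround
    return out

def motion_map(prev, curr):
    """Temporal difference between two frames: nonzero where intensity
    changed, i.e. where something moved."""
    return [[c - p for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# A 5x5 frame with a bright vertical bar, then the bar shifted right.
frame1 = [[1.0 if x == 2 else 0.0 for x in range(5)] for _ in range(5)]
frame2 = [[1.0 if x == 3 else 0.0 for x in range(5)] for _ in range(5)]

edges = edge_map(frame1)    # strong response along the bar
motion = motion_map(frame1, frame2)  # negative where the bar left, positive where it arrived
```

The point of the sketch is only that both operations are cheap, local, and need no "central" machinery--which is exactly why it is plausible for them to happen before the optic nerve rather than in the cortex.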



-----Original Message-----
From: Bob Mottram
Sent: Tuesday, August 21, 2012 9:01 PM
To: AGI
Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the hopelessness of Friendly AI ...

On 21.08.2012 20:52, Mike Tintner wrote:
Bob M: There seem to be a number of confusions
going on in relation to: ... - the origins of concepts.

Interesting. I wasn't aware of anyone in AGI having ideas about the
origins of concepts - perhaps just my ignorance. What ideas do they
have?


It's one of those things which gets hand waved over, but if a system
wants to increase its adaptability then where do the new concepts come
from?  Does it just "think" really hard?

This seems to me to be the manifestation of a type of cranial
chauvinism in which information halts at the border crossing.  That
doesn't seem like a very realistic view, and it's not in the spirit of
artificial intelligence as I understand the term.


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/6952829-59a2eca5
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


