(Sorry for triple posting...)

On Sun, Mar 30, 2008 at 11:34 PM, William Pearson <[EMAIL PROTECTED]> wrote:
> On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> >
> > Intelligence is not *only* about the modalities of the data you get,
> > but modalities are certainly important. A deafblind person can still
> > learn a lot about the world with taste, smell, and touch, but the
> > senses one has access to define the limits of the world model one
> > can build.
>
> As long as you have one high-bandwidth modality, you should be able to
> add on technological gizmos to convert information to that modality,
> and thus be able to model the phenomena from that part of the world.
>
> Humans manage to convert modalities, e.g.
> http://www.engadget.com/2006/04/25/the-brain-port-neural-tongue-interface-of-the-future/
> using touch on the tongue.
Nice article. Apparently even the brain's region for the perception of
taste is generally adaptable to new input.

> I'm not so much interested in this case, but what about the case where
> you have a robot with sonar, radar, and other sensors, but not the
> normal 2-camera + 2-microphone setup people imply when they say
> "audiovisual"?

That's an interesting case indeed. AGIs equipped with sonar/radar/ladar
instead of 'regular' vision should be perfectly capable of certain
forms of spatial reasoning, but would still fail to understand humans
on certain subjects. Still, if you don't need your agents to completely
understand humans, audiovisual senses could go out the window. It
depends on your agent's goals, I guess.
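(Tangent, for the curious: here's a minimal sketch of the kind of
modality conversion a camera-to-tongue device like the BrainPort
performs, i.e. averaging camera pixels down into a coarse grid of
tactile stimulation levels. The grid dimensions, level count, and
function name are my own illustrative assumptions, not the actual
parameters of the device.)

# Hypothetical sketch: convert a grayscale camera frame into a coarse
# "electrode grid" of tactile stimulation levels. Grid size and level
# count are made-up parameters for illustration only.
def frame_to_tactile_grid(frame, grid_rows=12, grid_cols=12, levels=8):
    """Downsample a 2D grayscale frame (list of lists, values 0-255)
    into a grid_rows x grid_cols grid of stimulation levels 0..levels-1."""
    height, width = len(frame), len(frame[0])
    cell_h, cell_w = height // grid_rows, width // grid_cols
    grid = []
    for r in range(grid_rows):
        row = []
        for c in range(grid_cols):
            # Average the pixels falling into this grid cell.
            total, count = 0, 0
            for y in range(r * cell_h, (r + 1) * cell_h):
                for x in range(c * cell_w, (c + 1) * cell_w):
                    total += frame[y][x]
                    count += 1
            mean = total / count
            # Quantize mean brightness to a small number of levels.
            row.append(int(mean * levels / 256))
        grid.append(row)
    return grid

The point being that the conversion itself is cheap; the hard part is
on the receiving end, where the brain (or the AGI) has to learn to
interpret the new channel.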