On Sun, Mar 30, 2008 at 10:16 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
>  Intelligence is not *only* about the modalities of the data you get,
>  but modalities are certainly important. A deafblind person can still
>  learn a lot about the world with taste, smell, and touch, but the
>  senses one has access to define the limits of the world model one
>  can build.
>
>  If I put on ear muffs and a blindfold right now, I can still reason
>  quite well using touch, since I have access to a world model built
>  using e.g. vision. If you had been deafblind and paralysed since
>  birth, would you have any possibility of spatial reasoning? No,
>  except maybe for some extremely crude genetically coded heuristics.
>
>  Sure, you could argue that an intelligence purely based on text,
>  disconnected from the physical world, could be intelligent, but it
>  would have a very hard time reasoning about the interaction of
>  entities in the physical world. It would be unable to understand
>  humans in many respects: I wouldn't call that generally intelligent.
>
>  Perception is about learning and using a model of our physical
>  world. Input is often high-bandwidth, while output is often
>  low-bandwidth and useful for high-level processing (e.g. reasoning
>  and memory). Luckily, efficient methods are arising, so I'm quite
>  optimistic about progress towards this aspect of intelligence.
>

One of the requirements I try to satisfy with my design is the
ability to perceive equivalently information encoded by seemingly
incompatible modalities. For example, a visual stream can be encoded
as a set of pairs <tag, color>, where tags are unique labels that
correspond to pixel positions. This set of pairs can be shuffled and
supplied through a serial input (with tags and colors encoded as
binary words of activation), and the system must be able to
reconstruct a representation as good as the one supplied by naturally
arranged video input. Of course, getting to that point requires
careful incremental teaching, but after that there should be no real
difference (aside from bandwidth, of course).
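
To make the encoding concrete, here is a rough sketch in Python; the
function names and word widths are my own illustration, not part of
any existing system:

import random

def to_bits(value, width):
    # Encode an integer as a fixed-width binary word (list of 0/1
    # activations), most significant bit first.
    return [(value >> i) & 1 for i in reversed(range(width))]

def encode_frame(frame, tag_bits=16, color_bits=8):
    # Turn a 2D grayscale frame into a shuffled list of <tag, color>
    # pairs, each rendered as one binary word for a serial input.
    height, width = len(frame), len(frame[0])
    pairs = []
    for y in range(height):
        for x in range(width):
            tag = y * width + x      # unique label for the pixel position
            color = frame[y][x]      # pixel intensity
            pairs.append(to_bits(tag, tag_bits) + to_bits(color, color_bits))
    random.shuffle(pairs)            # order carries no information
    return pairs

# A 2x2 frame: the receiver must recover spatial structure from tags alone.
frame = [[0, 255], [128, 64]]
for word in encode_frame(frame):
    print(word)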

It might be useful to look at all concepts as 'modalities': you can
'see' your thoughts; when you know a certain theory, you can 'see'
how it's applied, how its parts interact, what the obvious
conclusions are. Prewiring sensory input in a certain way merely
pushes learning in a certain direction, just as inbuilt drives bias
action in theirs.

In this way, for example, it should be possible to teach a 'modality'
for understanding simple graphics encoded as text, so that on one
hand text-based input is sufficient, and on the other hand the system
effectively perceives simple vector graphics. This trick could be
used to explain spatial concepts through natural language. But,
again, a video camera might be a simpler and more powerful route to
the same end, even if visual processing is severely limited.
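
For concreteness, a minimal sketch of what such a text encoding might
look like; the format and the parse_scene helper are hypothetical
illustrations:

# A hypothetical text encoding of simple vector graphics: one
# primitive per line, e.g. "line x1 y1 x2 y2" or "circle cx cy r".
# A system taught this 'modality' reads plain text yet effectively
# perceives the scene the text describes.

def parse_scene(text):
    # Parse the toy format into a list of (shape, params) tuples.
    scene = []
    for line in text.strip().splitlines():
        parts = line.split()
        scene.append((parts[0], [float(p) for p in parts[1:]]))
    return scene

scene_text = """
line 0 0 10 10
circle 5 5 3
"""
print(parse_scene(scene_text))
# [('line', [0.0, 0.0, 10.0, 10.0]), ('circle', [5.0, 5.0, 3.0])]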

-- 
Vladimir Nesov
[EMAIL PROTECTED]
