On Sun, May 4, 2008 at 11:04 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>  The retina uses low level feature detection of spots, edges, and
>  movement to compress 137 million pixels down to 1 million optic nerve
>  fibers.  By the time it gets through the more complex feature detectors
>  of the visual cortex and into long term memory, it has been compressed
>  down to 2 bits per second.
>

Matt, this needs a reference explaining what you mean; otherwise it reads
as nonsense. For example, if the signal you receive is 2 bits per second,
you cannot determine which of 1000 possible objects you are seeing from a
one-second observation (that would require about 10 bits, since
log2(1000) ≈ 10). Once you account for attention-directed sampling, much
of the initial megabit per second becomes available. A separate question
is how much of it is retained in memory, or can be attended to at any one
time, but that gets tricky very quickly: you would be forced to make too
many assumptions about the semantics of what goes on during neural
processing.
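The back-of-envelope arithmetic above can be checked in a few lines
(Python, purely illustrative; the 2 bit/s figure is Matt's claim, not a
measured value):

```python
import math

# Bits needed to identify one of N equally likely objects: log2(N).
n_objects = 1000
bits_needed = math.log2(n_objects)  # ~9.97 bits

# At the claimed channel rate of 2 bits/second, the minimum observation
# time needed to distinguish among 1000 objects would be:
rate_bits_per_s = 2
min_time_s = bits_needed / rate_bits_per_s  # ~4.98 seconds, not 1

print(f"{bits_needed:.2f} bits needed; "
      f"{min_time_s:.2f} s at {rate_bits_per_s} bit/s")
```

So a 2 bit/s channel falls well short of a one-second object
identification among 1000 alternatives, which is the point of the
objection.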

-- 
Vladimir Nesov
[EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com
