On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> Arguably, for instance, camera+lidar gives enough data for reconstruction
> of the visual scene ... note that lidar gives more accurate 3D depth
> data than stereopsis...
>

Also, for that matter, 'visual' input to an AGI needn't be raw pixels at
all, but could instead be a datastream of timestamped [depth-labeled] edges,
areas, colours, textures, etc. from fully narrow-AI pre-processed sources.
Of course, such a setup could be construed as roughly similar to the human
visual pathway from the retina on one end, through the LGN, and finally to
the layers of the primary visual cortex.
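As a rough illustration of what such a datastream might look like, here is a minimal Python sketch. All names and fields (`VisualFeature`, `timestamp_ms`, `depth_m`, etc.) are hypothetical choices for this example, not from any actual system:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: one pre-processed visual "event" an AGI might
# receive instead of raw pixels. Field names are illustrative only.
@dataclass
class VisualFeature:
    timestamp_ms: int             # when the feature was extracted
    kind: str                     # "edge", "area", "colour", "texture", ...
    depth_m: Optional[float]      # depth label, e.g. from lidar; None if unknown
    attributes: dict = field(default_factory=dict)  # kind-specific payload

def feature_stream(features):
    """Return features in time order, as a narrow-AI front end might emit them."""
    return sorted(features, key=lambda f: f.timestamp_ms)

# Example: two features arriving out of order from different pre-processors
events = [
    VisualFeature(120, "edge", 2.5, {"orientation_deg": 45}),
    VisualFeature(100, "colour", None, {"rgb": (200, 30, 30)}),
]
ordered = feature_stream(events)
```

The point of the sketch is just that downstream cognition would consume a symbolic, timestamped feature stream rather than a pixel array.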

-dave


