On Fri, Apr 5, 2013 at 3:54 PM, Mike Tintner <[email protected]> wrote:
> Matt:  DeSTIN and Google's cat face recognizer don't do any of this. They
> just process the whole image at once.
>
> Thanks for reply. My impression - and what I was asking about - is that ALL
> current approaches process the whole image at once, not just Ben's.

Currently I'm not aware of any vision systems that model eye
movements, and I don't know why not. For example, security cameras
would be far more useful if they could recognize people's faces and
zoom in on them.

> In which case, they miss the most important dimension of vision, which is
> that it is active/selective - as well as passive/reflective. Both
> unconscious and conscious minds choose together what to look at in a scene
> (or a face). And there are always new ways and new things to look at and
> notice in any scene - as the visual arts endlessly demonstrate.

Modeling eye movements is a complex algorithm, like everything else
in AGI. The new phones from Samsung will be able to track eye
movements. That will help us build better models, not just of how we
move our eyes but of what we know. A phone could display text on the
screen and know whether or not you read it, thereby learning your
interests. I think this will lower the cost of AGI by making personal
data easier to collect.
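The active, selective choice of where to look next can be sketched as a
saliency-driven fixation loop with "inhibition of return" (suppressing
regions already visited). This is only a toy illustration of the idea, not
DeSTIN's or any deployed system's method; the grid values and the `radius`
parameter are made up:

```python
# Toy model of active vision: repeatedly fixate the most salient cell of a
# 2D saliency grid, then suppress a neighborhood around each fixation so
# the "eye" moves on ("inhibition of return"). All numbers are illustrative.

def fixations(saliency, n, radius=1):
    """Greedily select n (row, col) fixation points from a 2D saliency grid."""
    sal = [row[:] for row in saliency]  # work on a copy
    rows, cols = len(sal), len(sal[0])
    picks = []
    for _ in range(n):
        # Find the currently most salient cell.
        r, c = max(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: sal[rc[0]][rc[1]])
        picks.append((r, c))
        # Inhibition of return: knock out a neighborhood around the pick.
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    sal[rr][cc] = float("-inf")
    return picks

demo = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.8],
    [0.0, 0.1, 0.3, 0.2],
]
print(fixations(demo, 2))  # -> [(1, 1), (2, 3)]
```

Real saliency maps would come from learned features rather than a
hand-written grid, but the selection loop is the part that current
whole-image systems omit.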


-- Matt Mahoney, [email protected]

