Bryan Bishop wrote:
> On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
>> The complaint is not "your symbols are not connected to experience".
>> Everyone and their mother has an AI system that could be connected to
>> real-world input.  The simple act of connecting to the real world is
>> NOT the core problem.
>
> Are we sure? How much of the real world are we able to get into our AGI models anyway? Bandwidth is limited, much more limited than in humans and other animals. In fact, it might be equivalent to worm tech.
>
> To do the calculation, would I just have to check how many neurons a worm has, how many of those are sensory, and then make rough information-theoretic estimates of the minimum and maximum amounts of information processing the worm's sensorium could be doing?
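Roughly that, yes. As a sketch of the kind of calculation Bryan describes: the 302-neuron count for C. elegans is well established, but the sensory-neuron count is approximate, and the firing rates and bits-per-spike figures below are pure placeholders chosen to bracket the estimate, not measurements.

```python
# Back-of-envelope bounds on a worm's sensory information rate.
# Assumed figures (illustrative, not from the original post):
#   - C. elegans has 302 neurons in total (well-established count);
#   - roughly 60-80 of those are sensory (approximate);
#   - signalling rates and bits-per-event are guesses for illustration,
#     and the "spike" framing is itself a simplification for this worm.

def sensory_bandwidth_bits_per_sec(n_sensory, rate_hz, bits_per_event):
    """Crude bound: treat every sensory neuron as an independent channel."""
    return n_sensory * rate_hz * bits_per_event

# Low estimate: fewer sensory neurons, slow signalling, little info per event.
low = sensory_bandwidth_bits_per_sec(n_sensory=60, rate_hz=1, bits_per_event=0.1)

# High estimate: more sensory neurons, faster signalling, richer coding.
high = sensory_bandwidth_bits_per_sec(n_sensory=80, rate_hz=10, bits_per_event=2)

print(f"rough sensory bandwidth: {low:.0f} to {high:.0f} bits/s")
# prints: rough sensory bandwidth: 6 to 1600 bits/s
```

Even the generous upper bound is tiny next to a camera feed, which is what makes the "worm tech" comparison for AGI input bandwidth worth taking seriously.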

I'm not quite sure where this is at... but the context of this particular discussion is the notion of 'symbol grounding' raised by Steven Harnad. I am essentially talking about how to solve the problem he described, and about what exactly that problem was. Hence there is a lot of background behind this one, which might make it confusing if you don't know it.


Richard Loosemore


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=65202116-6cf6d0
