Bob Mottram wrote:

It's difficult to judge how impressive or otherwise such demos are, since it would be easy to produce an animation of this kind with trivial programming. What are we really seeing here? How much does the baby AGI know about fetching before it plays the game, and how much does it need to learn? Is it building any spatial models?

This little piece of video isn't really intended as an impressive demo of AI learning power. Obviously, what is being learned is really simple and could have been learned by a far simpler system than Novamente.

Novamente is a flexible system and can be run in a lot of ways. In the particular experiment reported here, the baby AI knows nothing about fetching before it plays the game, but it has built-in code for recognizing objects (it doesn't have to identify them as conglomerations of pixels or polygons), and it has some pretty high-level behaviors programmed in. For example, once it has identified an object O, it can simply call the routine "goto O" (assuming there are no obstacles in the way); and if it wants to pick up an object that is not too big, it can just call the routine "pick_up O".
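To make the setup concrete, here is a minimal sketch of how those high-level primitives might compose into a hand-coded "fetch" -- the very composition the baby AI has to discover for itself. The Agent class, its fields, and the fetch function are all illustrative stand-ins, not Novamente or AGISim APIs; only the primitive names "goto" and "pick_up" come from the description above.

```python
class Agent:
    """Toy agent exposing the two built-in primitives described above."""

    def __init__(self):
        self.position = "start"
        self.holding = None

    def goto(self, target):
        # High-level move primitive: succeeds outright, since the text
        # assumes no obstacles are in the way.
        self.position = target

    def pick_up(self, obj):
        # High-level grasp primitive: works only if the agent has already
        # moved to the object and its hands are free.
        if self.position == obj and self.holding is None:
            self.holding = obj
            return True
        return False


def fetch(agent, obj, teacher):
    """Hand-coded fetch: the sequence of primitive calls the AI must learn."""
    agent.goto(obj)
    agent.pick_up(obj)
    agent.goto(teacher)
    return agent.holding


a = Agent()
result = fetch(a, "ball", "teacher")   # agent ends up at the teacher, holding the ball
```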

We intend to run separate experiments oriented toward having NM learn to recognize objects, and to learn motor routines like picking up an object by controlling the detailed movements of its actuators. But that is not shown here.

What is cool about the software underlying this video, from the perspective of those of us in the know about Novamente, is that the learning was done by a combination of greedy pattern mining and probabilistic logic (both implemented within the NM core system). So, getting this to work was an exercise in integrating AGISim with Novamente pattern mining, probabilistic logic, perception and action control. Getting the same learning-behavior to work using Q-learning or some other simple reinforcement learning algorithm would have been a heck of a lot simpler, but less useful in terms of paving the way for more interesting learning.
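For contrast, here is roughly what the "much simpler" reinforcement-learning route looks like: tabular Q-learning on a toy fetch task. The state and action encoding below (start / at ball / holding / delivered) is invented for this sketch and has nothing to do with AGISim's actual state space; it only illustrates why plain Q-learning is easy here but doesn't generalize the way integrated pattern mining and probabilistic inference can.

```python
import random

ACTIONS = ["goto_ball", "pick_up", "goto_teacher"]

def step(state, action):
    # Toy fetch MDP. States: 0=start, 1=at ball, 2=holding ball, 3=delivered.
    if state == 0 and action == "goto_ball":
        return 1, 0.0
    if state == 1 and action == "pick_up":
        return 2, 0.0
    if state == 2 and action == "goto_teacher":
        return 3, 1.0          # reward only on successful delivery
    return state, 0.0          # ineffective action: nothing changes

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Standard epsilon-greedy tabular Q-learning."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != 3:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)        # explore
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
            s2, r = step(s, a)
            best_next = max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
# Greedy policy per non-terminal state: should recover goto_ball -> pick_up -> goto_teacher
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(3)]
```

A table of a dozen numbers suffices here precisely because the task is tiny and hand-encoded; the point in the paragraph above is that learning fetch through general-purpose pattern mining and probabilistic logic, while overkill for this game, exercises machinery that scales to harder learning problems.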

We have made Novamente's probabilistic logic system and evolutionary learning system carry out some pretty complex learning tasks in particular domains, such as mining bioinformatics data and reasoning about semantic relationships extracted from natural-language text by a parser. But in these domains, of course, the input data is specially prepared before being fed into the AI system. One of the things we're doing now is figuring out how to make the sophisticated learning methods we have in the Novamente system work effectively in the context of embodied agent control. The architecture was designed to support this, so no breakthroughs are needed, but there are loads of little adjustments to be made ... we have had to make a bunch just to get learning of "fetch" and other simple behaviors to work robustly, and will have to make a bunch more during the next 6-9 months.

-- Ben



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303