I must agree with Pei on this. Think of a reasonably large AI, say, eight light-hours across. Any belief frame guaranteed to be globally consistent must be at least eight hours out of date. So if you only act on globally consistent knowledge, your reaction time can never be less than the light-crossing time of your own diameter.
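
Just to put numbers on that bound, here is a back-of-the-envelope version in Python (the eight-light-hour diameter comes from the example above; the one-kilometre "local module" is my own, purely for contrast):

C = 299_792_458.0  # speed of light, m/s

def min_staleness_seconds(diameter_m: float) -> float:
    # A belief frame guaranteed consistent across a region of this diameter
    # must be at least one light-crossing time out of date.
    return diameter_m / C

eight_light_hours_m = 8 * 3600 * C
print(min_staleness_seconds(eight_light_hours_m) / 3600.0)  # 8.0 hours for the whole AI
print(min_staleness_seconds(1_000.0))                       # ~3.3e-6 s for a 1 km local module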


I understand that complete consistency is not possible given finite resources....

The question I was addressing is whether, given finite resources, maximum goal-achievement ability corresponds roughly to maximum consistency (where consistency is defined in terms of behaviors, not internal representations).

But that doesn't affect the fact that, given finite resources, maximum consistency is not going to be total consistency.... Generally quite far from it, in fact.

Local parts of the AI must be able to act on knowledge that other parts of the AI are not guaranteed to possess, and yet the system as a whole must still pass some kind of guarantee for not tearing itself to bits.


Agreed, of course.

Perhaps it would come in handy to, oh, say, distinguish consequential utility functions from belief distributions in your guarantees - so that you can have globally consistent utility functions, locally inconsistent knowledge, and a graph whose edges are constraints on subgoals in adjacent vertices, all of which add up to a global guarantee that one action won't stomp on another.
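
Read concretely, that suggestion might look something like the toy sketch below (my own construction, not anyone's actual design): one utility function shared verbatim everywhere, belief stores that are free to disagree locally, and a subgoal graph whose edge constraints every proposed action must satisfy against its neighbours.

from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

# One utility function, identical at every part of the system.
def global_utility(state: Dict[str, float]) -> float:
    return state.get("resources", 0.0) + 2.0 * state.get("knowledge", 0.0)

@dataclass
class Subgoal:
    name: str
    local_beliefs: Dict[str, float] = field(default_factory=dict)  # may disagree with neighbours

@dataclass
class SubgoalGraph:
    nodes: Dict[str, Subgoal] = field(default_factory=dict)
    # edge (a, b) -> constraint on the pair of actions proposed at a and b
    constraints: Dict[Tuple[str, str], Callable[[str, str], bool]] = field(default_factory=dict)

    def add_edge(self, a: str, b: str, constraint: Callable[[str, str], bool]) -> None:
        self.constraints[(a, b)] = constraint

    def action_allowed(self, node: str, action: str, proposed: Dict[str, str]) -> bool:
        # Allowed only if every edge constraint touching this node is
        # satisfied against the neighbours' currently proposed actions.
        for (a, b), ok in self.constraints.items():
            if a == node and b in proposed and not ok(action, proposed[b]):
                return False
            if b == node and a in proposed and not ok(proposed[a], action):
                return False
        return True

# Two subgoals must not both draw on the same power budget at once.
g = SubgoalGraph(nodes={"mine": Subgoal("mine"), "build": Subgoal("build")})
g.add_edge("mine", "build", lambda x, y: not (x == "use_power" and y == "use_power"))

proposed = {"build": "use_power"}
print(g.action_allowed("mine", "use_power", proposed))  # False: would stomp on the neighbour
print(g.action_allowed("mine", "dig", proposed))        # True
print(global_utility({"resources": 1.0, "knowledge": 2.0}))  # 5.0, the same answer at every node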

Novamente works sort of like this (in the design, I mean; the goal system in the current implementation is quite simplistic).

The goal system is serviced with much greater inferential attention than the general knowledge base, so that inconsistency among goals is much less likely to be significant than inconsistency among knowledge items in general. And the graph of constraints among subgoals is just a subgraph of Novamente's overall knowledge graph.

This does not absolutely guarantee that one action won't stomp on another, but it decreases the risk, certainly to far below the risk of knowledge-items "stomping on each other."
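
To illustrate the attention skew I mean, here is a toy scheduler (not Novamente code, and the 20x multiplier is an arbitrary number) that hands out consistency-checking cycles in priority order, with goal-related items weighted far above ordinary knowledge items:

import heapq
import itertools

GOAL_ATTENTION_MULTIPLIER = 20.0   # assumed value, purely illustrative

counter = itertools.count()
queue = []  # max-heap via negated priority

def add_item(name: str, base_salience: float, is_goal: bool) -> None:
    priority = base_salience * (GOAL_ATTENTION_MULTIPLIER if is_goal else 1.0)
    heapq.heappush(queue, (-priority, next(counter), name))

def next_item_to_check() -> str:
    # Pop the item that gets the next consistency-checking cycle.
    _, _, name = heapq.heappop(queue)
    return name

add_item("goal: preserve hardware", base_salience=1.0, is_goal=True)
add_item("fact: today's weather", base_salience=3.0, is_goal=False)
add_item("goal: finish task X", base_salience=0.5, is_goal=True)

print(next_item_to_check())  # "goal: preserve hardware" (1.0 * 20 = 20)
print(next_item_to_check())  # "goal: finish task X"     (0.5 * 20 = 10)
print(next_item_to_check())  # "fact: today's weather"   (3.0)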

-- Ben

