Russell Wallace wrote:
On 3/11/07, Ben Goertzel wrote:

    YES -- anything can be represented in logic.  The question is whether
    this is a useful representational style, in the sense that it matches
    up with effective learning algorithms!!!  In some domains it is, in
    others not.


"Represented in logic" can mean a number of different things, just checking to see if everyone's talking about the same thing.

Consider, say, a 5 megapixel image. A common programming language representation would be something like:

const int n = 5 * 1000 * 1000;              // 5 megapixels
struct point { float red, green, blue; };   // one RGB triple per pixel
point *image = new point[n];

For AGI purposes, if that's our only representation, then obviously we lose. So we do need to take at least some steps in the direction of a unified representation.

What should that representation look like? The answer is that logic, or some variant thereof, is as good as we're going to get. So we might have something like:

[Meaning #1]
color(point(1, 2), red) = 3
...another 14,999,999 similar assertions
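
To make that concrete, here is a minimal sketch (in Haskell; the type and function names are purely illustrative, not a proposal) of the same image re-expressed as explicit assertions:

-- A minimal sketch of Meaning #1: the raw array turned into explicit
-- logical assertions, one per pixel per channel.
data Channel = Red | Green | Blue deriving (Show)

-- color(point(x, y), channel) = value
data Assertion = Color (Int, Int) Channel Float deriving (Show)

-- One assertion per pixel per channel: 3 * 5,000,000 = 15,000,000 in all.
pixelAssertions :: Int -> Int -> ((Int, Int) -> (Float, Float, Float)) -> [Assertion]
pixelAssertions width height pixelAt =
  [ Color (x, y) ch v
  | x <- [0 .. width - 1]
  , y <- [0 .. height - 1]
  , let (r, g, b) = pixelAt (x, y)
  , (ch, v) <- [(Red, r), (Green, g), (Blue, b)]
  ]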

Now we can build up further layers of information, such as:

[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc
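
In the same illustrative style, those higher layers might be sketched (again, nothing committal about the particular predicates; the names are mine) as:

-- The Meaning #2 layers expressed in the same style as the pixel assertions.
newtype LineId   = LineId Int   deriving (Show)
newtype ObjectId = ObjectId Int deriving (Show)

data Fact
  = Edge (Int, Int) Float            -- edge(point(x, y), orientation(theta))
  | EstimatedDepth (Int, Int) Float  -- estimated-depth(point(x, y), d)
  | Convex LineId                    -- convex(line#n)
  | Chair ObjectId                   -- chair(object#n)
  deriving (Show)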

What about the algorithms that generate those further layers of information? Well, typical image-processing code is written in something like C, but again, for AGI purposes, if we do that we lose. So,

[Meaning #3]
we want to create (whether by hand, by automated learning, or most likely by some combination of the two) algorithms in a more logic-oriented language, something that (at least superficially) looks more like Prolog or Haskell than C.
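
As a hedged sketch of what that could look like, reusing the illustrative Assertion and Fact types from the sketches above (the gradient test and the 0.5 threshold are stand-ins, not a claim about how real edge detection should work):

-- Meaning #3 in miniature: the layer-generating algorithm written as a
-- declarative rule over assertions, rather than as imperative C.
intensityAt :: [Assertion] -> (Int, Int) -> Float
intensityAt facts p = sum [ v | Color q _ v <- facts, q == p ] / 3

edgeRule :: [Assertion] -> [Fact]
edgeRule facts =
  [ Edge (x, y) 0.0                            -- orientation omitted in this sketch
  | Color (x, y) Red _ <- facts                -- one visit per pixel
  , let dx = intensityAt facts (x + 1, y) - intensityAt facts (x, y)
  , abs dx > 0.5                               -- hypothetical threshold
  ]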

A classic mistake is to slide a step further and assume

[Meaning #4]
the application of those algorithms will be pure deduction, so that the runtime engine can be purely a theorem prover. We now know that doesn't work: at best you just run into exponential search, and you need procedural/heuristic components. So leave that aside.

What about the physical representation of all this? Well, suppose the general logic database stores things as a set of explicit sentences; then

[Meaning #5]
we can use the same database to store all kinds of data as explicit sentences. Do we want to? For prototyping, sure. For production code, we'll eventually want to do things like _physically_ storing image data as an array of floats, because there are a couple of orders of magnitude of performance improvement to be had that way - only a constant factor, to be sure, but enough to eventually be worth having. But that has no bearing on the _logical_ side of things - the semantics should still be just as if everything were stored as explicit sentences using the general database engine.
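
As a sketch of that split (Data.Array is standard Haskell; everything else here, including the packed layout, is illustrative), the same logical query can be answered from either physical representation:

import Data.Array (Array, (!))

data PixelStore
  = Sentences [Assertion]             -- prototype: explicit sentences
  | Packed Int Int (Array Int Float)  -- production: width, height, flat floats

-- Semantically this is just "look up color(point(x, y), red)" either way.
redAt :: PixelStore -> (Int, Int) -> Float
redAt (Sentences facts) p      = head [ v | Color q Red v <- facts, q == p ]  -- assumes the assertion exists
redAt (Packed w _ arr) (x, y)  = arr ! (3 * (y * w + x))  -- assumes 0-based, interleaved RGB storage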

So the answer to whether standardizing on logical representation is good is:

Meanings #1, 2, 3 - yes.
Meaning #4 - no.
Meaning #5 - for prototyping sure, not for production code.

(There are probably some more I'm overlooking, but that's a start.)

I'm not sure if you're just summarizing what someone would mean if they were talking about 'logical representation,' or advocating it.

Either way, from my point of view, what you have listed here is a dead end.

I say that because (a) I can think of other representations that do not fit that way of thinking, and (b) I think it is a terrible mistake (an AI-field-crippling mistake) to start out by making an early commitment to a particular kind of representation.

For example, what about replacing this:

[Meaning #2]
edge(point(4, 5), orientation(3.7))
estimated-depth(point(87, 9), 120.4)
convex(line#17)
chair(object#33)
...etc

with this:

[Meaning #2(A)]
regularity_A1, regularity_A27, regularity_A81 ....
regularity_B79, regularity_B34, regularity_B22 ....
....


Where the "regularities" are just elements generated by a "regularity finding mechanism"? The "A" and "B" labels signify that there are multiple levels, so the "B" regularities are formed out of clusters of "A" level regularities, and so on. A regularity is just a pattern of some kind.

So what is the "regularity finding mechanism"? Well, it is not well defined at the outset: it is up to us to investigate and discover what regularity finding mechanisms actually produce useful elements. We should start out *agnostic* about what those mechanisms are. All kinds of posibilities exist, if we are open minded enough about what to consider.

Not only that, but we can allow that there is not one, but a cluster of such mechanisms, all of which can operate simultaneously to refine the structure of the elements.

And even more: do not have a fixed set of regularity-finding mechanisms, but give the system an initial set that it can learn to augment. So later on the system has extra ways to build new elements that were not available at an early stage.

Finally, alongside the learning mechanisms that build these regularities, there are mechanisms that *use* all of this stuff (no point in acquiring a knowledge system if you are not going to do anything with it, eh?). These are the "thinking" mechanisms, and like the learning mechanisms they start as a small group of innate mechanisms; but over time more of them can be built, so that at higher levels of this representation scheme (the regularity_Qnn level, e.g.) there are mechanisms for grabbing and using the regularities at that level.

Interestingly, some of these acquired, high-level thinking mechanisms work in an approximate way, so most of the time they will try to do their job on the elements at their level as if they didn't need the rest of the system, but when necessary they can call upon more basic processes at lower levels.
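
Purely as an illustration of the bookkeeping involved, and deliberately leaving opaque both what a regularity is and what the mechanisms are (every name below is a placeholder of mine, not a commitment), the shape of such a system might be sketched as:

-- Multiple levels of regularities, a growable set of regularity-finding
-- mechanisms, and thinking mechanisms that can decline and defer to
-- more basic lower-level processes.
data Regularity = Regularity
  { regLevel :: Int            -- 0 for the "A" level, 1 for "B", and so on
  , regParts :: [Regularity]   -- the lower-level regularities it clusters
  }

type Finder  = [Regularity] -> [Regularity]        -- proposes elements for the next level up
type Thinker = [Regularity] -> Maybe [Regularity]  -- Nothing = fall back to lower levels

data System = System
  { levels   :: [[Regularity]]  -- the "A" level first, then "B", and so on
  , finders  :: [Finder]        -- initial innate set; learning can extend it
  , thinkers :: [Thinker]       -- likewise a small innate group that grows over time
  }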

Where do "logical representations" sit in all of this? In the case of human systems, they appear to be an acquired representation, in the sense that they are a Level-Q (say) regularity, which uses aquired thinking mechanisms (laws of logical reasoning) specialized for that Q level. They seem to be independent (logic sometimes looks like a formal system), but in practice they get a lot of quiet help from lower levels (from, among other things, the human equivalent of "inference control mechanisms", without which any AI logical reasoning engine is just a big piece of cycle-gobbling junk).

And if that really is the proper place of logical representation, what good would it do us to start out by throwing away our neutrality on the "what is a regularity" question and committing straight away to the idea that a regularity is a logical atom, and that the thinking mechanisms are a combination of [logical inference] + [inference control mechanism]? To me, that seems totally ad hoc.



All of this is offered as a consistent alternative to the idea of logical representations as a basic representation for AI. It is also a good sketch of how the human cognitive system does the job.

In case anyone is bothering to read this far, I should say that the above is a super-condensed version of the second chapter of a painfully huge textbook that I have been working on, sporadically, for the last couple of years. So apologies if it is too compact to make sense: I am too close to the material to be able to see where it is confusing, but I also do not want to release the full text until it is in proper shape.



Richard Loosemore.

