Ben Goertzel wrote:
Hi,

[Richard Loosemore wrote:]
It seems that what you are saying, though, is that a KR must involve
"probabilities in some shape or form" and "the ability of a
representation to jump up a level and represent/manipulate other
representations, not just represent the world".

Yes, and these two aspects must work together, so that the system can
sensibly apply probability estimates associated with higher-order
functions/representations...

What I am saying is not just that a KR must be capable of these two
things, but that these should be implemented at the "low level" of a
KR rather than as high-level abstractions ... i.e., the KR must give
the cognitive system an extremely easy and ready facility for using
these, so that it can use them in representing nearly everything it
has to represent, and can manipulate them (probabilities and
higher-order functions) very freely, flexibly and efficiently...

For instance, a crisp predicate logic based KR is in principle capable
of handling probabilities, by representing them as logical structures,
but it contains no low-level way of representing them, so uncertain
inference in such a KR tends to be awkward and inefficient....
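
To make the contrast concrete, here is a throwaway Python sketch
(every name in it -- Atom, conjoin, the reified "probability" term --
is invented for this message, and the combination rule is deliberately
the crudest one possible):

# Crisp KR: a probability can only be reified as yet another logical
# term, and special-purpose axioms are then needed to combine such
# terms -- the awkwardness I am pointing at.
crisp_fact = ("probability", ("raining",), ("ratio", 8, 10))  # p = 8/10

# KR with low-level probabilities: every atom carries a strength
# natively, and the inference machinery combines strengths as a
# matter of course.
class Atom:
    def __init__(self, name, strength):
        self.name = name
        self.strength = strength  # a value in [0, 1]

def conjoin(a, b):
    # One simple combination rule (an independence assumption); a real
    # system would use something more principled.
    return Atom("(%s AND %s)" % (a.name, b.name), a.strength * b.strength)

raining = Atom("raining", 0.8)
cold = Atom("cold", 0.6)
print(conjoin(raining, cold).strength)  # 0.48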

Similarly, most NN architectures contain no explicit way to take
little NNs and encode them as inputs to other NNs.  So implementing
complex higher-order functions in the context of most NN architectures
is not really feasible, even though it is in principle possible by
creating appropriate networks.
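
Again just to make it concrete, here is a toy sketch (all names
invented for this message) of what "higher-order" means when networks
are first-class values, as they are in a functional language:

def linear_net(weights):
    # A tiny "network": just a function from an input vector to a score.
    def net(inputs):
        return sum(w * x for w, x in zip(weights, inputs))
    return net

def meta_net(inner_net, probes):
    # A "higher-order network": its input is another network, which it
    # characterizes by probing it.  This is trivial with first-class
    # functions, but in a vanilla NN the inner network would somehow
    # have to be flattened into a fixed-length activation vector first.
    return max(inner_net(p) for p in probes)

small = linear_net([0.5, -0.2, 1.0])
print(meta_net(small, [[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # prints 1.0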

So, actually, I would say the very simple criteria I mentioned rule
out nearly all KRs in currency in the AI field ;-)

Having said that, I do think they are somewhat obvious, from a general
cognitive systems point of view, yeah...

But you seemed to be saying something much stronger when you used the
phrase "... it must be sensibly viewable as a probabilistic logic based
functional programming language."  I can think of huge numbers of ways
to satisfy the weak claims, above, but this latter is just one specific
choice, and I see nothing compelling me to accept anything remotely like
a probabilistic logic based functional programming language.

My claim is that any KR that implements probabilities and higher-order
functions at a sufficiently low level that they can be flexibly and
adaptively deployed as needed to create new specialized
representations for new situations -- will be relatively easily
**translatable** into the form of a probabilistic logic based
functional programming language...

For example, I conjecture that the KR implemented by neuronal
assemblies and networks thereof in the brain will be easily and
cleanly translatable into such a form, once the brain is really well
understood...
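
To give the flavor of what such a translation might look like at the
very smallest scale -- a toy invented for this message, not a claim
about any real system -- one could imagine reading a single weighted
connection off as a weighted implication:

import math

def link_to_rule(src, dst, weight):
    # Squash the connection weight into [0, 1] with a logistic function
    # (a placeholder choice, nothing more) and emit a weighted rule.
    strength = 1.0 / (1.0 + math.exp(-weight))
    return "%s => %s  <strength %.2f>" % (src, dst, strength)

print(link_to_rule("sees_apple", "expects_pie", 2.0))
# sees_apple => expects_pie  <strength 0.88>

The real translation would of course operate on assemblies and their
dynamics, not single links, but the target formalism is the same.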

This translatability claim is certainly not an empty one.  For
instance, it is different from the claim that

--- the KR implemented by neuronal assemblies and networks thereof in
the brain will be easily and cleanly translatable into classical
Prolog

--- the KR implemented by neuronal assemblies and networks thereof in
the brain will be easily and cleanly translatable into crisp predicate
logic

--- the KR implemented by neuronal assemblies and networks thereof in
the brain will be easily and cleanly translatable into a giant network
of feedforward NNs and Kohonen nets

etc.

It is a specific claim about what sort of KRs are going to be most
useful for general intelligence....  It is not a mathematically
rigorous claim, though, because I have not formally defined "easily
and cleanly translatable"...


Ben,

But what you have done here is cite "name brand" knowledge
representation schemes, like [vanilla] neural nets, Prolog, crisp
predicate logic, giant networks of feedforward NNs and Kohonen nets,
etc. ... as exemplified by your comment:

So, actually, I would say the very simple criteria I mentioned rule
out nearly all KRs in currency in the AI field ;-)

I have never really considered the specific KRs - the ones that people
have invented and given names to - to be the only ones worth
targeting; in fact, the ones you mention are all, as far as I am
concerned, just dead horses with a lot of flog marks on 'em.

What we should really be considering is an entire *space* of knowledge
representation formalisms, of which the labelled ones (vanilla NNs,
crisp predicate logic, etc) are just a handful of isolated examples.

What do the other knowledge representations look like?  Obviously the
space contains an infinite number of them, but the ones of greatest
interest to me would be those that could be described as "generalized
neural nets":  loosely inspired by the NN paradigm, but with
generalizations that make them differ from vanilla NNs in one or more
of several dimensions.  You could:

  -  generalize the "activation" concept so it is a vector (of
integers, reals, complex numbers, or whatever; see the sketch after
this list);

  -  generalize the "relaxation mechanism" so it works simultaneously
along several dimensions, or works in a unitary way on a vector of
activation, or a combination of both of these, or whatever;

  -  generalize the topology, or architecture, of the network, so it
is not a single, unified space but instead has specialized networks
that handle separate functions or separate modalities (this is one of
the more obvious generalizations, implicit in what the neuroscience
folks do, and implicit in the model of anyone who has several NNs in
their system that are supposed to handle different domains
independently);

  -  generalize the behavior of the neurons themselves, to make them
"virtual" rather than real ... by which I mean, build a system with
neuron-like elements that can do weird things like wander around in a
space, like molecules (this idea is at the core of my own approach);

  -  generalize the learning mechanisms so they perform complex,
structured operations on neurons, and allow learned operations to be
compiled down and spread across the system;

  -  ... and so on.
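
To make a couple of these generalizations concrete, here is a toy
Python sketch (all names invented for this message; this is not my
actual system, nor anyone else's): units carry vector activations, are
"virtual" in the sense of wandering around a space like molecules, and
relax toward their spatial neighbours:

import random

class VirtualNeuron:
    def __init__(self, dim):
        # Vector-valued activation rather than a single scalar.
        self.activation = [random.random() for _ in range(dim)]
        # A position in a 2-D space that the unit wanders through.
        self.position = [random.uniform(0, 1), random.uniform(0, 1)]

    def wander(self, step=0.05):
        # Molecule-like motion through the space.
        self.position = [p + random.uniform(-step, step)
                         for p in self.position]

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a.position, b.position)) ** 0.5

def relax(units, radius=0.3):
    # One relaxation pass: each unit pulls its activation vector toward
    # the average of its current spatial neighbours' vectors.
    for u in units:
        near = [v for v in units if v is not u and distance(u, v) < radius]
        if not near:
            continue
        mean = [sum(v.activation[i] for v in near) / len(near)
                for i in range(len(u.activation))]
        u.activation = [(a + m) / 2 for a, m in zip(u.activation, mean)]

units = [VirtualNeuron(dim=4) for _ in range(20)]
for _ in range(10):
    for u in units:
        u.wander()
    relax(units)

Nothing here is meant seriously as a design; the point is only that
each of the dimensions above can be varied independently, so the space
of such systems is combinatorially huge.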

I would insist that all of these systems really are knowledge
representations (not just "architectures" within which a KR resides),
because they can be used in such a way that no prior commitment to the
format of the representation is made, with the KR being allowed to
grow from the behavior of the system.  But if the KR emerges in that
way, then the architectural/functional design is implicitly defining
the KR, and hence the two are really one and the same thing.

For example, in some of these systems the [generalized] neurons are
allowed to develop, but when they have finished developing, their
interpretation is no longer a straightforward matter of pointing to
one and saying "this represents the concept of [APPLE]", because that
particular unit may tangle up information both about [APPLE] *and*
about some aspect of what to expect when an apple is seen (it is a
combination of a passive representation of [APPLE] with some
expectations about the activity of pie making, or about how to be
cautious about picking up fallen apples that might have wasps inside
them, shall we say).  If this is not clear, just go back to the
original connectionist work:  those researchers pointed out that the
interpretation of what the units "meant" was post hoc, and sometimes
all but impossible.  For those folks, a KR emerged; it was not
imposed.

Fabulously complex, messy stuff, of course.  But this is what you get if
you consider a knowledge representation as something that grows out of
the functioning of a system designed to capture knowledge, rather than
as a formalism that a particular group of researchers write down in the
fond hope that it will become the nucleus of a system that will capture
knowledge.

And to return to the original question:  within the vast space
encompassed by all these possibilities, there is a (still vast) subspace
in which the KRs have the ability to encode contingencies (pretty much
all of them would do that) and do some sort of meta-level operations on
concepts.  I have always taken it for granted that this subspace is the
only interesting one.

But within that subspace, is it likely that all the systems will be
isomorphic to "a probabilistic logic based functional programming
language"?

No way!  At least, I see no reason why they would be, and no way
anyone could come up with a reason, given the vast number of possible
variations out there.


Richard Loosemore.


P.S.  Hey, I only wrote all this detail because I knew that if I just
said "there are more KRs out there than the ones you mentioned", without
pointing to examples, someone would get on my case again.  ;-)