Ben Goertzel wrote:
Hi Richard,

I think you are precisely correct to say that one needs "a sufficiently
and appropriately flexible KR format (which is then really more of a
meta-format)" but I would object that when you go on to say that "a
probabilistic weighted, labeled hypergraph [etc]..." is a good way to
get that flexible KR format, you are underestimating the level at which
the blowback is going to happen.

What I mean by that is that the hypergraph idea is already locking down
many KR assumptions:  the nodes are not open to multiple choices for
internal active structure, they interact with other nodes in one
particular choice of interaction space, relationships between nodes are
encoded with relatively simple probabilistic clusters that have direct,
high level semantics (IIRC), and so on.   As far as flexible formats are
concerned, this is a thoroughly collapsed wave function.  The remaining
flexibility is minimal.

My strong feeling is that the neural net structure in the brain is
ALSO locking down many KR assumptions....  I think you are vastly
overestimating the amount of flexibility present in the brain's
implicit approach to KR...

But, since none of us knows how the brain does KR, we can't really do
much besides opine here...

I would note that if the brain is doing anything like Hebbian learning
between neural clusters, then it is roughly as constrained as Novamente's
probabilistic representation ... a Hebbian link between neural clusters is
not semantically all **that** unlike a probabilistic inheritance link
between two Novamente nodes....
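(To make the analogy concrete, here is a purely illustrative toy sketch -- my own schematic, not Novamente code, and all names in it are hypothetical. It puts a basic Hebbian co-activation update next to a count-based estimate of an inheritance-style link strength, to show that both end up tracking roughly the same statistic:)

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen a link weight in proportion to co-activation
    of the pre- and post-synaptic clusters (basic Hebbian rule)."""
    return w + lr * pre * post

def inheritance_strength(co_occurrences, occurrences_of_a):
    """Estimate P(B|A) from counts: how often contexts containing A
    also contain B -- a simple inheritance-style link strength."""
    return co_occurrences / occurrences_of_a

# Both quantities track how reliably activity (or membership) in one
# unit predicts activity (or membership) in the other.
w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (1, 1)]:
    w = hebbian_update(w, pre, post)

p = inheritance_strength(co_occurrences=3, occurrences_of_a=4)
print(round(w, 2))  # 0.8
print(p)            # 0.75
```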

But, I can't rule out the possibility that the brain is doing some
other kind of wild metalearning voodoo that is totally obscure to us
at the moment... though I really doubt it...

It is my fault for not being clearer in my earlier post, but this misses the point by one level.

There *can* be flexibility in the "run-time" behavior of the AGI - adapting itself in the same (or similar) way that a neural net adapts itself to the world. I'm sure the hypergraphs have that, and everything you say above pertains only to that level of flexibility.

But I wasn't talking about locking down KR assumptions at that level; I was talking about "design-time" locking down. I don't want the brain to do any "wild metalearning voodoo that is totally obscure to us at the moment", and I don't think it does that... I am talking about us, as scientists, not committing our models to a design that we (in effect) pull out of a hat, whilst ignoring an entire space of similar models. We need to do a systematic exploration of that space of models.

I am writing up my paper for the AGIRI workshop proceedings this week, so I will postpone further details until I have that to point to.



This was what I was saying in my AGIRI workshop presentation.

Yep, but your idea was expressed (both in your presentation and in
this email) at such a vague level of abstraction that I can't really
assess what it means...  I look forward to seeing more details at some
point...

This borders on the uncharitable: I have expressed these ideas quite clearly (clearly enough that some people have understood them very well), in previous posts, and I disagree that the central point was expressed at a vague level of abstraction. As for the workshop, the talk that I gave was interrupted by a large number of questions, comments and across-the-floor banter, to the extent that the crucial last third of what I had to say went missing. That will be remedied in the written version.


Richard Loosemore


-----
This list is sponsored by AGIRI: http://www.agiri.org/email