The engine is the associative memory; AQ is the interpolator. But it's the representation that makes any scheme live or die, and that remains entirely to be worked out.

We'll have to go over this in more detail later.


Right...

I think I understand everything you said in your prior post, but I don't feel like you have a pragmatic way to choose the representation!

I see that choices of representation can be represented as projections, assuming an initial very high-dimensional representation that consists of low-level perceptions and actuator signals. And I see that, in your scheme, projections can be encoded as numerical vectors as well...

So, e.g., using your scheme, one can represent a projection from k1 dimensions into k2 dimensions using vectors in a (k1+k2)-dimensional space (since a projection is a kind of function, and each of its input/output pairs is a single (k1+k2)-dimensional vector)....

So if one has a k1-dimensional space of perceptions and actuator signals ...

and wants to shrink it into a k2-dimensional space (so as to abstract appropriate structure) ...

then one must use projections that are encoded as vectors in a (k1+k2)-dimensional space...

but then one wants to shrink this encoding space into a k3-dimensional space (so as to abstract appropriate structure)...

but to do this one must use projections that are encoded as vectors in a (k1+k2+k3)-dimensional space...

etc.
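
Just to make the blow-up concrete, here's a toy calculation (in Python; the particular k values are invented purely for illustration):

    # Toy numbers: percept/actuator space first, then each successive abstraction.
    ks = [100, 30, 10, 5]
    dim = ks[0]
    for k_next in ks[1:]:
        enc_dim = dim + k_next   # input/output pairs live in (dim + k_next) dims
        print("shrinking %d dims -> %d dims: encoding vectors need %d dims"
              % (dim, k_next, enc_dim))
        dim = enc_dim            # the next level must abstract THIS encoding space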

But, as we ascend the hierarchy of types (through functions, functions of functions, functions of functions of functions, etc.), the sizes of the relevant function spaces would seem to get larger and larger, not smaller and smaller. So we are getting into higher and higher dimensions as we climb the ladder, and it seems less and less plausible to me that these big spaces of higher-order functions are going to be well representable using collections of input/output pairs, which of necessity are going to be DAMN sparse in the relevant function spaces...
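
(The sparsity is easy to quantify: even at the coarsest conceivable resolution of 2 distinguishable values per axis, tiling a d-dimensional space takes 2^d sample points. Continuing the toy numbers above:

    for d in (130, 140, 145):
        print("%d dims: about %e grid points at 2 values per axis" % (d, float(2**d)))

No feasible collection of input/output pairs comes anywhere near covering that.)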

Yikes!!

A more tractable path, IMO, would be to encode functions in terms of some parametrized family of nonlinear functions, and use the vectors as parameter vectors. Then each vector encodes a nonlinear function... and you can still do analogical quadrature on the parameter vectors, assuming the nonlinear functions are reasonably continuous in their parameter-dependence (not too big a Lipschitz constant, on average).
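
Concretely, a minimal sketch (I'm taking analogical quadrature here to be the parallelogram operation A:B :: C:D => D = C + (B - A); the particular function family below is an arbitrary smooth choice):

    import numpy as np

    # A parametrized family of nonlinear functions: theta is the parameter
    # vector; the family a*tanh(b*x + c) is just one arbitrary smooth pick.
    def f(theta, x):
        a, b, c = theta
        return a * np.tanh(b * x + c)

    # Analogical quadrature on parameter vectors, read as the parallelogram
    # operation A:B :: C:D  =>  D = C + (B - A).
    def aq(A, B, C):
        return C + (B - A)

    A = np.array([1.0, 2.0, 0.0])
    B = np.array([1.0, 2.0, 0.5])   # B is A with a shifted offset c
    C = np.array([2.0, 1.0, 0.0])
    D = aq(A, B, C)                 # a brand-new function, never stored explicitly

    print(f(D, np.linspace(-1, 1, 5)))  # behaves like C's function, altered
                                        # the way B differs from A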

Then, in the neural model, the nonlinear functions would be NNs, and you'd need to posit some mechanism for translating parameter vectors into NNs; but that's not hard to formulate (e.g. a recurrent NN that assumes different functional forms depending on a vector of parameters, expressed as a collection of inputs to strategically placed neurons). This is the process I conjecture to occur when hippocampus and neocortex interact in the context of working-memory operation.
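
A cartoon of what I mean (a sketch only, with arbitrary numbers; the point is just that one fixed recurrent net computes different functions depending on the parameter values clamped onto a few of its neurons):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 20
    W = rng.normal(0.0, 0.3, (N, N))   # fixed recurrent weights
    W_in = rng.normal(0.0, 1.0, N)     # input projection
    W_par = np.zeros((N, 3))
    W_par[:3] = 2.0 * np.eye(3)        # params feed 3 "strategically placed" neurons

    def run(theta, x, steps=10):
        # Same weights every time; only the clamped parameter inputs differ.
        h = np.zeros(N)
        for _ in range(steps):
            h = np.tanh(W @ h + W_in * x + W_par @ theta)
        return h.mean()                # scalar readout

    print(run(np.array([1.0, 0.0, 0.0]), 0.5))
    print(run(np.array([0.0, 1.0, 0.0]), 0.5))  # same net, different function of x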

But (like nearly all sensible roads ;-) this leads us back to Novamente again... Recall that in Novamente we have a dimensional embedding space, in which each vector corresponds to some node or link in the main AtomTable (the node-and-link table). In particular, one can have a dimensional embedding space corresponding specifically to SchemaNodes, which embody functions (and are linked with little programs, generally written automatically in a Lisp-like functional language). So, in this case, the mapping from embedded vectors into SchemaNodes is precisely *a mapping from parameter vectors into functions*. And the logic of the dimensional embedding algorithm (which uses Harel and Koren's projection algorithm) ensures that the mapping from parameter vectors to Lisp-like mini-programs is not too non-smooth.
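
In code terms the idea is just this (schematic; the names are illustrative, not actual Novamente internals):

    import numpy as np

    # Stand-ins for SchemaNodes and their Lisp-like mini-programs.
    schema_table = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]

    # One embedding vector per schema, built (in Novamente, via the Harel-Koren
    # style projection) so that nearby vectors correspond to similar functions.
    embeddings = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])

    def vector_to_schema(v):
        # The vector is an index into function space: nearest-neighbor lookup.
        return schema_table[int(np.argmin(np.linalg.norm(embeddings - v, axis=1)))]

    g = vector_to_schema(np.array([0.75, 0.25]))
    print(g(3))   # -> 6: the schema whose embedding lies nearest the query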

So I think the biggest difference between your approach and mine, in the use of numerical vectors, is in how we map vectors into functions. You want to actually represent functions directly as input/output pairs of vectors, whereas I want to look at vectors as indices into the space of functions, where there is a smooth dependency of function on index.

This seems a fundamental difference, both in terms of AI approach and in terms of hypothetical brain model.

[The other difference in our approaches is that to me, this numerical vector stuff is one mechanism among many, interoperating with the other mechanisms, rather than being the crux of it all...]

-- Ben G

