James,

The Hamiltonian paper was nice for identifying gap-filler tasks as
decoupling meaning from pattern: "not a category based on the features
of the members of the category, let alone the similarity of such
features".

Here, for anyone else:

A logical re-conception of neural networks: Hamiltonian bitwise
part-whole architecture
E. F. W. Bowen, R. Granger, A. Rodriguez
https://openreview.net/pdf?id=hP4dxXvvNc8

"Part-whole architecture". A new thing. Though they 'share some
characteristics with “embeddings” in transformer architectures'.

So it's a possible alternate reason for the surprise success of
transformers. That's good. The field blunders about surprising itself.
But there's no theory behind it. Transformers just stumbled into
embedding representations because they looked at language. We need to
start thinking about why these things work, instead of just blithely
talking about the miracle of more data, disingenuously scaring the
world with idiotic fears about "more data" becoming conscious by
accident, or insisting, like LeCun, that the secret is different data.

But I think you're missing the point of that Hamiltonian paper if you
think this decoupling of meaning from pattern is regression. I think
the point of this, and also the category-theoretic representations of
Symbolica, and also the quantum-mechanical formalizations, is
indeterminate symbolization, even novelty.

Yeah, maybe regression will work for some things. But that ain't
language. And it ain't cognition. They are more aligned with a
different "New Kind of Science", the one touted by Wolfram: new
structure, all the time. Not regression, going backward, but novelty,
creativity.

In my understanding the point with the Hamiltonian paper is that a
"position-based encoding" decouples meaning from any given pattern
which instantiates it.

Whereas the NN presentation is talking about NNs regressing to fixed
encodings, not about an operator which "calculates energies" in real
time.
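To make the contrast concrete, here is a toy sketch (my own illustration, not the paper's actual method; the feature map, prototypes, and energy function are all made up for the example): a trained NN amounts to a fixed lookup from pattern to encoding, while an energy-style operator scores any incoming pattern against category prototypes at run time, so meaning isn't tied to one stored pattern.

```python
# Illustrative only -- a toy contrast between a fixed encoding table
# and a run-time "energy" operator. Nothing here is from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Fixed encoding: a lookup table frozen after training.
fixed_table = {"cat": rng.normal(size=4), "dog": rng.normal(size=4)}

def fixed_encode(token):
    return fixed_table[token]  # an unseen pattern simply has no encoding

# Hypothetical energy operator: a pattern belongs to whichever category
# prototype it scores lowest "energy" against, computed fresh per input.
prototypes = {"animal": np.array([1, 1, 0, 0]), "tool": np.array([0, 0, 1, 1])}

def features(token):
    # toy positional/bitwise feature map: parities of character codes
    codes = [ord(c) for c in token]
    return np.array([sum(codes) % 2, len(token) % 2,
                     codes[0] % 2, codes[-1] % 2])

def energy_classify(token):
    f = features(token)
    energies = {cat: float(np.sum((f - p) ** 2))
                for cat, p in prototypes.items()}
    return min(energies, key=energies.get)  # works for any pattern, seen or not
```

The point of the toy: `fixed_encode` fails on any pattern outside its table, while `energy_classify` assigns a category to novel patterns because the category is defined by the operator, not by the pattern.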

Unless I've missed something in that presentation. Is there anywhere
in the hour long presentation where they address a decoupling of
category from pattern, and the implications of this for novelty of
structure?

On Tue, May 21, 2024 at 11:36 PM James Bowery <jabow...@gmail.com> wrote:
>
> Symbolic Regression is starting to catch on but, as usual, people aren't 
> using the Algorithmic Information Criterion so they end up with unprincipled 
> choices on the Pareto frontier between residuals and model complexity if not 
> unprincipled choices about how to weight the complexity of various "nodes" in 
> the model's "expression".
>
> https://youtu.be/fk2r8y5TfNY
>
> A node's complexity is how much machine language code it takes to implement 
> it on a CPU-only implementation.  Error residuals are program literals aka 
> "constants".
>
> I don't know how many times I'm going to have to point this out to people 
> before it gets through to them (probably well beyond the time maggots have 
> forgotten what I tasted like) .

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M8418e9bd5e49f7ca08dfb816
Delivery options: https://agi.topicbox.com/groups/agi/subscription
