Thanks for sharing this paper.

Positively brilliant! I think this is in line with quantum thinking and
holds great promise for quantum computing. It relates to a concept advanced
by my mentor and me, namely gestalt management. Ultimately, we endeavor to
represent relativistic, multiversal realities as faithfully as possible.

This work significantly increases the probability of success for my Po1
theory. The day will come when emergent requirements for locating "needles
in data haystacks" near-instantaneously place unrelenting demands on these
types of networks. I think this type of architecture, once fully matured,
would be perfectly suited to that.

On Wed, May 22, 2024 at 6:35 AM Rob Freeman <chaotic.langu...@gmail.com>
wrote:

> James,
>
> The Hamiltonian paper was nice for identifying gap-filler tasks as
> decoupling meaning from pattern: "not a category based on the features
> of the members of the category, let alone the similarity of such
> features".
>
> Here, for anyone else:
>
> A logical re-conception of neural networks: Hamiltonian bitwise
> part-whole architecture
> E. F. W. Bowen, R. Granger, A. Rodriguez
> https://openreview.net/pdf?id=hP4dxXvvNc8
>
> "Part-whole architecture". A new thing. Though they 'share some
> characteristics with “embeddings” in transformer architectures'.
>
> So it's a possible alternative explanation for the surprising success of
> transformers. That's good. The field blunders about, surprising itself.
> But there's no theory behind it. Transformers just stumbled into
> embedding representations because they looked at language. We need to
> start thinking about why these things work, instead of just blithely
> talking about the miracle of more data, disingenuously scaring the
> world with idiotic fears about "more data" becoming conscious by
> accident, or insisting, like LeCun, that the secret is different data.
>
> But I think you're missing the point of that Hamiltonian paper if you
> think this decoupling of meaning from pattern is regression. I think
> the point of it, and of Symbolica's category-theoretic representations,
> and of quantum-mechanical formalizations, is indeterminate
> symbolization, even novelty.
>
> Yeah, maybe regression will work for some things. But that ain't
> language. And it ain't cognition. They are more aligned with a
> different "New Kind of Science", the one touted by Wolfram: new
> structure, all the time. Not regression, going backward, but novelty,
> creativity.
>
> In my understanding, the point of the Hamiltonian paper is that a
> "position-based encoding" decouples meaning from any given pattern
> which instantiates it.
>
> The NN presentation, by contrast, is talking about NNs regressing to
> fixed encodings, not about an operator which "calculates energies" in
> real time.
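>
> Roughly, the distinction I have in mind is this (a toy sketch only, with
> made-up names and vectors; it is not the paper's actual operator):
>
>     import numpy as np
>
>     rng = np.random.default_rng(0)
>
>     # Fixed encoding: a category's meaning IS a stored vector,
>     # frozen by regression; one pattern per category.
>     fixed_embedding = {"cat": rng.normal(size=8), "dog": rng.normal(size=8)}
>
>     def lookup(token):
>         return fixed_embedding[token]
>
>     # Operator view: meaning is an energy computed over a part-whole
>     # configuration at query time, so the same category can be
>     # realized by patterns never seen before.
>     def energy(parts, whole):
>         # Toy "Hamiltonian": how well the parts compose into the whole.
>         return float(np.sum(parts, axis=0) @ whole)
>
>     parts = rng.normal(size=(3, 8))  # any novel set of parts
>     whole = rng.normal(size=8)
>     print(energy(parts, whole))      # scored in real time, not looked up
>
> The decoupling is in the second half: nothing ties the category to the
> particular vectors that happen to instantiate it.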
>
> Unless I've missed something in that presentation. Is there anywhere in
> the hour-long presentation where they address the decoupling of category
> from pattern, and its implications for novelty of structure?
>
> On Tue, May 21, 2024 at 11:36 PM James Bowery <jabow...@gmail.com> wrote:
> >
> > Symbolic regression is starting to catch on, but, as usual, people
> > aren't using the Algorithmic Information Criterion, so they end up
> > with unprincipled choices on the Pareto frontier between residuals and
> > model complexity, if not unprincipled choices about how to weight the
> > complexity of the various "nodes" in the model's "expression".
> >
> > https://youtu.be/fk2r8y5TfNY
> >
> > A node's complexity is how much machine-language code it takes to
> > implement the node on a CPU-only implementation. Error residuals are
> > program literals, a.k.a. "constants".
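> >
> > Concretely, the scoring looks something like this (a minimal sketch;
> > the per-node byte costs and the residual precision are illustrative
> > assumptions, not measured values):
> >
> >     import math
> >
> >     # Assumed per-node costs in bytes of machine code; real numbers
> >     # would come from an actual CPU-only implementation.
> >     NODE_COST = {"var": 2, "add": 4, "mul": 4, "sin": 16, "exp": 16,
> >                  "const": 10}
> >
> >     def model_bits(nodes):
> >         # Complexity of the expression: the code needed to run it.
> >         return 8 * sum(NODE_COST[n] for n in nodes)
> >
> >     def residual_bits(residuals, precision=1e-3):
> >         # Residuals are program literals: each one costs the bits
> >         # needed to write it down to the chosen precision.
> >         return sum(math.log2(1 + abs(r) / precision) for r in residuals)
> >
> >     def description_length(nodes, residuals):
> >         # Algorithmic Information Criterion, approximated: total bits
> >         # = model bits + residual-literal bits. Smaller is better.
> >         return model_bits(nodes) + residual_bits(residuals)
> >
> >     # Two Pareto-frontier candidates: simple but loose vs complex but
> >     # tight. One criterion picks between them in a principled way.
> >     loose = description_length(["var", "mul", "const"], [0.9, -1.1, 0.8])
> >     tight = description_length(["var", "sin", "exp", "mul", "const"],
> >                                [0.01, -0.02, 0.015])
> >     print(loose, tight)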
> >
> > I don't know how many times I'm going to have to point this out to
> > people before it gets through to them (probably well beyond the time
> > maggots have forgotten what I tasted like).
