"Importantly, the new entity ¢X is not a category based on the
features of the members of the category, let alone the similarity of
such features"
Oh, nice. I hadn't seen anyone else making that point. Is this paper from 2023?
That's what I was saying. Nice. A vindication. Such categories
decouple the
Tokens inside transformers are supervised internal symbols.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M102516027fd65ca8c1f90b8b
From *A logical re-conception of neural networks: Hamiltonian bitwise part-whole architecture*
> *From hierarchical statistics to abduced symbols*
> It is perhaps useful to envision some of the ongoing developments that
> are arising from enlarging and elaborating the Hamiltonian logic net
On Mon, May 20, 2024 at 9:49 AM Rob Freeman wrote:
> Well, I don't know number theory well, but what axiomatization of
> maths are you basing the predictions in your series on?
>
> I have a hunch the distinction I am making is similar to a distinction
> about the choice of axiomatization. Which will be random. (The
> randomness demonstrated by Goedel's
On Sun, May 19, 2024 at 11:32 PM Rob Freeman wrote:
> James,
>
> My working definition of "truth" is a pattern that predicts. And I'm
> tending away from compression for that.
>
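For context on the prediction-versus-compression link the thread is debating: the standard argument for equating the two is that a probabilistic predictor defines a code, with each symbol costing -log2(p) bits under the probability p the model assigned to it, so a better predictor compresses more. A minimal sketch of that equivalence, with made-up probabilities (not numbers from the thread):

```python
import math

def code_length(probs):
    """Shannon code length in bits of a sequence under a predictive model:
    each symbol costs -log2 of the probability the model gave it."""
    return sum(-math.log2(p) for p in probs)

# Hypothetical per-symbol probabilities for the same 4-symbol sequence:
confident = [0.9, 0.9, 0.9, 0.9]  # a model whose predictions are usually right
uniform   = [0.5, 0.5, 0.5, 0.5]  # a model that predicts nothing (coin flips)

print(code_length(confident))  # ~0.61 bits
print(code_length(uniform))    # 4.0 bits
```

The better predictor needs fewer bits, which is why compression-based views treat the two as the same thing; the disagreement in the thread is over whether "a pattern that predicts" should be reduced to that.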
2, 4, 6, 8: does it mean "2n", or does it mean "10"?
Related to your sense of "meaning" in Algorithmic Information terms.
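The underdetermination here can be made concrete: infinitely many rules agree on any finite prefix. The sketch below (hypothetical rules, not from the thread) gives two programs that both reproduce 2, 4, 6, 8 yet disagree about the fifth term; algorithmic information theory breaks the tie by preferring the shorter program.

```python
def rule_a(n):
    # Hypothesis A: the n-th term is simply 2n.
    return 2 * n

def rule_b(n):
    # Hypothesis B: adds a polynomial that vanishes at n = 1..4,
    # so it matches the observed prefix but diverges afterwards.
    return 2 * n + (n - 1) * (n - 2) * (n - 3) * (n - 4)

# Both hypotheses fit the data seen so far...
assert [rule_a(n) for n in range(1, 5)] == [2, 4, 6, 8]
assert [rule_b(n) for n in range(1, 5)] == [2, 4, 6, 8]

# ...but predict different continuations.
print(rule_a(5), rule_b(5))  # 10 34
```

Nothing in the data alone selects A over B; the choice of "axiomatization" (or prior over programs) does.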
On Saturday, May 18, 2024, at 6:53 PM, Matt Mahoney wrote:
> Surely you are aware of the 100% failure rate of symbolic AI over the last 70
> years? It should work in theory, but we have a long history of
> underestimating the cost, lured by the early false success of covering half
> of the