James,

Not sure whether all that means you think category theory might be
useful for AI or not.

Anyway, what moved me to post those examples from Rich Hickey and
Bartosz Milewski in my first post to this thread was your comment that
ideas of indeterminate categories might annoy what you called 'the
risible tradition of so-called "type theories" in both mathematics and
programming languages'. I see the Hickey and Milewski refs as examples
of ideas of indeterminate category entering computer programming
theory too.

Whether posted on the basis of a spurious connection or not, thanks
for the Granger HNet paper. That's maybe the most interesting paper
I've seen this year. As I say, it's the only reference I've seen,
other than my own work, presenting the idea that relational categories
liberate a category from any given pattern instantiating it. Which I
see as distinct from regression.

The ideas of relational category in that paper might really move the
needle for current language models.

That is in contrast to the older "grammar of mammalian brain capacity"
paper, which I frankly think is likely a dead end.

Real time "energy relaxation" finding new relational categories, as in
the Hamiltonian Net paper, is what I am pushing for. I see current
LLMs as incorporating a lot of that power by accident. But because
they still concentrate on the patterns, and not the relational
generating procedure, they do it only by becoming "large". We need to
understand the (relational) theory behind it in order to jump out of
the current LLM "local minimum".
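
To make "energy relaxation" concrete, here is a toy Hopfield-style
sketch in Haskell (my own illustration, not the HNet algorithm; the
weights are invented). A state of +/-1 units settles into a minimum of
the energy E(s) = -1/2 * sum_ij w_ij s_i s_j. The stable patterns are
not stored anywhere as patterns; they are whatever configurations the
relations (the weights) happen to stabilize, which is the sense in
which I mean categories liberated from the patterns instantiating them.

  import Data.List (foldl')

  type State   = [Int]     -- each unit is +1 or -1
  type Weights = [[Int]]   -- symmetric, zero diagonal

  -- One asynchronous sweep: update each unit from its neighbours.
  sweep :: Weights -> State -> State
  sweep w s0 = foldl' step s0 [0 .. length s0 - 1]
    where
      step s i =
        let field = sum (zipWith (*) (w !! i) s)
            s_i   = if field >= 0 then 1 else -1
        in take i s ++ [s_i] ++ drop (i + 1) s

  -- Relax until the state stops changing: a fixed point of the
  -- relations, i.e. a "category" nobody stored explicitly.
  relax :: Weights -> State -> State
  relax w s = let s' = sweep w s in if s' == s then s else relax w s'

  main :: IO ()
  main = do
    let w = [ [ 0,  1,  1, -1]
            , [ 1,  0,  1, -1]
            , [ 1,  1,  0, -1]
            , [-1, -1, -1,  0] ]      -- invented weights
    print (relax w [1, -1, 1, -1])    -- settles to [1,1,1,-1]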

On Thu, May 23, 2024 at 11:47 PM James Bowery <jabow...@gmail.com> wrote:
>
>
> On Wed, May 22, 2024 at 10:34 PM Rob Freeman <chaotic.langu...@gmail.com> 
> wrote:
>>
>> On Wed, May 22, 2024 at 10:02 PM James Bowery <jabow...@gmail.com> wrote:
>> > ...
>> > You correctly perceive that the symbolic regression presentation is not to 
>> > the point regarding the HNet paper.  A big failing of the symbolic 
>> > regression world is the same as it is in the rest of computerdom:  Failure 
>> > to recognize that functions are degenerate relations and you had damn well 
>> > better have thought about why you are degenerating when you do so.  But 
>> > likewise, when you are speaking about second-order theories (as opposed to 
>> > first-order theories), such as Category Theory, you had damn well better have 
>> > thought about why you are specializing second-order predicate calculus 
>> > when you do so.
>> >
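
(An aside on "functions are degenerate relations", since it is easy to
make concrete. A relation on A x B is any set of pairs; a function is
the special case relating each A to at most one B, so collapsing a
relation into a function forces exactly the choice being warned about
here. A small Haskell sketch, names my own:

  import qualified Data.Set as Set
  import           Data.Set (Set)

  -- A binary relation is just a set of pairs.
  type Rel a b = Set (a, b)

  -- A relation degenerates to a function when it is single-valued.
  singleValued :: (Ord a, Ord b) => Rel a b -> Bool
  singleValued r =
    all (\x -> Set.size (Set.filter ((== x) . fst) r) <= 1)
        (Set.map fst r)

  -- Every function yields a relation (its graph on a chosen domain)...
  graph :: (Ord a, Ord b) => (a -> b) -> [a] -> Rel a b
  graph f dom = Set.fromList [ (x, f x) | x <- dom ]

  -- ...but not every relation is a function: 12 relates to 2, 3, 4, 6.
  divisors :: Rel Int Int
  divisors = Set.fromList
    [ (n, d) | n <- [1 .. 12], d <- [2 .. n - 1], n `mod` d == 0 ]

Keeping, say, only each number's largest divisor would make divisors
single-valued, and would throw information away; that loss is the
"degeneration".)
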
>> > Not being familiar with Category Theory I'm in no position to critique 
>> > this decision to specialize second-order predicate calculus.  I just 
>> > haven't seen Category Theory presented as a second-order theory.  Perhaps 
>> > I could understand Category Theory, and thence where the enthusiasm for 
>> > Category Theory comes from, if someone did so.
>> >
>> > This is very much like my problem with the enthusiasm for type theories in 
>> > general.
>>
>> You seem to have an objection to second order predicate calculus.
>
>
> On the contrary; I see second order predicate calculus as foundational to any 
> attempt to deal with process which, in the classical case, is computation.
>
>> Dismissing category theory because you equate it to that. On what
>> basis do you equate them? Why do you reject second order predicate
>> calculus?
>
>
> I don't "dismiss" category theory.  It's just that I've never seen a category 
> theorist describe it as a second order theory.   Even in type theories 
> covering computation one finds such phenomena as the Wikipedia article on 
> "Type theory as a logic" lacking any reference to "second order".
>
> If I appear to "equate" category theory and second order predicate calculus 
> it is because category theory is a second order theory.  But beyond that, I 
> have an agenda related to Tom Etter's attempt to flesh out his theory of 
> "mind and matter" which I touched on in my first response to this thread 
> about fixing quantum logic.  An aspect of this project is the proof that 
> identity theory belongs to logic in the form of relative identity theory.  My 
> conjecture is that it ends up belonging to second order logic (predicate 
> calculus), which is why I resorted to Isabelle (HOL proof assistant).
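
(A note for anyone following the second-order point: second-order
logic can define identity outright, rather than postulating it, via
Leibniz's law,

  x = y  :=  forall P. (P x <-> P y)

and it is the quantification over the predicate P that no first-order
theory can express. That, as I understand it, is why a higher-order
system like Isabelle/HOL is the natural place to attempt such a
proof.)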
>
>> What I like about category theory (as well as quantum formulations) is
>> that I see it as a movement away from definitions in terms of what
>> things are, and towards definitions in terms of how things are
>> related. Which fits with my observations of variation in objects
>> (grammar initially) defying definition, but being accessible to
>> definition in terms of relations.
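
(A universal property captures exactly this relational style of
definition. A standard Haskell illustration, nothing novel: a product
is characterized not by what a pair is made of, but by how every other
object relates to it through its projections.

  -- Every f :: c -> a and g :: c -> b factor through the pair type:
  fork :: (c -> a) -> (c -> b) -> (c -> (a, b))
  fork f g x = (f x, g x)

  -- The universal property, as laws:
  --   fst . fork f g == f
  --   snd . fork f g == g
  --   fork fst snd   == id

Any type satisfying those laws is "the" product, whatever it is made
of.)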
>
>
> On this we heartily agree.  Why do you think first-order predicate calculus 
> is foundational to Codd's so-called "relational algebra"?  Why do you think 
> that "updates" aka "transactions" aka "atomic actions" are so problematic 
> within that first order theory?
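
(Concretely: a Codd-style query is a first-order formula over tuples,
which is why list comprehensions model one so directly. A toy Haskell
sketch, tables and names invented:

  -- "Who manages each salesperson?", i.e.
  -- { (e,m) | exists d. employees(e,d) & managers(d,m) & d = "sales" }
  employees :: [(String, String)]       -- (employee, department)
  employees = [("ann", "sales"), ("bob", "ops"), ("cal", "sales")]

  managers :: [(String, String)]        -- (department, manager)
  managers = [("sales", "dee"), ("ops", "eli")]

  salesChain :: [(String, String)]
  salesChain =
    [ (e, m) | (e, d) <- employees, (d', m) <- managers
             , d == d', d == "sales" ]

The update, by contrast, has no home in that picture: "employees :=
employees plus a row" is a statement about the relation, not a formula
inside the first-order theory, which I take to be the point about
transactions.)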
>
>> > But I should also state that my motivation for investigating Granger et 
>> > al's approach to ML is based not on the fact that it focuses on abduced 
>> > relations -- but on its basis in "The grammar of mammalian brain capacity" 
>> > being a neglected order of grammar in the Chomsky Hierarchy: High-Order 
>> > Push-Down Automata.  The fact that the HNet paper is about abduced 
>> > relations was one of those serendipities that the prospector in me sees as 
>> > a vein of gold in them thar HOPDAs.
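
(For context, since the term may be unfamiliar: a higher-order
push-down automaton generalizes the ordinary PDA by letting the store
nest. A level-2 store is a stack of stacks, and the automaton can copy
or discard an entire top stack in one move. A minimal sketch of the
level-2 store in Haskell, my own illustration:

  type Stack1 a = [a]          -- level 1: an ordinary stack
  type Stack2 a = [Stack1 a]   -- level 2: a stack of stacks

  -- Beyond ordinary push/pop on the top stack, a level-2 machine
  -- can duplicate or discard the whole top stack at once.
  copy2, pop2 :: Stack2 a -> Stack2 a
  copy2 (s : ss) = s : s : ss  -- copy the whole top stack
  copy2 []       = []
  pop2  (_ : ss) = ss          -- discard the whole top stack
  pop2  []       = []

That copy-a-whole-stack move is what lifts these machines past
context-free power in the Chomsky hierarchy.)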
>>
>> Where does the Granger Hamiltonian net paper mention "The grammar of
>> mammalian brain capacity"? If it's not mentioned, how do you think
>> they imply it?
>
>
> My apologies for not providing the link to the paper by Granger and Rodriguez:
>
> https://arxiv.org/abs/1612.01150
>
>> > To wrap up, your definition of "regression" seems to differ from mine in 
>> > the sense that, to me, "regression" is synonymous with data-driven 
>> > modeling, which is that aspect of learning, including machine learning, 
>> > concerned with what IS as opposed to what OUGHT to be the case.
>>
>> The only time that paper mentions regression seems to indicate that
>> they are also making a distinction between their relational encoding
>> and regression:
>>
>> 'LLMs ... introduce sequential information supplementing the standard
>> classification-based “isa” relation, although much of the information
>> is learned via regression, and remains difficult to inspect or
>> explain'
>>
>> How do you relate their relational encoding to regression?
>
>
> Consider the phrase "symbolic regression" in the context of abduced 
> categories as N-ary relations.  Such abductions are reification of symbols.  
> (It is rather ironic that posting links to both the HNet paper and the 
> symbolic regression video resulted in confusion between the two.)
