On Tue, May 21, 2024 at 9:35 PM Rob Freeman <chaotic.langu...@gmail.com>
wrote:

> ....
>
> Whereas the NN presentation is talking about NNs regressing to fixed
> encodings. Not about an operator which "calculates energies" in real
> time.
>
> Unless I've missed something in that presentation. Is there anywhere
> in the hour long presentation where they address a decoupling of
> category from pattern, and the implications of this for novelty of
> structure?
>

You correctly perceive that the symbolic regression presentation is not to
the point regarding the HNet paper.  A big failing of the symbolic
regression world is the same as it is in the rest of computerdom:  Failure
to recognize that functions are degenerate relations and you had damn well
better have thought about why you are degenerating when you do so.  But
likewise, when you are speaking about second-order theories (as
opposed to first-order
theories <https://en.wikipedia.org/wiki/List_of_first-order_theories>),
such as Category Theory, you had damn well better have thought about why you are
*specializing* second-order predicate calculus when you do so.
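
To make the "functions are degenerate relations" point concrete, here is a toy
sketch (the relation and the names in it are purely illustrative, not from
either paper):

```python
# Toy illustration: a relation is just a set of pairs; a function is the
# degenerate case where each input is paired with exactly one output.

# A relation over (parent, child) pairs -- one input may relate to many outputs.
parent_of = {("alice", "bob"), ("alice", "carol"), ("dan", "erin")}

# "Degenerating" the relation into a function forces one output per input,
# silently discarding the pair ("alice", "carol").
parent_to_child = {}
for parent, child in sorted(parent_of):
    parent_to_child.setdefault(parent, child)  # keeps only the first child seen per parent

print(parent_to_child)  # {'alice': 'bob', 'dan': 'erin'} -- information has been lost
```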

Not being familiar with Category Theory, I'm in no position to critique this
decision to specialize second-order predicate calculus.  I just haven't
seen Category Theory presented *as* a second-order theory.  Perhaps, if someone
did present it that way, I could understand Category Theory and thence where
the enthusiasm for it comes from.

This is very much like my problem with the enthusiasm for type theories in
general.

But I should also state that my motivation for investigating Granger et
al.'s approach to ML is based *not* on the fact that it focuses on abduced
*relations* -- but on its basis in "The grammar of mammalian brain
capacity" being a neglected order of grammar in the Chomsky Hierarchy: High-Order
Pushdown Automata (HOPDAs).  The fact that the HNet paper is about abduced
*relations* was one of those serendipities that the prospector in me sees
as a sign of gold in them thar HOPDAs.

To wrap up, your definition of "regression" seems to differ from mine in
the sense that, to me, "regression" is synonymous with data-driven modeling:
that aspect of learning, including machine learning, concerned with what IS
as opposed to what OUGHT to be the case.


>
> On Tue, May 21, 2024 at 11:36 PM James Bowery <jabow...@gmail.com> wrote:
> >
> > Symbolic Regression is starting to catch on but, as usual, people aren't
> using the Algorithmic Information Criterion so they end up with
> unprincipled choices on the Pareto frontier between residuals and model
> complexity if not unprincipled choices about how to weight the complexity
> of various "nodes" in the model's "expression".
> >
> > https://youtu.be/fk2r8y5TfNY
> >
> > A node's complexity is how much machine language code it takes to
> implement it on a CPU-only implementation.  Error residuals are program
> literals aka "constants".
> >
> > I don't know how many times I'm going to have to point this out to
> people before it gets through to them (probably well beyond the time
> maggots have forgotten what I tasted like).
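
To make the Algorithmic Information Criterion point in the quoted message
concrete, here is a toy sketch.  It is illustrative only: the byte counts and
the residual coding scheme are stand-ins I made up, not a real machine-language
measure.

```python
# Toy sketch: total description length = model's machine code + residuals
# encoded as program literals ("constants").
import math

def residual_bits(r, step=0.01):
    """Bits to write one residual into the program as a literal, quantized to `step`."""
    return 1 + math.ceil(math.log2(abs(r) / step + 1))  # sign bit + magnitude bits

def description_length(model_code_bytes, residuals):
    """Total bits: the model's machine code plus its residuals encoded as constants."""
    return 8 * model_code_bytes + sum(residual_bits(r) for r in residuals)

# Two candidates on the residuals-vs-complexity Pareto frontier.  A single
# description-length number picks between them with no ad hoc node weights.
simple   = description_length(40,  [3.1, -2.7, 4.0])      # short program, big residuals
complex_ = description_length(400, [0.01, -0.02, 0.01])   # long program, tiny residuals
print(simple, complex_)  # whichever is smaller wins under the criterion
```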
