On Mon, May 6, 2024 at 9:22 PM Rob Freeman <chaotic.langu...@gmail.com>
wrote:

> ...
> James: "Physics Informed Machine Learning". "Building models from data
> using optimization and regression techniques".
>
> Fine. If you have a physics to constrain it to. We don't have that
> "physics" for language.
>

At every level of abstraction where natural science is applicable, people
adopt its unspoken presumption: that mathematics is useful.  This is what
makes Solomonoff's proof relevant despite the intractability of proving
that one has found the ideal mathematical model.  The hard sciences are
merely the most *obvious* level of abstraction at which one may recognize
this.
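
To put a number on that presumption, here is a toy Python sketch (the model
names and bit counts are invented, and Solomonoff induction proper is
uncomputable): among models that reproduce the observed data equally well,
an algorithmic prior of 2^-(description length in bits) makes the shortest
mathematical description dominate the mixture.

candidates = {
    # hypothetical models with invented description lengths, in bits
    "memorize the raw data verbatim": 120,
    "fit a many-parameter regression": 64,
    "short physical law + noise model": 16,
}

# Weight each model by 2^-bits and normalize over the candidates.
total = sum(2.0 ** -bits for bits in candidates.values())
for name, bits in sorted(candidates.items(), key=lambda kv: kv[1]):
    weight = (2.0 ** -bits) / total
    print(f"{name:35s} {bits:4d} bits   prior weight ~ {weight:.3g}")

The shortest description carries essentially all of the weight, which is
all "mathematics is useful" needs to mean at any level of abstraction.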


> Richard Granger you say? The brain is constrained to be a "nested stack"?
>
>
> https://www.researchgate.net/publication/343648662_Toward_the_quantification_of_cognition


Any constraint on the program search (i.e. the search for the ultimate
algorithmic encoding of all data in evidence at a given level of
abstraction) is a prior.  What makes higher-order pushdown automata (such
as nested-stack automata) interesting is that they may provide a
constraint on program search that evolution found useful enough to
hard-wire into the structure of the human brain -- specifically in the
ratio of "capital investment" between sub-modules of brain tissue.  The
usefulness of this constraint may be suspected to be as generally
applicable as human cognition itself.
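
To make the "constraint as prior" point concrete, here is a minimal Python
sketch.  It is an ordinary depth-bounded stack check, not Granger's
nested-stack automaton, and the depth bound and sentence encodings are
chosen purely for illustration; it just shows how a hard-wired bound of 2
accepts the level-2 sentence you quote below and rejects the level-3 one.

# Minimal sketch, not Granger's automaton: center-embedding obeys a stack
# discipline (every opened noun phrase must later be closed by a verb,
# last-opened first).  A hard-wired cap on stack depth is one example of
# the kind of constraint on program search described above.

def accepts(nouns: int, verbs: int, max_depth: int = 2) -> bool:
    stack = []
    for _ in range(nouns):            # push one frame per embedded noun phrase
        stack.append("NP")
        if len(stack) > max_depth:
            return False              # exceeds the hard-wired bound
    for _ in range(verbs):            # each verb closes the most recent frame
        if not stack:
            return False
        stack.pop()
    return not stack                  # accept only if every frame is closed

# "The rat the cat chased escaped": depth 2 -> accepted
print(accepts(nouns=2, verbs=2))      # True
# "The rat the cat the dog bit chased escaped": depth 3 -> rejected at bound 2
print(accepts(nouns=3, verbs=3))      # False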


>
> Language is a nested stack? Possibly. Certainly you get a (softish)
> ceiling on recursion starting at level 3. The famous level-2 example:
> "The rat the cat chased escaped" (OK) vs. level 3: "The rat the cat
> the dog bit chased escaped." (Borderline not OK.)
>
> How does that contradict my assertion that such nested structures must
> be formed on the fly, because they are chaotic attractors of
> predictive symmetry on a sequence network?
>
> On the other hand, can fixed, pre-structured, nested stacks explain
> contradictory (semantic) categories, like "strong tea" (OK) vs
> "powerful tea" (not OK)?
>
> Unless stacks form on the fly, and can contradict, how can we explain
> that "strong" can be a synonym (fit in the stack?) for "powerful" in
> some contexts, but not others?
>
> On the other hand, a constraint like the observed limit on nesting
> might be a side effect of the other famous soft restriction, the one
> on dependency length. A restriction on dependency length is an easier
> explanation for nesting limits, and fits with the model that language
> is just a sequence network, which gets structured (into substitution
> groups/stacks?) on the fly.
>
> On Mon, May 6, 2024 at 11:06 PM James Bowery <jabow...@gmail.com> wrote:
> >
> > Let's give the symbolists their due:
> >
> > https://youtu.be/JoFW2uSd3Uo?list=PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa
> >
> > The problem isn't that symbolists have nothing to offer; it's just
> > that they're offering it at the wrong level of abstraction.
> >
> > Even in the extreme case of LLMs having "proven" that language
> > modeling needs no priors beyond the Transformer model and some
> > hyperparameter tweaking, there are language-specific priors acquired
> > over decades if not centuries that are intractable to learn.
> >
> > The most important, if not the most conspicuous, one is Richard
> > Granger's discovery that Chomsky's hierarchy elides the one grammar
> > category that human cognition seems to use.
