On Thu, May 9, 2024 at 6:15 AM James Bowery <jabow...@gmail.com> wrote:
>
> Shifting this thread to a more appropriate topic.
>
> ---------- Forwarded message ---------
>>
>> From: Rob Freeman <chaotic.langu...@gmail.com>
>> Date: Tue, May 7, 2024 at 8:33 PM
>> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
>> To: AGI <agi@agi.topicbox.com>
>
>
>> I'm disappointed you don't address my points James. You just double
>> down that there needs to be some framework for learning, and that
>> nested stacks might be one such constraint.
> ...
>> Well, maybe for language a) we can't find top down heuristics which
>> work well enough and b) we don't need to, because for language a
>> combinatorial basis is actually sitting right there for us, manifest,
>> in (sequences of) text.
>
>
> The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge 
> Language Research Unit.

Interesting tip about the Cambridge Language Research Unit. Inspired
by Wittgenstein?

But this history means what?

> PS:  I know I've disappointed you yet again for not engaging directly your 
> line of inquiry.  Just be assured that my failure to do so is not because I 
> in any way discount what you are doing -- hence I'm not "doubling down" on 
> some opposing line of thought -- I'm just not prepared to defend Granger's 
> work as much as I am prepared to encourage you to take up your line of 
> thought directly with him and his school of thought.

Well, yes.

Thanks for the link to Granger's work. It looks like he did a lot of
work on brain biology, and developed a hypothesis that the brain's
division into different regions is consistent with aspects of
language that suggest limits on nested hierarchy.

But I don't see that it engages in any way with the original point I
made (in response to Matt's synopsis of OpenCog language
understanding): that OpenCog language processing didn't fail because
it didn't do language learning (or even because it didn't attempt
"semantic" learning first). If anything, it was somewhat the
opposite: OpenCog language processing failed because it did attempt
to find an abstract grammar. And LLMs succeed to the extent they do
because they abandon the search for abstract grammar and just focus
on prediction.
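To make concrete what I mean by "just focus on prediction", here is a
minimal toy sketch (my own illustration, not anyone's actual system):
a predictor that never posits grammatical categories or parse trees
at all, only conditional counts over observed word sequences.

# Toy illustration: "prediction without grammar".
# No parse trees, no categories -- just conditional counts over
# observed sequences.
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count next-word frequencies for each word (a bigram predictor).
counts = defaultdict(lambda: defaultdict(int))
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1

def predict(word):
    """Return the most likely next word, purely from observed sequences."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict("the"))   # e.g. 'cat' -- whichever continuation was most frequent
print(predict("sat"))   # 'on'

Obviously an LLM replaces the counts with a learned neural predictor,
but the point is the objective: predict the next token, nothing more.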

That's just my take on the OpenCog (and LLM) language situation.
People can take it or leave it.

Criticism is welcome. But just saying, "oh, but hey, look at my idea
instead"... well, that might be good for people who are really
puzzled and looking for new ideas.

I guess it's a problem for AI research in general that people rarely
attempt to engage with other people's ideas. They all just assert
their own ideas. Like Matt's reply to the above... "Oh no, the real
problem was they didn't try to learn semantics..."

If you think OpenCog language failed instead because it didn't attempt
to learn grammar as nested stacks, OK, that's your idea. Good luck
trying to learn abstract grammar as nested stacks.

Actual progress in the field stumbles along in fits and starts.
What's happened in 30 years? Nothing much. A retreat to statistical
uncertainty about grammar in the '90s with HMMs? A first retreat to
indeterminacy. Then, what, eight years ago, the surprise success of
transformers: a cross-product of embedding vectors which ignores
structure and focuses on prediction. Why did it succeed? You would
say because transformers somehow advance the nested-stack idea? Matt,
because transformers somehow advance the semantics-first idea?
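For what it's worth, by "cross-product of embedding vectors" I mean
roughly the attention step: every token's embedding is scored against
every other token's embedding, and nothing grammatical is imposed on
top. A bare-bones sketch, with made-up dimensions and random stand-in
embeddings rather than any particular implementation:

# Bare-bones scaled dot-product attention: every token's embedding is
# compared against every other token's embedding (the "cross-product"),
# and the scores are used to mix values for prediction. No grammatical
# structure is imposed.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8                      # toy sequence length and embedding size
X = rng.normal(size=(seq_len, d))      # token embeddings (stand-ins)

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)          # all pairwise embedding interactions
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
output = weights @ V                   # each position is a weighted mix of the others

print(output.shape)                    # (5, 8)

Whatever structure emerges is just whatever falls out of those
pairwise interactions, recomputed for every sequence.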

My idea is that they advance the idea that a search for an abstract
grammar is flawed (in practice, if not in theory).

My idea is consistent with the ongoing success of LLMs, which get
bigger and bigger and don't appear to have any consistent structure.
But it is also consistent with their failures: they still try to
learn that structure as a fixed artifact.

Actually, as far as I know, the first model in the LLM style
(indeterminate grammar as a cross-product of embedding vectors) was
mine.

***If anyone can point to an earlier precedent, I'd love to see it.***

So LLMs feel like a nice vindication of those early ideas to me, even
if they don't embrace the full extent of them. They still don't grasp
the full point, but I don't see any reason to be discouraged by that.

And, seemingly by chance, the idea is consistent with the emergent
structure theme of this thread, with the difference that with
language we have access to the emergent system bottom-up, instead of
top-down, the way we do with physics and maths.

But everyone is working on their own thing. I just got drawn in by
Matt's comment that OpenCog didn't do language learning.

-Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mc80863f9a44a6d34f3ba12a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription
