On Mon, Jun 17, 2024 at 3:22 PM Quan Tesla <quantes...@gmail.com> wrote:
>
> Rob, basically you're reiterating what I've been saying here all along: to 
> increase contextualization and instill robustness in LLM systemic 
> hierarchies, and that this seems to be critically lacking in current 
> approaches.
>
> However, I think this is fast changing, and soon enough, I expect 
> breakthroughs in this regard. Neural linking could be one of those solutions.
>
> While it may not be exactly the same as your hypothesis (?), is it because 
> it's part of your PhD that you're not willing to acknowledge that this 
> theoretical work may have been completed by another researcher more than 17 
> years ago, even submitted for review and subsequently approved? The market, 
> especially Japan, grabbed this research as fast as they could. It's the West 
> that turned out to be all "snooty" about its meaningfulness, yet it was the 
> West that reviewed and approved it. Instead of serious collaboration, is 
> research not perhaps being hamstrung by the NIH (Not Invented Here) syndrome, 
> acting like a stuck handbrake?

You intrigue me. "Contextualization ... in LLM systemic hierarchies"
was completed and approved 17 years ago?

"Contextualization" is a pretty broad word. I think the fact that
Bengio retreated to distributed representation with "Neural Language
Models" around... 2003(?) might be seen as one acceptance of... if not
contextualization, at least indeterminacy (I see Bengio refers to "the
curse of dimensionality".) But I see nothing about structure until
Coecke et al. around 2007. And even they (and antecedents going back
to the early '90s with Smolensky?), I'm increasingly coming to
appreciate, seem trapped in their tensor formalisms.
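
(For concreteness, a rough sketch of that 2003 model, from memory, so
treat the details as approximate rather than gospel: each word w_i
gets a learned feature vector C(w_i), and the model estimates

  P(w_t | w_{t-n+1}, ..., w_{t-1}) = softmax(b + W x + U tanh(d + H x)),

where x is the concatenation of C(w_{t-i}) for the n-1 preceding
words. The "distributed" part is C: words become points in a
continuous feature space rather than discrete symbols, which is how
Bengio proposed to sidestep the curse of dimensionality.)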

The Bengio thread, if it went anywhere, stayed stuck on structure
until deep learning rescued it with LSTMs. And then "attention".

Anyway, the influence of Coecke seems to be tiny. And basically
misconstrued. I think Linas Vepstas followed it, but only saw
encouragement to seek other mathematical abstractions of grammar. And
OpenCog wasted a decade trying to learn those grammars.

Otherwise, I've been pretty clear that I think there are hints of what
I'm arguing in linguistics and maths going back decades, and in
philosophy going back centuries. The linguistic ones have been
specifically ignored by machine learning.

But that any of this, or anything like it, was "grabbed ... as fast as
they could" by the market in Japan, is a puzzle to me (17 years ago?
Specifically 17?)

As is the idea that the West failed to use it, even having "reviewed
and approved it", because it was "snooty" about... Japan's market
having grabbed it first?

Sadly, Japanese research in AI, to my knowledge, has been dead since
their big push in the 1980s. Dead, right through their "lost" economic
decades. I met the same team I had known working on symbolic machine
translation grammars in 1989-91 at a conference in China in 2002, and
as far as I know they were still working on refinements to the same
symbolic grammar. 10 more years. Same team. Same tech. Just one of
the 係長 (subsection heads) had become a 課長 (section head).

What is this event from 17 years ago?

