On Sat, May 4, 2024 at 4:53 AM Matt Mahoney <mattmahone...@gmail.com> wrote:
>
> ... OpenCog was a hodgepodge of a hand coded structured natural language 
> parser, a toy neural vision system, and a hybrid fuzzy logic knowledge 
> representation data structure that was supposed to integrate it all together 
> but never did after years of effort. There was never any knowledge base or 
> language learning algorithm.

Good summary of the OpenCog system, Matt.

But there was a language learning algorithm. Actually, there was more
of a language learning algorithm in OpenCog than there is now in LLMs.
That's been the problem with OpenCog. By contrast, LLMs don't try to
learn grammar; they just try to learn to predict words.

Rather than the mistake being that they had no language learning
algorithm, the mistake was that OpenCog _did_ try to implement one.

By contrast, the success with LLMs came to those who just tried to
predict words, using, as it turns out, a kind of dot product across
word embedding vectors.
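
To make that concrete, here is a minimal sketch of the kind of thing I
mean, reading that "dot product across word embedding vectors" as the
scaled dot-product attention used in transformers. The dimensions and
values are toy choices of my own, not any particular model's code:

    import numpy as np

    # Toy sketch: predict a next-token vector by weighting the context
    # embeddings with their dot-product similarity to a query embedding,
    # i.e. scaled dot-product attention.
    def attention(query, keys, values):
        scores = keys @ query / np.sqrt(query.shape[0])  # scaled dot products
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                         # softmax over context
        return weights @ values                          # blended prediction

    rng = np.random.default_rng(0)
    context = rng.normal(size=(4, 8))   # 4 previous tokens, 8-dim embeddings
    prediction = attention(context[-1], context, context)
    print(prediction.shape)             # (8,)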

Trying to learn grammar was linguistic naivety. You could have seen it
back then. Hardly anyone in the AI field has any experience with
language; that's the problem, actually. Even now with LLMs, they're
all linguistic naifs. A tragedy of wasted effort for OpenCog. Formal
grammars for natural language are unlearnable. I'd been telling Linas
that since 2011. I posted about it here numerous times. They spent a
decade, and millions(?), trying to learn a formal grammar.

Meanwhile vector language models, which don't coalesce into formal
grammars, swooped in and scooped the pool.

That was NLP. But more broadly in OpenCog too, the problem seems to be
that Ben is still convinced AI needs some kind of symbolic
representation to build chaos on top of. A similar kind of error.

I tried to convince Ben otherwise the last time he addressed the
subject of semantic primitives in this AGI Discussion Forum session
two years ago, here:

March 18, 2022, 7AM-8:30AM Pacific time: Ben Goertzel leading
discussion on semantic primitives
https://singularitynet.zoom.us/rec/share/qwLpQuc_4UjESPQyHbNTg5TBo9_U7TSyZJ8vjzudHyNuF9O59pJzZhOYoH5ekhQV.2QxARBxV5DZxtqHQ?startTime=1647613120000

Starting at timestamp 1:24:48, Ben says, disarmingly:

"For f'ing decades, which is ridiculous, it's been like, OK, I want to
explore these chaotic dynamics and emergent strange attractors, but I
want to explore them in a very fleshed out system, with a rich
representational capability, interacting with a complex world, and
then we still haven't gotten to that system ... Of course, an
alternative approach could be taken as you've been attempting, of ...
starting with the chaotic dynamics but in a simpler setting. ... But I
think we have agreed over the decades that to get to human level AGI
you need structure emerging from chaos. You need a system with complex
chaotic dynamics, you need structured strange attractors there, you
need the system's own pattern recognition to be recognizing the
patterns in these structured strange attractors, and then you have
that virtuous cycle."

So he embraces the idea that cognitive structure is going to be chaotic
attractors, as he did when he wrote his "Chaotic Logic" book back in
1994. But he's still convinced the chaos needs to emerge on top of
some kind of symbolic representation.

I think there's a sunk cost fallacy at work. So much is invested in
the paradigm of chaos appearing on top of a "rich" symbolic
representation that he can't try anything else.

As I understand it, Hyperon is a re-jig of the software for this
symbol-based "atom" network representation, to make it easier to
spread the processing load over networks.

As a network representation, the potential is there to merge the
insight which has worked for LLMs (no formal symbolic representation)
with the chaos on top which was Ben's earlier insight.

I presented on that potential at a later AGI Discussion Forum session.
But mysteriously the current devs failed to upload the recording for
that session.

> Maybe Hyperon will go better. But I suspect that LLMs on GPU clusters will 
> make it irrelevant.

Here I disagree with you. LLMs are at a dead end of their own. What
they got right was to abandon formal symbolic representation. They
likely generate their own version of chaos, but they are unaware of
it. They are still trapped in their own version of the "learning"
idea. Any chaos generated is frozen and tangled in their enormous
back-propagated networks. That's why they exhibit no structure, why
they hallucinate, and why their processing of novelty is limited to
roughly mapping it onto previous knowledge. The solution will require
a different way of identifying chaotic attractors in networks of
sequences.

A Hyperon-style network might be a better basis for making that
advance. It would have to abandon the search for a symbolic
representation. LLMs can show the way there. Make prediction, not
representation, the focus. Just start with any old (sequential)
tokens. But in contrast to LLMs, we don't need back-prop to find the
groupings which predict. Simple. It's mostly just abandoning back-prop
and using another way to find (chaotic attractor) groupings which
predict, on the fly.
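
To make that less abstract, here's one toy way such groupings might be
found on the fly, with no back-prop at all: group tokens by the sets
of tokens they predict. The corpus, the overlap measure, and the
threshold below are illustrative choices of mine, not a worked-out
method:

    from collections import defaultdict

    # Toy sketch (my own choices throughout): group tokens "on the fly"
    # by what they predict, i.e. by their sets of immediate followers.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    followers = defaultdict(set)
    for left, right in zip(corpus, corpus[1:]):
        followers[left].add(right)

    def overlap(a, b):
        return len(a & b) / len(a | b)   # Jaccard similarity of follower sets

    groups = []
    for tok in followers:
        for group in groups:
            if any(overlap(followers[tok], followers[m]) >= 0.5 for m in group):
                group.append(tok)
                break
        else:
            groups.append([tok])

    # "cat"/"dog" group (both predict "sat"); "on"/"mat" group (both predict "the")
    print(groups)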

When that insight will happen, I don't know. We have the company
Extropic now, which is attempting to model distributions using heat
noise. Heat noise instead of back-prop. Modelling predictive
symmetries in a network using heat noise might lead them to it.
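
To give a toy picture of what that could look like, and this is purely
my own construction, nothing to do with Extropic's actual hardware or
method: perturb a network's predictive-similarity links with noise and
keep only the groupings that stay stable across the perturbations.

    import numpy as np

    # Toy sketch (my own construction): inject noise into pairwise
    # predictive similarities and keep the links that survive most noisy
    # samples, treating the surviving blocks as predictive symmetry groups.
    rng = np.random.default_rng(1)

    # Hypothetical predictive-similarity matrix for 4 tokens.
    sim = np.array([[1.0, 0.9, 0.1, 0.2],
                    [0.9, 1.0, 0.2, 0.1],
                    [0.1, 0.2, 1.0, 0.9],
                    [0.2, 0.1, 0.9, 1.0]])

    together = np.zeros_like(sim)
    for _ in range(200):
        noisy = sim + rng.normal(scale=0.3, size=sim.shape)  # "heat" noise
        together += noisy > 0.5                              # noisy thresholding

    stable = together / 200 > 0.8   # links surviving most perturbations
    print(stable.astype(int))       # blocks {0,1} and {2,3} emerge as groups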

Really, any kind of noise in a network might be used to find these
predictive symmetry groups on the fly. Someone may stumble on it soon.

When they do, that'll make GPU clusters irrelevant. Nvidia down. And
no more talk of a $7T investment in power generation being needed.
Mercifully!

-Rob
