Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread Nanograte Knowledge Technologies
In quantum systems, symmetry emerges from asymmetry. The transitioning logic 
from one such quantum state to another remains at the forefront of physics. 
The thermodynamic approach is also aligned to this way of reasoning about AGI.

Perhaps we need to keep asking ourselves: "What is the Western AGI?" vs 
"What is the Eastern AGI?" Inter alia, all design starts with a systems view 
and a clear identification of the core system as well as its constraints.

Given the above fractal question, it seems highly likely that if one asked a 
dev team in the Far East vs a team in the West, one would get different 
answers. In truth, AGI isn't one thing. If it should be compared to the 
development of the atomic bomb, it should be accepted that more than 3 
versions of a mainstream-AGI truth would be discernible.

For now, there's no right or wrong way, only experimentation.

However, this remains a valid question, worthy of reliable input: "What is 
future AGI?" Thus, we remove the research biases and take a stab at defining a 
system of the future, which could then be elucidated as knowledge grows and 
scientific progress is announced. What is hampering real progress? I think it 
is vested interests. That "takeback" energy loop is the real problem here. Does 
an AGI care about what it gets out of being an AGI? If done correctly, it 
doesn't. It simply fulfils its singular functionality. AGI doesn't have to have 
real emotions. All it has to do is convince human beings that it does. In 
other words, for personality, try coding a highly functional sociopathic 
tendency towards an intermediate point on the stochastic scale.

I think I might've just hit the nail on the head. Think of AGI as a sociopathic 
system.

First consciousness, therefore AGI. You're doing it the wrong way round. AGI 
won't emerge from all your code. It's not your AGI. On the contrary, AGI 
already exists, and would exist in quantum-physical gestalt, and would 
therefore absorb and inherently digest all "AGI" code. If AGI emerges from 
anything, it does so from the geometry of spacetime.

It seems all physicists are unknowingly working towards a common goal, under 
different names.

If the AGI system you're busy developing isn't a function of light, it's 
probably obsolete. Moore's law isn't AGI-compatible. Only nature's laws would 
provide sufficient power to run a real AGI without the thermal complications.

From: Matt Mahoney 
Sent: Monday, 06 May 2024 16:49
To: AGI 
Subject: Re: [agi] Hey, looks like the goertzel is hiring...

The problem with AGI is Wolpert's law. A can predict B or B can
predict A but not both. When we try to understand our own brains,
that's the special case of A = B. You can't. It is the same with AGI.
If you want to create an agent smarter than you, it can predict you
but you can't predict it. Otherwise, it is not as intelligent as you.
That is why LLMs work but we don't know how.
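
A toy diagonalization sketch of that mutual-prediction limit (not Wolpert's
formal setup, just the flavor of the argument, in Python):

    def contrarian(predictor):
        # Do the opposite of whatever the predictor says this agent will do.
        return not predictor(contrarian)

    def naive_predictor(agent):
        # Any fixed prediction rule; here, "the agent will return True".
        return True

    # The predictor is wrong by construction: contrarian consults it and negates it.
    print(contrarian(naive_predictor))  # False, though it was predicted to be True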

OpenCog's approach to language modeling was the traditional pipeline
of lexical tokenizing, grammar parsing, and semantics in that order.
It works fine for compilers but not for natural language. Children
learn to segment continuous speech before they learn any vocabulary
and they learn semantics before grammar. There are plenty of examples.
How do you parse "I ate pizza with pepperoni/a fork/Bob"? You can't
parse without knowing what the words mean. It turns out that learning
language this way takes a lot more computation because you need a
neural network with separate layers for phonemes or letters, tokens,
semantics, and grammar in that order.
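
A toy sketch of why the parse needs the semantics (made-up selectional
preferences, nothing from OpenCog):

    # Hypothetical word-level preferences; a real system would have to learn these.
    TOPPINGS    = {"pepperoni", "mushrooms", "cheese"}
    INSTRUMENTS = {"fork", "spoon", "knife"}
    COMPANIONS  = {"Bob", "Alice"}

    def attach(pp_object):
        """Where does 'with <pp_object>' attach in 'I ate pizza with ...'?"""
        if pp_object in TOPPINGS:
            return "attaches to 'pizza' (topping)"
        if pp_object in INSTRUMENTS:
            return "attaches to 'ate' (instrument)"
        if pp_object in COMPANIONS:
            return "attaches to 'ate' (companion)"
        return "undecidable from syntax alone"

    for x in ("pepperoni", "fork", "Bob"):
        print(x, "->", attach(x))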

How much computation? For a text-only model, about 1 GB of text. For
AGI, the human brain has 86B neurons and 600T connections at 10 Hz.
You need about 10 petaflops, 1 petabyte and several years of training
video. If you want it faster than raising a child, then you need more
compute. That is why we had the AGI winter. Now it is spring. Before
summer, we need several billion of those to automate human labor and
our $1 quadrillion economy.
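
The arithmetic behind those estimates, order of magnitude only (assuming
roughly one operation per synapse firing and about one byte per connection):

    connections = 600e12   # ~600T synaptic connections
    rate_hz = 10           # ~10 Hz average firing rate
    ops_per_second = connections * rate_hz   # ~6e15, on the order of 10 petaflops
    bytes_per_connection = 1                 # rough assumption: ~1 byte per synapse weight
    memory_bytes = connections * bytes_per_connection   # ~6e14, on the order of 1 petabyte
    print(f"{ops_per_second:.0e} ops/s, {memory_bytes:.0e} bytes")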

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread Rob Freeman
Addendum: another candidate for this variational model for finding
distributions to replace back-prop (and consequently with the potential to
capture predictive structure that is chaotic attractors, though they don't
appreciate the need yet) is Extropic, which is proposing to use heat noise.
Another is LiquidAI. If it's true that LiquidAI's nodes are little reservoir
computers, that might work on a similar variational estimation/generation of
distributions basis. Joscha Bach is involved with that, though I don't know
in what capacity.

James: "Physics Informed Machine Learning". "Building models from data
using optimization and regression techniques".

Fine. If you have a physics to constrain it to. We don't have that
"physics" for language.

Richard Granger you say? The brain is constrained to be a "nested stack"?

https://www.researchgate.net/publication/343648662_Toward_the_quantification_of_cognition

Language is a nested stack? Possibly. Certainly you get a (softish) ceiling
on recursion starting at level 3. The famous examples: level 2, "The rat the
cat chased escaped" (OK) vs. level 3, "The rat the cat the dog bit chased
escaped" (borderline not OK).
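
A toy generator for those center-embedded frames (my own sketch), just to
make the depth explicit:

    def center_embed(nouns, verbs):
        # nouns[0] is the outermost subject; verbs[i] is the verb whose subject is nouns[i].
        # Center embedding stacks the nouns up front, then pops the verbs off inside-out.
        assert len(nouns) == len(verbs)
        return "The " + " the ".join(nouns) + " " + " ".join(reversed(verbs)) + "."

    print(center_embed(["rat", "cat"], ["escaped", "chased"]))
    # The rat the cat chased escaped.
    print(center_embed(["rat", "cat", "dog"], ["escaped", "chased", "bit"]))
    # The rat the cat the dog bit chased escaped.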

How does that contradict my assertion that such nested structures must
be formed on the fly, because they are chaotic attractors of
predictive symmetry on a sequence network?

On the other hand, can fixed, pre-structured, nested stacks explain
contradictory (semantic) categories, like "strong tea" (OK) vs
"powerful tea" (not OK)?

Unless stacks form on the fly, and can contradict, how can we explain
that "strong" can be a synonym (fit in the stack?) for "powerful" in
some contexts, but not others?
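
A minimal distributional illustration of that context-dependence (toy corpus,
my own):

    from collections import defaultdict

    # Toy corpus of adjective-noun pairs.
    corpus = ["strong tea", "strong coffee", "strong argument",
              "powerful engine", "powerful argument", "strong engine"]

    contexts = defaultdict(set)
    for phrase in corpus:
        adj, noun = phrase.split()
        contexts[adj].add(noun)

    # Contexts where the two adjectives do substitute for each other...
    print(sorted(contexts["strong"] & contexts["powerful"]))   # ['argument', 'engine']
    # ...and contexts where they don't: no "powerful tea" or "powerful coffee".
    print(sorted(contexts["strong"] - contexts["powerful"]))   # ['coffee', 'tea']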

On the other hand, the observed limit on nesting might be a side effect of
the other famous soft restriction, the one on dependency length. A restriction
on dependency length is an easier explanation for nesting limits, and it fits
with the model that language is just a sequence network, which gets structured
(into substitution groups/stacks?) on the fly.

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread James Bowery
Let's give the symbolists their due:

https://youtu.be/JoFW2uSd3Uo?list=PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa

The problem isn't that symbolists have nothing to offer, it's just that
they're offering it at the wrong level of abstraction.

Even in the extreme case of LLMs having "proven" that language modeling
needs no priors beyond the Transformer model and some hyperparameter
tweaking, there are language-specific priors acquired over decades if not
centuries that are intractable to learn.

The most important, if not conspicuous, one is Richard Granger's discovery
that Chomsky's hierarchy elides the one grammar category that human
cognition seems to use.


On Sun, May 5, 2024 at 11:11 PM Rob Freeman wrote:

> On Sat, May 4, 2024 at 4:53 AM Matt Mahoney wrote:
> >
> > ... OpenCog was a hodgepodge of a hand coded structured natural language
> parser, a toy neural vision system, and a hybrid fuzzy logic knowledge
> representation data structure that was supposed to integrate it all
> together but never did after years of effort. There was never any knowledge
> base or language learning algorithm.
>
> Good summary of the OpenCog system Matt.
>
> But there was a language learning algorithm. Actually there was more
> of a language learning algorithm in OpenCog than there is now in LLMs.
> That's been the problem with OpenCog. By contrast LLMs don't try to
> learn grammar. They just try to learn to predict words.
>
> Rather than the mistake being that they had no language learning
> algorithm, the mistake was OpenCog _did_ try to implement a language
> learning algorithm.
>
> By contrast the success, with LLMs, came to those who just tried to
> predict words. Using a kind of vector cross product across word
> embedding vectors, as it turns out.
>
> Trying to learn grammar was linguistic naivety. You could have seen it
> back then. Hardly anyone in the AI field has any experience with
> language, actually, that's the problem. Even now with LLMs. They're
> all linguistic naifs. A tragedy of wasted effort for OpenCog. Formal
> grammars for natural language are unlearnable. I was telling Linas
> that since 2011. I posted about it here numerous times. They spent a
> decade, and millions(?) trying to learn a formal grammar.
>
> Meanwhile vector language models, which don't coalesce into formal
> grammars, swooped in and scooped the pool.
>
> That was NLP. But more broadly in OpenCog too, the problem seems to be
> that Ben is still convinced AI needs some kind of symbolic
> representation to build chaos on top of. A similar kind of error.
>
> I tried to convince Ben otherwise the last time he addressed the
> subject of semantic primitives in this AGI Discussion Forum session
> two years ago, here:
>
> March 18, 2022, 7AM-8:30AM Pacific time: Ben Goertzel leading
> discussion on semantic primitives
>
> https://singularitynet.zoom.us/rec/share/qwLpQuc_4UjESPQyHbNTg5TBo9_U7TSyZJ8vjzudHyNuF9O59pJzZhOYoH5ekhQV.2QxARBxV5DZxtqHQ?startTime=164761312
>
> Starting timestamp 1:24:48, Ben says, disarmingly:
>
> "For f'ing decades, which is ridiculous, it's been like, OK, I want to
> explore these chaotic dynamics and emergent strange attractors, but I
> want to explore them in a very fleshed out system, with a rich
> representational capability, interacting with a complex world, and
> then we still haven't gotten to that system ... Of course, an
> alternative approach could be taken as you've been attempting, of ...
> starting with the chaotic dynamics but in a simpler setting. ... But I
> think we have agreed over the decades that to get to human level AGI
> you need structure emerging from chaos. You need a system with complex
> chaotic dynamics, you need structured strange attractors there, you
> need the system's own pattern recognition to be recognizing the
> patterns in these structured strange attractors, and then you have
> that virtuous cycle."
>
> So he embraces the idea cognitive structure is going to be chaotic
> attractors, as he did when he wrote his "Chaotic Logic" book back in
> 1994. But he's still convinced the chaos needs to emerge on top of
> some kind of symbolic representation.
>
> I think there's a sunken cost fallacy at work. So much is invested in
> the paradigm of chaos appearing on top of a "rich" symbolic
> representation. He can't try anything else.
>
> As I understand it, Hyperon is a re-jig of the software for this
> symbol based "atom" network representation, to make it easier to
> spread the processing load over networks.
>
> As a network representation, the potential is there to merge the insight
> of no formal symbolic representation, which has worked for LLMs, with
> chaos on top, which was Ben's earlier insight.
>
> I presented on that potential at a later AGI Discussion Forum session.
> But mysteriously the current devs failed to upload the recording for
> that session.
>
> > Maybe Hyperon will go better. But I suspect that LLMs on GPU clusters
> will make it irrelevant.
> 
> Here I 
