Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
It's because of biology. There, I said it. But it's more nuanced. Brain
cells are almost identical at birth. The experiences that males and females
go through in life, however, are societally different. And that's rooted in
chimps being our forefathers, and the muscular difference between males and
females in most species.

Never say never 

On Tue, May 7, 2024 at 9:33 PM Matt Mahoney  wrote:

> We don't know the reason and probably never will. In my computer science
> department at Florida Tech, both students and faculty were 90% male, even
> though more women than men graduate from college now. It is taboo to
> suggest this is because of biology.
>
> On Tue, May 7, 2024, 9:05 PM Keyvan M. Sadeghi 
> wrote:
>
>> Ah also BTW, just a theory, maybe fewer females in STEM, tech, chess, etc.
>> is due to conditioning from upbringing. And in chimpanzees, a result of
>> physical strength?
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M9587fc563282e021024fb423
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Matt Mahoney
We don't know the reason and probably never will. In my computer science
department at Florida Tech, both students and faculty were 90% male, even
though more women than men graduate from college now. It is taboo to
suggest this is because of biology.

On Tue, May 7, 2024, 9:05 PM Keyvan M. Sadeghi 
wrote:

> Ah also BTW, just a theory, maybe fewer females in STEM, tech, chess, etc.
> is due to conditioning from upbringing. And in chimpanzees, a result of
> physical strength?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M6c9dc67bb956d267964c718f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Rob Freeman
I'm disappointed you don't address my points, James. You just double
down that there needs to be some framework for learning, and that
nested stacks might be one such constraint.

I replied that nested stacks might be emergent on dependency length.
So not a constraint based on actual nested stacks in the brain, but a
"soft" constraint based on the effect of dependency length on
groups/stacks generated/learned from sequence networks.
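
To make that concrete, here is a toy Python sketch of the kind of "soft"
dependency-length pressure I mean, using the rat/cat sentences quoted
elsewhere in the thread. The head indices are my own rough hand annotation,
purely illustrative: deeper center embedding pushes total dependency length up.

def total_dependency_length(heads):
    # heads maps word index -> head index; the root points to itself.
    return sum(abs(i - h) for i, h in heads.items() if h != i)

# "The rat the cat chased escaped" (level 2, acceptable)
#  0   1   2   3   4      5
level2 = {0: 1, 1: 5, 2: 3, 3: 4, 4: 1, 5: 5}

# "The rat the cat the dog bit chased escaped" (level 3, borderline)
#  0   1   2   3   4   5   6   7      8
level3 = {0: 1, 1: 8, 2: 3, 3: 7, 4: 5, 5: 6, 6: 3, 7: 1, 8: 8}

print(total_dependency_length(level2))  # 10
print(total_dependency_length(level3))  # 24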

BTW just noticed your "Combinatorial Hierarchy, Computational
Irreducibility and other things that just don't matter..." thread.
Perhaps that thread is a better location to discuss this. Were you
positing in that thread that all of maths and physics might be
emergent on combinatorial hierarchies? Were you saying yes, but it
doesn't matter to the practice of AGI, because for physics we can't
find the combinatorial basis, and in practice we can find top down
heuristics which work well enough?

Well, maybe for language a) we can't find top down heuristics which
work well enough and b) we don't need to, because for language a
combinatorial basis is actually sitting right there for us, manifest,
in (sequences of) text.

With language we don't just have the top-down perception of structure
like we do with physics (or maths.) Language is different to other
perceptual phenomena that way. Because language is the brain's attempt
to generate a perception in others. So with language we're also privy
to what the system looks like bottom up. We also have the bottom-up
"word" tokens, which are the combinatorial basis that generates a
perception.

Anyway, it seems like my point is similar to your point: language
structure, and cognition, might be emergent on combinatorial
hierarchies.

LLMs go part way to implementing that emergent structure. They succeed
to the extent they abandon an explicit search for top-down structure,
and just allow the emergent structure to balloon. Seemingly endlessly.
But they are a backwards implementation of emergent structure.
Succeeding by allowing the structure to grow. But failing because
back-prop assumes the structure will somehow not grow too. That there
will be an end to growth. Which will somehow be a compression of the
growth it hasn't captured yet... Actually, if it grows, you can't
capture it all. And in particular, back-prop can't capture all of the
emergent structure, because, like physics, that emergent structure
manifests some entanglement, and chaos.

In this thesis, LLMs are on the right track. We just need to replace
back-prop with some other way of finding emergent hierarchies of
predictive symmetries, and do it generatively, on the fly.

In practical terms, maybe, as I said earlier, the variational
estimation with heat of Extropic. Or maybe some kind of distributed
reservoir computer like LiquidAI are proposing. Otherwise just
straight out spiking NNs should be a good fit. If we focus on actively
seeking new variational symmetries using the spikes, and not
attempting to (mis)fit them to back-propagation.

On Tue, May 7, 2024 at 11:32 PM James Bowery  wrote:
>...
>
> At all levels of abstraction where natural science is applicable, people 
> adopt its unspoken presumption which is that mathematics is useful.  This is 
> what makes Solomonoff's proof relevant despite the intractability of proving 
> that one has found the ideal mathematical model.  The hard sciences are 
> merely the most obvious level of abstraction in which one may recognize this.
>...
>
> Any constraint on the program search (aka search for the ultimate algorithmic 
> encoding of all data in evidence at any given level of abstraction) is a 
> prior.  The thing that makes the high order push down automata (such as 
> nested stacks) interesting is that it may provide a constraint on program 
> search that evolution has found useful enough to hard wire into the structure 
> of the human brain -- specifically in the ratio of "capital investment" 
> between sub-modules of brain tissue.  This is a constraint, the usefulness of 
> which, may be suspected as generally applicable to the extent that human 
> cognition is generally applicable.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M321384a83da19a33df5ba986
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
Ah also BTW, just a theory, maybe fewer females in STEM, tech, chess, etc.
is due to conditioning from upbringing. And in chimpanzees, a result of
physical strength?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M501182a8e344b3247a236d5a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
Agreed  male ego is a necessity for human civilization, I have a whole
lot of it, most likely. But as people living in the post-barbaric age, we
should be more self-aware 

On Tue, May 7, 2024 at 6:01 PM Matt Mahoney  wrote:

> On Tue, May 7, 2024 at 4:17 PM Keyvan M. Sadeghi
>  wrote:
> >
> > This list reeks of male testosterone
> 
> So does the whole STEM field. Maybe there are biological differences
> in the brain, like why males commit 95% of murders in both humans and
> chimpanzees.
> 
> Data compression is like that. It's all about smaller, faster, better.
> Who can top the benchmarks? Nobody is in it for the money. If it
> wasn't for male egos, progress would grind to a halt.
> 
> I do miss Ben and all the others who were doing actual research in AGI
> when he created the list about 20 years ago. I mean, he coined the
> term "AGI". I learned a lot back then.
> 
> --
> -- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M41654ba34117635dbb1f1d7e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
Kolmogorov proved there is no such thing as an infinitely powerful
compressor. Not even if you have infinite computing power.

A compressor is a program that inputs a string and outputs a short
description of it, like another string encoding a program in some
language that outputs the original string. A string is a finite length
sequence of 0 or more characters from a finite alphabet such as binary
or ASCII. Strings can be ordered like numbers, by increasing length
and lexicographically for strings of the same length.

Suppose you had an infinitely powerful compressor, one that inputs a
string and outputs the shortest possible description of it. You could
use your program to test whether another compressor found the best
possible compression by decompressing it and compressing again with
your compressor to see if it got any smaller.

The proof goes like this. How does your test program answer "the first
string that cannot be described in less than 1,000,000 characters"? That
phrase is itself a description of the string in far fewer than 1,000,000
characters, which is a contradiction (this is the Berry paradox).
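
A rough Python sketch of the contradiction, assuming a hypothetical oracle
shortest_description(s) that always returns a shortest program printing s
(no such total, computable oracle can exist):

from itertools import count, product

def all_strings(alphabet="01"):
    # Enumerate strings in the standard order: by length, then lexicographically.
    yield ""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def first_incompressible(bound, shortest_description):
    # Return the first string whose shortest description is at least `bound`
    # characters long. But this function plus `bound` is itself a description
    # of that string, far shorter than `bound` -- the contradiction.
    for s in all_strings():
        if len(shortest_description(s)) >= bound:
            return s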

On Tue, May 7, 2024 at 5:50 PM John Rose  wrote:
>
> On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
>
> We don't know the program that computes the universe because it would require 
> the entire computing power of the universe to test the program by running it, 
> about 10^120 or 2^400 steps. But we do have two useful approximations. If we 
> set the gravitational constant G = 0, then we have quantum mechanics, a 
> complex differential wave equation whose solution is observers that see 
> particles. Or if we set Planck's constant h = 0, then we have general 
> relativity, a tensor field equation whose solution is observers that see 
> space and time. Wolfram and Yudkowsky both estimate this unknown program is 
> only a few hundred bits long, and I agree. It is roughly the complexity of 
> quantum mechanics and relativity taken together, and roughly the minimum size 
> by Occam's Razor of a multiverse where the n'th universe is run for n steps 
> until we observe one that necessarily contains intelligent life.
>
>
> Sounds like the KC of U, the maximum lossless compression of the universe 
> assuming infinite resources for perfect prediction. But there is a lot of 
> lossy losslessness out there for imperfect prediction or locally perfect 
> lossless, near lossless, etc. That intelligence has a physical computational 
> topology across spacetime where much is redundant though estimable… and 
> temporally changing. I don’t rule out though no matter how improbable that 
> there could be an infinitely powerful compressor within this universe, an 
> InfiniComp. Weird stuff has been shown to be possible. We can conceive of it 
> but there may be issues with our conception since even that is bound by 
> limits.
>



-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mdbff080b9764f7c48d917538
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 4:17 PM Keyvan M. Sadeghi
 wrote:
>
> This list reeks of male testosterone

So does the whole STEM field. Maybe there are biological differences
in the brain, like why males commit 95% of murders in both humans and
chimpanzees.

Data compression is like that. It's all about smaller, faster, better.
Who can top the benchmarks? Nobody is in it for the money. If it
wasn't for male egos, progress would grind to a halt.

I do miss Ben and all the others who were doing actual research in AGI
when he created the list about 20 years ago. I mean, he coined the
term "AGI". I learned a lot back then.


-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mc7efe028fd697eece6b17bdc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
> We don't
know the program that computes the universe because it would require
the entire computing power of the universe to test the program by
running it, about 10^120 or 2^400 steps. But we do have two useful
approximations. If we set the gravitational constant G = 0, then we
have quantum mechanics, a complex differential wave equation whose
solution is observers that see particles. Or if we set Planck's
constant h = 0, then we have general relativity, a tensor field
equation whose solution is observers that see space and time. Wolfram
and Yudkowsky both estimate this unknown program is only a few hundred
bits long, and I agree. It is roughly the complexity of quantum
mechanics and relativity taken together, and roughly the minimum size
by Occam's Razor of a multiverse where the n'th universe is run for n
steps until we observe one that necessarily contains intelligent life.

Sounds like the KC of U, the maximum lossless compression of the universe 
assuming infinite resources for perfect prediction. But there is a lot of 
lossy losslessness out there for imperfect prediction or locally perfect 
lossless, near lossless, etc. That intelligence has a physical computational 
topology across spacetime where much is redundant though estimable… and 
temporally changing. I don’t rule out though no matter how improbable that 
there could be an infinitely powerful compressor within this universe, an 
InfiniComp. Weird stuff has been shown to be possible. We can conceive of it 
but there may be issues with our conception since even that is bound by limits.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8f6799ef3b2e99f86336b4cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Towards AGI: the missing piece

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 7:44 AM  wrote:
>
> And this is what AI would do: https://github.com/mind-child/mirror

"The algorithm mirrors its environment. If we treat it poorly, it will
be our enemy. If we treat it well, it will be our friend."

Not quite. That would be true of an upload, which is a robot
programmed to predict what a human would do and carry out those
predictions in real time. But it doesn't have to be programmed that
way.

We know how this works with language models. They pass the Turing test
using nothing more than text prediction (a point I argued when I
started the large text benchmark in 2006). An LLM knows that humans
respond to kindness with kindness and anger with anger. It will
respond to you that way because that's how it predicts a human would
respond. You can tell it to express any emotion you want and it knows
how, just like an actor. Or someone else can tell it. But it has no
feelings.

You can't control how you feel. An AI has no such limitation.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tef2462d212b37e50-Mbc580cbecfb00b5c09cf365b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Keyvan M. Sadeghi
This list reeks of male testosterone, egos reaching other universes. I
remembered why I stopped reading it 10 years ago.

Poor Ben! The man is a father to all of ya: when you had no one else in the
world who had the slightest idea of what you talk about, he gathered you
in this sanctuary!

Give in to your conspiracies and send wise-sounding emails from under
your blankets. Worship Elon and Altman and give talks to the shitty media.

Meanwhile some of us are actually building! If you have synergetic plans in
mind, please share! If you can't get a hold of your ego, wank it off and
spare us the authoritative stream of consciousness!



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M0568f598794ce8170f4aad2c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Quan Tesla
I'm thinking more about probability paths for all possible particle-path
outcomes of particle waves and Heisenberg. This is a pre-entanglement state.

Perhaps this refers to Ben's "chaos", whereas photons may represent
"order".

Trying to be pragmatic in my thinking: for AGI, at least the functionality
of the photoelectric effect within a controlled quantum electrodynamical
environment has to be constructed.

I think that might be the foundational lab required for entangling quantum
information. Once entangled particles could be identified from such
"chaos", a discrete wave function could be set up to act as a carrier
channel for ubiquitous quantum communication. However, messaging is a
different matter.



On Tue, May 7, 2024, 19:54 Matt Mahoney  wrote:

> On Tue, May 7, 2024 at 11:14 AM Quan Tesla  wrote:
> >
> > Don't you believe that true randomness persists in asymmetry, or even
> that randomness would be found in supersymmetry? I'm referring here to the
> uncertainty principle.
> >
> > Is your view that the universe is always certain about the position and
> momentum of every-single particle in all possible worlds?
> 
> If I flip a coin and peek at the result, then your probability of
> heads is different than my probability of heads.
> 
> Likewise, in quantum mechanics, a system observing a particle is
> described by Schrodinger's wave equation just like any other system.
> The solution to the equation is the observer sees a particle in some
> state that is unknown in advance to the observer but predictable to
> someone who knows the quantum state of the system and has sufficient
> computing power to solve it, neither of which is available to the
> observer.
> 
> We know this because of Schrodinger's cat. The square of the wave
> function gives you the probability of observing a particle in the
> absence of more information, such as entanglement with another
> particle that you already observed. It is the same thing as peeking at
> my flipped coin, except that the computation is intractable without a
> quantum computer as large as the system it is modeling, which we don't
> have.
> 
> Or maybe you mean algorithmic randomness, which is independent of an
> observer. But again you have the same problem. An iterated
> cryptographic hash function with a 1000 bit key is random because you
> lack the computing power to guess the seed. Likewise, if you knew the
> exact quantum state of an observer, the computation required to solve
> it grows exponentially with its size. That's why we can't compute the
> freezing point of water by modeling atoms.
> 
> A theory of everything is probably a few hundred bits. But knowing
> what it is would be useless because it would make no predictions
> without the computing power of the whole universe. That is the major
> criticism of string theory.
> 
> --
> -- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mc635b984d4b6577aa8c38a54
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread James Bowery
On Mon, May 6, 2024 at 9:22 PM Rob Freeman 
wrote:

> ...
> James: "Physics Informed Machine Learning". "Building models from data
> using optimization and regression techniques".
>
> Fine. If you have a physics to constrain it to. We don't have that
> "physics" for language.
>

At all levels of abstraction where natural science is applicable, people
adopt its unspoken presumption which is that mathematics is useful.  This
is what makes Solomonoff's proof relevant despite the intractability of
proving that one has found the ideal mathematical model.  The hard sciences
are merely the most *obvious* level of abstraction in which one may
recognize this.


> Richard Granger you say? The brain is constrained to be a "nested stack"?
>
>
> https://www.researchgate.net/publication/343648662_Toward_the_quantification_of_cognition


Any constraint on the program search (aka search for the ultimate
algorithmic encoding of all data in evidence at any given level of
abstraction) is a prior.  The thing that makes the high order push down
automata (such as nested stacks) interesting is that it may provide a
constraint on program search that evolution has found useful enough to hard
wire into the structure of the human brain -- specifically in the ratio of
"capital investment" between sub-modules of brain tissue.  This is a
constraint, the usefulness of which, may be suspected as generally
applicable to the extent that human cognition is generally applicable.


>
> Language is a nested stack? Possibly. Certainly you get a (softish)
> ceiling of recursion starting level 3. The famous, level 2: "The rat
> the cat chased escaped" (OK) vs. level 3: "The rat the cat the dog bit
> chased escaped." (Borderline not OK.)
>
> How does that contradict my assertion that such nested structures must
> be formed on the fly, because they are chaotic attractors of
> predictive symmetry on a sequence network?
>
> On the other hand, can fixed, pre-structured, nested stacks explain
> contradictory (semantic) categories, like "strong tea" (OK) vs
> "powerful tea" (not OK)?
>
> Unless stacks form on the fly, and can contradict, how can we explain
> that "strong" can be a synonym (fit in the stack?) for "powerful" in
> some contexts, but not others?
>
> On the other hand, a constraint like an observation of limitations on
> nesting, might be a side effect of the other famous soft restriction,
> the one on dependency length. A restriction on dependency length is an
> easier explanation for nesting limits, and fits with the model that
> language is just a sequence network, which gets structured (into
> substitution groups/stacks?) on the fly.
>
> On Mon, May 6, 2024 at 11:06 PM James Bowery  wrote:
> >
> > Let's give the symbolists their due:
> >
> > https://youtu.be/JoFW2uSd3Uo?list=PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa
> >
> > The problem isn't that symbolists have nothing to offer, it's just that
> they're offering it at the wrong level of abstraction.
> >
> > Even in the extreme case of LLM's having "proven" that language modeling
> needs no priors beyond the Transformer model and some hyperparameter
> tweaking, there are language-specific priors acquired over the decades if
> not centuries that are intractable to learn.
> >
> > The most important, if not conspicuous, one is Richard Granger's
> discovery that Chomsky's hierarchy elides the one grammar category that
> human cognition seems to use.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mf038b68611937324cad488c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 11:14 AM Quan Tesla  wrote:
>
> Don't you believe that true randomness persists in asymmetry, or even that 
> randomness would be found in supersymmetry? I'm referring here to the 
> uncertainty principle.
>
> Is your view that the universe is always certain about the position and 
> momentum of every-single particle in all possible worlds?

If I flip a coin and peek at the result, then your probability of
heads is different than my probability of heads.

Likewise, in quantum mechanics, a system observing a particle is
described by Schrodinger's wave equation just like any other system.
The solution to the equation is the observer sees a particle in some
state that is unknown in advance to the observer but predictable to
someone who knows the quantum state of the system and has sufficient
computing power to solve it, neither of which is available to the
observer.

We know this because of Schrodinger's cat. The square of the wave
function gives you the probability of observing a particle in the
absence of more information, such as entanglement with another
particle that you already observed. It is the same thing as peeking at
my flipped coin, except that the computation is intractable without a
quantum computer as large as the system it is modeling, which we don't
have.
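
As a minimal numerical illustration of "the square of the wave function
gives you the probability" (just an arbitrary normalized two-state
amplitude vector, not tied to any particular experiment):

import numpy as np

# Amplitudes of a toy two-state superposition; probabilities are |amplitude|^2.
psi = np.array([1 / np.sqrt(3), np.sqrt(2 / 3) * np.exp(0.7j)])
probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)   # normalized
print(probs)                          # ~[0.333, 0.667]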

Or maybe you mean algorithmic randomness, which is independent of an
observer. But again you have the same problem. An iterated
cryptographic hash function with a 1000 bit key is random because you
lack the computing power to guess the seed. Likewise, if you knew the
exact quantum state of an observer, the computation required to solve
it grows exponentially with its size. That's why we can't compute the
freezing point of water by modeling atoms.
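
A minimal sketch of that kind of generator, with SHA-256 standing in for
the iterated hash (the choice of hash and the placeholder seed are
illustrative assumptions only):

import hashlib

def hash_stream(seed: bytes, n_blocks: int) -> bytes:
    # Iterate a cryptographic hash from a secret seed. Without the seed
    # (say, a 1000-bit key), the output is computationally indistinguishable
    # from random, even though it is fully determined by a short program
    # plus that seed.
    state = seed
    blocks = []
    for _ in range(n_blocks):
        state = hashlib.sha256(state).digest()
        blocks.append(state)
    return b"".join(blocks)

# 128 "random-looking" bytes that no observer can predict without the seed.
stream = hash_stream(b"stand-in for a 1000-bit secret key", 4)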

A theory of everything is probably a few hundred bits. But knowing
what it is would be useless because it would make no predictions
without the computing power of the whole universe. That is the major
criticism of string theory.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M348cbbd93444a977d8ad5885
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Quan Tesla
Don't you believe that true randomness persists in asymmetry, or even that
randomness would be found in supersymmetry? I'm referring here to the
uncertainty principle.

Is your view that the universe is always certain about the position and
momentum of every-single particle in all possible worlds?

On Tue, May 7, 2024, 18:03 Matt Mahoney  wrote:

> Let me explain what I mean by the intelligence or predictive power of
> the universe. I mean that the universe computes everything in it, the
> position of every atom over time. If I knew that, I could tell you
> everything that will ever happen, like tomorrow's winning lottery
> numbers or the exact time of death of every person who has ever lived
> or ever will. I could tell you if there was life on other planets, and
> if so, what it looks like and where to find it.
> 
> Of course that is impossible by Wolpert's theorem. The universe can't
> know everything about itself and neither can anything in it. We don't
> know the program that computes the universe because it would require
> the entire computing power of the universe to test the program by
> running it, about 10^120 or 2^400 steps. But we do have two useful
> approximations. If we set the gravitational constant G = 0, then we
> have quantum mechanics, a complex differential wave equation whose
> solution is observers that see particles. Or if we set Planck's
> constant h = 0, then we have general relativity, a tensor field
> equation whose solution is observers that see space and time. Wolfram
> and Yudkowsky both estimate this unknown program is only a few hundred
> bits long, and I agree. It is roughly the complexity of quantum
> mechanics and relativity taken together, and roughly the minimum size
> by Occam's Razor of a multiverse where the n'th universe is run for n
> steps until we observe one that necessarily contains intelligent life.
> 
> --
> -- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8e0a12e8d40cd447a165
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
Let me explain what I mean by the intelligence or predictive power of
the universe. I mean that the universe computes everything in it, the
position of every atom over time. If I knew that, I could tell you
everything that will ever happen, like tomorrow's winning lottery
numbers or the exact time of death of every person who has ever lived
or ever will. I could tell you if there was life on other planets, and
if so, what it looks like and where to find it.

Of course that is impossible by Wolpert's theorem. The universe can't
know everything about itself and neither can anything in it. We don't
know the program that computes the universe because it would require
the entire computing power of the universe to test the program by
running it, about 10^120 or 2^400 steps. But we do have two useful
approximations. If we set the gravitational constant G = 0, then we
have quantum mechanics, a complex differential wave equation whose
solution is observers that see particles. Or if we set Planck's
constant h = 0, then we have general relativity, a tensor field
equation whose solution is observers that see space and time. Wolfram
and Yudkowsky both estimate this unknown program is only a few hundred
bits long, and I agree. It is roughly the complexity of quantum
mechanics and relativity taken together, and roughly the minimum size
by Occam's Razor of a multiverse where the n'th universe is run for n
steps until we observe one that necessarily contains intelligent life.
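
To be concrete about the schedule implied by "the n'th universe is run for
n steps", here is a toy dovetailing sketch; step() and initial_state() are
stand-ins for an abstract interpreter over integer-indexed programs, an
assumption made only for illustration:

def dovetail(step, initial_state, rounds):
    # Round n advances each of programs 1..n to a total of n steps, so every
    # program eventually gets unbounded running time. Any "universe" whose
    # program runs long enough to contain observers is eventually simulated
    # that far.
    states = {}
    for n in range(1, rounds + 1):
        for k in range(1, n + 1):
            if k not in states:
                states[k] = (initial_state(k), 0)
            state, done = states[k]
            while done < n:
                state = step(k, state)
                done += 1
            states[k] = (state, done)
    return states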

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8bedda3b66ddcfb10805ff85
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote:
> To suggest that every hypothetical universe has its own alpha, makes no 
> sense, as alpha is all encompassing as it is.

You are exactly correct. There is another special case besides expressing the 
intelligence of the universe, and that is expressing the intelligence of a 
hypothetical universe at zero communication complexity... unless there is some 
unknown Gödel channel.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me43083c2dce972b7746d22ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Quan Tesla
Alpha is dimensionless and unitless. To suggest that every hypothetical
universe has its own alpha makes no sense, as alpha is all-encompassing as
it is.

However, if you were to offer up a suggestion that every universe may have
its own version of a triple-alpha process, then you'll have my fullest
attention.

On Thu, Apr 11, 2024 at 6:48 PM John Rose  wrote:

> On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
>
> What assumption is that?
>
>
> The assumption that alpha is unitless. Yes they cancel out but the simple
> process of cancelling units seems incomplete.
>
> Many of these constants though are re-representations of each other. How
> many constants does everything boil down to I wonder...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M92bb3e56194310c4a0e69941
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote:
> So when we talk about the intelligence of the universe, we can only really 
> measure its computing power, which we generally correlate with prediction 
> power as a measure of intelligence.

The universe's overall prediction power should increase, for example with the 
rise of intelligent civilizations among galaxies, even though physical entropy is 
increasingly generated in the universe's environment. All these prediction powers 
would increase unevenly, though they would become increasingly networked via 
interstellar communication. A prediction-power apex would be different from a 
sum: it emerges from biological negentropy and then from synthetic AGI, but 
physical prediction "power" across the universe implies a sum versus an apex… 
if one civilization's AGI has more prediction capacity or potential.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M00d6486e8f5ef51067361ff8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Towards AGI: the missing piece

2024-05-07 Thread ivan . moony
And this is what AI would do: https://github.com/mind-child/mirror
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tef2462d212b37e50-M355116d5261472fd2b6ee375
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-05-07 Thread John Rose

For those genuinely interested in this particular imminent threat, here is a 
case study (long video) circulating on how Western consciousness is being 
programmatically hijacked, presented by a gentleman who has been involved with 
and researching it for several decades. He describes this particular "rogue, 
unfriendly" as a cloaked remnant "KGB Hydra". We can only speculate about what 
it really is in this day and age, since the Soviet Union and the KGB were 
officially dissolved in 1991, and some of us are aware of the advanced 
technologies that they were working on back then.

https://twitter.com/i/status/1779017982733107529

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M40062529b066bd7448fe50a0
Delivery options: https://agi.topicbox.com/groups/agi/subscription