Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-11 Thread Quan Tesla
Thanks for the references Rob. I'll be sure to pay the links a proper
visit. Yes, De Bono was on every consultant's lips for a while.

Not corporate, but in specialist operational training for the military.
This included doctrine, drills, deployment and R&D in counter-insurgency
warfare.

I appreciate your views on quantum bastardization. In my case, I
continually test my work against comparative, industry-standard frameworks
and "methodologies". Not many SSMs around though.

I'm also taking your point about technical AGI specifics on board. My
contention is that via my method the MMI for knowledge engineering has been
completed and extensively tested in the field for more than 10 years, on
commercial projects in the public and private sectors. I think it's ready
for automation. This is where the gifted developers would feature.

This state of completion should then settle ongoing disputes around
ambiguity, nestedness, hierarchy, and so forth. I'm not claiming
perfection, but the work's been done, and well done.

I've been extracting heuristics and axioms from the resultant BOK (body of
knowledge). One such is a 6x6 matrix for probability-based, holistic-systems
specification. I think it's Cox who would tell us that this feature satisfies
the definition of the method being a quantum-enabled system.

Why carry on reinventing the wheel because it wasn't invented in one's
backyard? In general, I just find such reasoning suboptimal.

My SSM's approach is for any system specification to be driven mostly by
core systems, as an inside-out (atomic) focus. That's probably the closest
to the standard model we can get.

The quest for including functionality for entanglement and quantum gravity
is now on. My hypothesis is that the Po1 equation would hold the key to
evolutionary functionality. I refer to this mechanism as the triple-alpha1
process.

Could such a system generate its own light energy? Theoretically, yes. AGI
would be energy self-generative.

The fractal specification method embraces quantum coherence, thus
normalizing components as pure, polymorphic objects. Further, it is
inherently driven by meaningfulness, in the sense of emergence (outcomes
management).

Last, it satisfies a clinical requirement by providing 1-step mutation
functionality. This translates into traceable knowledge mutation (as
evolutionary systems mimicking DNA).

I've been investigating whether the matured diagrams could be converted into
rich knowledge graphs. Given that the method and its output adhere to IEEE
standards, I see no reason why this cannot be done. I see a fit.
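
As a minimal sketch of that conversion, assuming the "blocks and arrows"
reduce to labelled nodes and directed edges (the node and relation names
below are hypothetical placeholders, not Essence terms):

import networkx as nx

# Each diagram "block" becomes a node; each "arrow" a labelled, directed edge.
G = nx.DiGraph()
G.add_node("CoreSystem", kind="block")
G.add_node("Subsystem", kind="block")
G.add_edge("CoreSystem", "Subsystem", relation="drives")

# (subject, relation, object) triples, the raw material of a knowledge
# graph, then fall out directly:
for u, v, attrs in G.edges(data=True):
    print(u, attrs["relation"], v)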

The resultant structure of systems information is standardized in a common
symbolic language - context-dependent, content-independent, robust, and
scalable.

I'm more purist epistemologist today than bandwagoneer. Hence, I still
integrate the new with the existing. I imagine my methodology as a proper
reasoning and decision-making engine within a version of AGI. Perhaps, in
the role of a co-enabler of human2machine consciousness. Those algorithms
are doable (credit to Penrose and Hameroff).

The prof mentor/friend and I have been integrating (theoretically) my
method with KIM, their statistical knowledge engine. That introduces the
4x4 PDCA (Plan, Do, Check, Act) matrix.

These are but components for functional AGI. We have no plan to actually
bring an AGI version to "life" yet, which places me outside the general
competitor ring and fully independent. No doubt though, we're busy
designing an AGI version, independently of each other.

A few white papers (with definitions and references), and research results
of mine (using the diagramming "blocks and arrows" method - aka "Essence")
and supporting industry-integrated architectural frameworks can be
viewed/downloaded on Researchgate. Might be worth a quick browse?  Happy to
engage in further discussions, without divulging deeper algorithms.

The main search string would be: Robert Benjamin and tacit knowledge
engineering

Good chat!

On Sat, May 11, 2024, 09:39 Rob Freeman  wrote:

> In the corporate training domain, you must have come across Edward de
> Bono? I recall he also focuses on discontinuous change and novelty.
>
> Certainly I would say there is broad scope for the application of,
> broadly quantum flavoured, AI based insights about meaning in broader
> society. Not just project management. But not knowing how your
> "Essence" works, I can't comment how much that coincides with what I
> see.
>
> There's a lot of woo woo which surrounds quantum, so I try to use
> analogies sparingly. But for ways to present it, you might look at Bob
> Coecke's books. I believe he has invented a whole visual,
> diagrammatic, system for talking about quantum systems. He is proud of
> having used it to teach high school students. The best reference for
> that might be his book "Picturing Quantum Processes".
>
> Thanks for your interest in reading more about the solutions I see. I
> guess I've been lazy in not putting out more formal 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-10 Thread Rob Freeman
In the corporate training domain, you must have come across Edward de
Bono? I recall he also focuses on discontinuous change and novelty.

Certainly I would say there is broad scope for the application of,
broadly quantum flavoured, AI based insights about meaning in broader
society. Not just project management. But not knowing how your
"Essence" works, I can't comment how much that coincides with what I
see.

There's a lot of woo woo which surrounds quantum, so I try to use
analogies sparingly. But for ways to present it, you might look at Bob
Coecke's books. I believe he has invented a whole visual,
diagrammatic, system for talking about quantum systems. He is proud of
having used it to teach high school students. The best reference for
that might be his book "Picturing Quantum Processes".

Thanks for your interest in reading more about the solutions I see. I
guess I've been lazy in not putting out more formal presentations.
Most of what I have written has been fairly technical, and directed at
language modeling.

The best non-technical summary might be an essay I posted on substack, end '22:

https://robertjohnfreeman.substack.com/p/essay-response-to-question-which

That touches briefly on the broader social implications of subjective
truth, and how a subjective truth which is emergent of objective
structural principles, might provide a new objective social consensus.

On quantum indeterminacy emerging from the complexity of combinations
of perfectly classical and observable elements, I tried to present
myself in contrast to Bob Coecke's top-down quantum grammar approach,
on the Entangled Things podcast:

https://www.entangledthings.com/entangled-things-rob-freeman

You could look at my Facebook group, Oscillating Networks for AI.
Check out my Twitter, @rob_freeman.

Technically, the best summary is probably still my AGI-21
presentation. Here's the workshop version of that, with discussion at
the end:

https://www.youtube.com/watch?v=YiVet-b-NM8

On Fri, May 10, 2024 at 9:18 PM Quan Tesla  wrote:
>
> Rob.
>
> Thank you for being candid. My verbiage isn't deliberate. I don't seek 
> traction, or funding for what I do. There's no real justification for your 
> mistrust.
>
> Perhaps, let me provide some professional background instead. As an 
> independent researcher, I follow scientific developments among multiple 
> domains, seeking coherence and sense-making for my own scientific endeavor, 
> spanning 25 years. AGI has been a keen interest of mine since 2013. For AGI, 
> I advocate pure machine consciousness, shying away from biotech approaches.
>
> My field of research interest stems from a previous career in cross-cultural 
> training, and the many challenges it presented in the 80's. As 
> designer/administrator/manager and trainer, one could say I fell in love with 
> optimal learning methodologies and associated technologies.
>
> Changing careers, I started in mainframe operating to advance to programming, 
> systems analysis and design, information and business engineering and 
> ultimately contracting consultant. My one, consistent research area remained 
> knowledge engineering, especialky tacit-knowledge engineering. Today, I 
> promote the idea for a campus specializing in quantum systems engineering. 
> I'm generally regarded as being a pracademic of sorts.
>
> Like many of us practitioners here, I too was fortunate to learn with a 
> number of founders and world-class methodologists.
>
> In 1998, my job in banking was researcher/architect to the board of a 5-bank 
> merger, today part of the Barclays Group. As futurist architect and peer 
> reviewer, I was introduced to quantum physics. Specifically, in context of 
> the discovery of the quark.
>
> I realized that future, exponential complexity was approaching, especially 
> for knowledge organizations. I researched possible solutions worldwide, but 
> found none at that time, which concerned me deeply.
>
> Industries seemed to be rushing into the digital revolution without a 
> reliable, methodological management foundation in place. As architect, I had 
> nothing to offer as a useful, 10-year futures outlook either. I didn't feel 
> competent to be the person to address that apparent gap.
>
> A good colleague of mine was a proven IE methodologist and consultant to IBM 
> Head Office. I approached him twice with my concerns, asking him to adapt his 
> proven IE methodology to address the advancing future. He didn't take my 
> concerns seriously at all.
>
> For the next year, the future seemed ever clearer to me, yet I couldn't 
> find anyone to develop a future aid for enterprises as a roadmap toolkit, or 
> a coping mechanism for a complex-adaptive reality.  The world was hung up on 
> UML and Object oriented technologies.
>
> In desperation, I decided, even though I probably was less suitable for 
> the job, to develop the future toolkit I envisioned.
>
> That start was 25 years ago. Today, I have a field tested, hand 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-10 Thread Quan Tesla
Rob

Thank you for being candid. My verbiage isn't deliberate. I don't seek
traction, or funding for what I do. There's no real justification for your
mistrust.

Perhaps, let me provide some professional background instead. As an
independent researcher, I follow scientific developments among multiple
domains, seeking coherence and sense-making for my own scientific endeavor,
spanning 25 years. AGI has been a keen interest of mine since 2013. For
AGI, I advocate pure machine consciousness, shying away from biotech
approaches.

My field of research interest stems from a previous career in
cross-cultural training, and the many challenges it presented in the 80's.
As designer/administrator/manager and trainer, one could say I fell in love
with optimal learning methodologies and associated technologies.

Changing careers, I started in mainframe operating to advance to
programming, systems analysis and design, information and business
engineering and ultimately contracting consultant. My one, consistent
research area remained knowledge engineering, especially tacit-knowledge
engineering. Today, I promote the idea for a campus specializing in quantum
systems engineering. I'm generally regarded as being a pracademic of sorts.

Like many of us practitioners here, I too was fortunate to learn with a
number of founders and world-class methodologists.

In 1998, my job in banking was researcher/architect to the board of a
5-bank merger, today part of the Barclays Group. As futurist architect and
peer reviewer, I was introduced to quantum physics. Specifically, in
context of the discovery of the quark.

I realized that future, exponential complexity was approaching, especially
for knowledge organizations. I researched possible solutions worldwide, but
found none at that time, which concerned me deeply.

Industries seemed to be rushing into the digital revolution without a
reliable, methodological management foundation in place. As architect, I
had nothing to offer as a useful, 10-year futures outlook either. I didn't
feel competent to be the person to address that apparent gap.

A good colleague of mine was a proven IE methodologist and consultant to
IBM Head Office. I approached him twice with my concerns, asking him to
adapt his proven IE methodology to address the advancing future. He didn't
take my concerns seriously at all.

For the next year, the future seemed ever clearer to me, yet I
couldn't find anyone to develop a future aid for enterprises as a roadmap
toolkit, or a coping mechanism for a complex-adaptive reality.  The world
was hung up on UML and Object oriented technologies.

In desperation, I decided, even though I probably was less suitable for
the job, to develop the future toolkit I envisioned.

That start was 25 years ago. Today, I have a field-tested, hand-crafted
methodology which, if I had to give it a name, I'd call "Essence".

As new science emerges, I update it with relevant algorithms and look for a
pro-bono project of sufficient complexity to test it on. E.g., I focused on
establishing a predictable baseline for the COVID-19 experience.

Furthermore, during the last 18 months, I assisted a visionary in Cleveland
with converting his holistic, 4D diagrammatical representation into mature,
system models. Presently, he's still working on his lexicon.  That was in
support of their community based, Cleveland inner-city rejuvenation
project.

During that test, I added vector specification to the quantum-enabled
systems engineering method. That addition now offers deabstraction
management to X dimensions.

My research continues, my intent being to marry my methodology with Feynman
diagrams and Haramein's latest unified field theory. My modest
contributions have been published independently but, as publications go,
my public-domain knowledge dates back 10 years. Old stuff.

I do protect my personal IP. The investment was considerable. E.g., for the
past 10 years I've been actively involved with informal, applied learning
with a retired prof at NCSU.

We grok the latest thinking and advances. In this manner, I discovered a
new pattern in nature, which we called the Po1 (the pattern of oneness).

This is a fractal-of-fractals pattern, which potentially holds great
promise for future society, inasmuch as helping to extract energy and
matter from space and distributing it around the globe, spacecraft, and
other planets.

Unfortunately, it also holds great promise for warcraft, which I'm
personally not interested in. This view has frustrated progress, as I
refuse to be drawn into speculations about neutron bombs.

As such, I don't discuss details of the Po1, or even write them down. I've
even "brain encrypted" them against remote viewing.

IMO, when I see the frustration on this group from supersmart,
exceptionally talented, yet stubborn and sometimes short-sighted
individuals, I sometimes feel compelled to try and provide a nudge. Even
Ben can do with it. We all could. After all, we're 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-10 Thread stefan.reich.maker.of.eye via AGI
On Friday, May 10, 2024, at 6:57 AM, Rob Freeman wrote:
> Quan. You may be talking sense, but you've got to tone down the
buzzwords by a whole bunch. It's suspicious when you jam so many in
together.
You put it very generously :D
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mcd49e5194ca7b552e3628f9c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Rob Freeman
Quan. You may be talking sense, but you've got to tone down the
buzzwords by a whole bunch. It's suspicious when you jam so many in
together.

If you think there's a solution there, what are you doing about it in practice?

Be more specific. For instance, within the span of what I understand
here I might guess at relevance for Coecke's "Togetherness":

From quantum foundations via natural language meaning to a theory of everything
https://arxiv.org/pdf/1602.07618.pdf

Or Tomas Mikolov's (key instigator of word2vec?) attempts to get
funding to explore evolutionary computational automata.

Tomas Mikolov - "We can design systems where complexity seems to be
growing" (Another one from AGI-21. It can be hard to motivate yourself
to listen to a whole conference, but when you pay attention, there can
be interesting stuff on the margins.)
https://youtu.be/CnsqHSCBgX0?t=10859

There's also an Artificial Life, ALife, community. Which seems to be
quite big in Japan. A group down in Okinawa under Tom Froese, anyway.
(Though they seem to go right off the edge and focus on some kind of
community consciousness.) But also in the ALife category I think of
Bert Chan, recently moved to Google(?).

https://biofish.medium.com/lenia-beyond-the-game-of-life-344847b10a72

All of that. And what Dreyfus called Heideggerian AI. Associated with
Rodney Brooks, and his "Fast, Cheap, and Out of Control", Artificial
Organism bots. It had a time in Europe especially, Luc Steels, Rolf
Pfeifer? The recently lost Daniel Dennett.

Why Heideggerian AI failed and how fixing it would require making it
more Heideggerian☆
Hubert L. Dreyfus
https://cid.nada.kth.se/en/HeideggerianAI.pdf

How would you relate what you are saying to all of these?

I'm sympathetic to them all. Though I think they miss the insight of
predictive symmetries. Which language drives you to. And what LLMs
stumbled on too. And that's held them up. Held them up for 30 years or
more.

ALife had a spike around 1995. Likely influencing Ben and his Chaotic
Logic book, too. They had the complex system idea back then, they just
didn't have a generative principle to bring it all together.

Meanwhile LLMs have kind of stumbled on the generative principle.
Though they remain stuck in the back-prop paradigm, and unable to
fully embrace the complexity.

I put myself in the context of all those threads. Though I kind of
worked back to them, starting with the language problem, and finding
the complexity as I went. As I say, language drives you to deal with
predictive symmetries. I think ALife has stalled for 30 years because
it hasn't had a central generative principle. What James might call a
"prior". Language offers a "prior" (predictive symmetries.) Combine
that with ALife complex systems, and you start to get something.

But that's to go off on my own tangent again.

Anyway, if you can be more specific, or put what you're saying in the
context of something someone else is doing, you might get more
traction.

On Thu, May 9, 2024 at 3:10 PM Quan Tesla  wrote:
>
> Rob, not butting in, but rather adding to what you said (see quotation below).
>
> The conviction across industries that hierarchy (systems robustness) persists 
> only in descending and/or ascending structures, though true, can be proven to 
> be somewhat incomplete.
>
> There's another computational way to derive systems-control hierarchy(ies) 
> from. This is the quantum-engineering way (referred to before), where 
> hierarchy lies hidden within contextual abstraction, identified via case-based 
> decision making and represented via compound functionality outcomes. 
> Hierarchy as a centre-outwards, in the sense of emergent, essential 
> characteristic of a scalable system. Not deterministically specified.
>
> In an evolutionary sense, hierarchies are N-nestable and self-discoverable. 
> With the addition of integrated vectors, knowledge graphs may also be 
> derived, instead of crafted.
>
> Here, I'm referring to 2 systems hierarchies in particular. 'A', a hierarchy 
> of criticality (aka constraints) and 'B', a hierarchy of priority (aka 
> systemic order).
>
> Over the lifecycles of a growing system, as it mutates and evolves in 
> relevance (optimal semantics), hierarchy would start resembling - without 
> compromising - NNs and LLMs.
>
> Yes, a more-holistic envelope then, a new, quantum reality, where 
> fully-recursive functionality wasn't only guaranteed, but correlation and 
> association became foundational, architectural principles.
>
> This is the future of quantum systems engineering, which I believe quantum 
> computing would eventually lead all researchers to. Frankly, without it, 
> we'll remain stuck in the quagmire of early 1990s+ functional 
> analysis-paralysis, by any name.
>
> I'll hold out hope for that one enlightened developer to make that quantum 
> leap into exponential systems computing. A sea change is needed.
>
> Inter alia, Rob Freeman said: "And it seems by chance that the idea seems 
> consistent 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread James Bowery
On Thu, May 9, 2024 at 2:15 AM Rob Freeman 
wrote:

> On Thu, May 9, 2024 at 6:15 AM James Bowery  wrote:
> ...>
> > The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge
> Language Research Unit.
>
> Interesting tip about the Cambridge Language Research Unit. Inspired
> by Wittgenstein?
>

I suspect much more by Turing's involvement with Colossus.  As I previously
mentioned.

But this history means what?

Spooks.

Let me tell you a little story:

Circa 1982, I was working on the first mass market electronic newspaper
(joint venture between Knight-Ridder and AT&T) called VIEWTRON. In
something of a departure from my formal job description as futures
architect, somehow management authority was bypassed to task me directly
with implementing a *specification* for encryption in conjunction with the
Bell Labs guys who were burning ROMs for the Western Electric NAPLPS
terminal. The spec called for key exchange relying entirely on DES.  The
guy who mysteriously interceded as my manager pro tem -- the name escapes
me at the moment -- rode me to implement the spec as stated without any
discussion -- in *direct* violation of my role as futures architect.  I
brought up the fact that key exchange should be based on public keys and
that the 56 bit DES key standard had already been shown to be breakable.
Moreover, the controversy involved a questionable relationship between the
DES standards committee, IBM and the NSA -- and that I didn't think the
*future* of VIEWTRON's nationwide rollout should lock in such a
questionable key exchange let alone 56-bit DES.

That's when my "manager" told me he was "a former NSA employee" without
further comment.

Let me tell you another little story:

The guy who invented Burroughs's zero address architecture and instituted
magnetic ink for banking routing and account numbers was a colleague of mine
who sent me the following email in response to the announcement of the
Hutter Prize


Computerdom does not have a lot of art in inference engines (making
> predictions). The most effective inference engine that I know of is the
> software done for Colossus, Turing's code breaking "computer" of WWII. The
> Brits still treat that software as classified even though the hardware has
> been declassified for years. So far as I know, nobody outside of UK knows
> the details of that software. My point here is that drawing understanding
> from natural languages is a relatively small art practiced mostly by
> cryptanalysts. And my further point is that the natural language of
> interest (be it English, Chinese, Mayan or ...) has a major influence on
> how one (person or program) goes about doing analyses and making
> inferences. From a practical perspective, the Hutter challenge would be
> much more tractable for at least me if I could do it in Chinese. My first
> PhD student was Jun Gu who is currently Chief Information Scientist for
> PRC. His thesis was on efficient compression technologies. If you wish, you
> can share these thoughts with whomever you please.


Bob Johnson Prof. Emeritus Computer Science Univ. of Utah


I met Bob as part of a startup which turned out to have strong connections
to the NSA.

The fact that Algorithmic Information is a fundamental advance over Shannon
Information with clear applications in cryptography, combined with the fact
that this has been known since the early 1960s in the open literature
without it having any significant impact on computational models in the
social sciences aka "prediction" of the consequences of various social
theories, stinks to high heaven.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M5f4ea79513dd780d7be1dafe
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread James Bowery
On Thu, May 9, 2024 at 2:15 AM Rob Freeman 
wrote:

> On Thu, May 9, 2024 at 6:15 AM James Bowery  wrote:
> ...
> Criticisms are welcome. But just saying, oh, but hey look at my idea
> instead...
>

I may have confused you by conflating two levels of abstraction -- only one
of which is "my idea" (which isn't my idea at all but merely an idea that
has been around forever without garnering the attention it deserves):

1) Abstract grammar as a prior.
2) The proper structure for incorporating priors, whatever they may be.

Forget about #1.  That was just an example -- a conjecture if you will --
that I found appealing as an under-appreciated prior, but it distracted from
the much more important point of #2, which was about priors in general.

#2 is exemplified by the link I provided to physics informed machine
learning, which
is appropriate to bring up in the context of this particular post about the
ir/relevance of physics.  The point is not "physics". Physics is merely one
knowledge domain that, because it is "hard", is useful because the
technique of incorporating its priors into machine learning is exemplary.
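
To make that concrete, here is a deliberately tiny stand-in for the
technique (a real physics-informed network would use a neural net and
automatic differentiation; the ODE du/dt = -3u and the noisy data below
are invented for illustration). The physics prior enters simply as extra
rows in a least-squares problem:

import numpy as np

# Fit u(t) ~ c0 + c1*t + c2*t^2 to noisy data while penalizing violation
# of the assumed physics du/dt = -3u.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
u_data = np.exp(-3.0 * t) + 0.05 * rng.standard_normal(t.size)

# Data rows: the polynomial basis evaluated at t.
X_data = np.stack([np.ones_like(t), t, t**2], axis=1)
# Physics rows: du/dt + 3u for each basis function:
#   d/dt(1)=0 -> 3; d/dt(t)=1 -> 1+3t; d/dt(t^2)=2t -> 2t+3t^2
X_phys = np.stack([3.0 * np.ones_like(t), 1.0 + 3.0 * t,
                   2.0 * t + 3.0 * t**2], axis=1)

lam = 1.0  # weight on the physics residual
A = np.vstack([X_data, lam * X_phys])
b = np.concatenate([u_data, np.zeros_like(t)])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fitted polynomial coefficients:", coeffs)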

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M4a92b688c0804deb6a6a12a1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Quan Tesla
Rob, not butting in, but rather adding to what you said (see quotation
below).

The conviction across industries that hierarchy (systems robustness) persists
only in descending and/or ascending structures, though true, can be proven
to be somewhat incomplete.

There's another computational way to derive systems-control hierarchy(ies)
from. This is the quantum-engineering way (referred to before), where
hierarchy lies hidden within contextual abstraction, identified via
case-based decision making and represented via compound functionality
outcomes. Hierarchy as a centre-outwards, in the sense of emergent,
essential characteristic of a scalable system. Not deterministically
specified.

In an evolutionary sense, hierarchies are N-nestable and self-discoverable.
With the addition of integrated vectors, knowledge graphs may also be
derived, instead of crafted.

Here, I'm referring to 2 systems hierarchies in particular. 'A', a
hierarchy of criticality (aka constraints) and 'B', a hierarchy of priority
(aka systemic order).

Over the lifecycles of a growing system, as it mutates and evolves in
relevance (optimal semantics), hierarchy would start resembling - without
compromising - NNs and LLMs.

Yes, a more-holistic envelope then, a new, quantum reality, where
fully-recursive functionality wasn't only guaranteed, but correlation and
association became foundational, architectural principles.

This is the future of quantum systems engineering, which I believe quantum
computing would eventually lead all researchers to. Frankly, without it,
we'll remain stuck in the quagmire of early 1990s+ functional
analysis-paralysis, by any name.

I'll hold out hope for that one enlightened developer to make that quantum
leap into exponential systems computing. A sea change is needed.

Inter alia, Rob Freeman said: "And it seems by chance that the idea seems
consistent with the emergent structure theme of this thread. With the
difference that with language, we have access to the emergent system,
bottom-up, instead of top down, the way we do with physics, maths."

On Thu, May 9, 2024, 11:15 Rob Freeman  wrote:

> On Thu, May 9, 2024 at 6:15 AM James Bowery  wrote:
> >
> > Shifting this thread to a more appropriate topic.
> >
> > -- Forwarded message -
> >>
> >> From: Rob Freeman 
> >> Date: Tue, May 7, 2024 at 8:33 PM
> >> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
> >> To: AGI 
> >
> >
> >> I'm disappointed you don't address my points James. You just double
> >> down that there needs to be some framework for learning, and that
> >> nested stacks might be one such constraint.
> > ...
> >> Well, maybe for language a) we can't find top down heuristics which
> >> work well enough and b) we don't need to, because for language a
> >> combinatorial basis is actually sitting right there for us, manifest,
> >> in (sequences of) text.
> >
> >
> > The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge
> Language Research Unit.
>
> Interesting tip about the Cambridge Language Research Unit. Inspired
> by Wittgenstein?
>
> But this history means what?
>
> > PS:  I know I've disappointed you yet again for not engaging directly
> your line of inquiry.  Just be assured that my failure to do so is not
> because I in any way discount what you are doing -- hence I'm not "doubling
> down" on some opposing line of thought -- I'm just not prepared to defend
> Granger's work as much as I am prepared to encourage you to take up your
> line of thought directly with him and his school of thought.
> 
> Well, yes.
> 
> Thanks for the link to Granger's work. It looks like he did a lot on
> brain biology, and developed a hypothesis that the biology of the
> brain split into different regions is consistent with aspects of
> language suggesting limits on nested hierarchy.
> 
> But I don't see it engages in any way with the original point I made
> (in response to Matt's synopsis of OpenCog language understanding.)
> That OpenCog language processing didn't fail because it didn't do
> language learning (or even because it didn't attempt "semantic"
> learning first.) That it was somewhat the opposite. That OpenCog
> language failed because it did attempt to find an abstract grammar.
> And LLMs succeed to the extent they do because they abandon a search
> for abstract grammar, and just focus on prediction.
> 
> That's just my take on the OpenCog (and LLM) language situation.
> People can take it or leave it.
> 
> Criticisms are welcome. But just saying, oh, but hey look at my idea
> instead... Well, it might be good for people who are really puzzled
> and looking for new ideas.
> 
> I guess it's a problem for AI research in general that people rarely
> attempt to engage with other people's ideas. They all just assert
> their own ideas. Like Matt's reply to the above... "Oh no, the real
> problem was they didn't try to learn semantics..."
> 
> If you think OpenCog language failed instead because it didn't attempt
> to 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Rob Freeman
On Thu, May 9, 2024 at 6:15 AM James Bowery  wrote:
>
> Shifting this thread to a more appropriate topic.
>
> -- Forwarded message -
>>
>> From: Rob Freeman 
>> Date: Tue, May 7, 2024 at 8:33 PM
>> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
>> To: AGI 
>
>
>> I'm disappointed you don't address my points James. You just double
>> down that there needs to be some framework for learning, and that
>> nested stacks might be one such constraint.
> ...
>> Well, maybe for language a) we can't find top down heuristics which
>> work well enough and b) we don't need to, because for language a
>> combinatorial basis is actually sitting right there for us, manifest,
>> in (sequences of) text.
>
>
> The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge 
> Language Research Unit.

Interesting tip about the Cambridge Language Research Unit. Inspired
by Wittgenstein?

But this history means what?

> PS:  I know I've disappointed you yet again for not engaging directly your 
> line of inquiry.  Just be assured that my failure to do so is not because I 
> in any way discount what you are doing -- hence I'm not "doubling down" on 
> some opposing line of thought -- I'm just not prepared to defend Granger's 
> work as much as I am prepared to encourage you to take up your line of 
> thought directly with him and his school of thought.

Well, yes.

Thanks for the link to Granger's work. It looks like he did a lot on
brain biology, and developed a hypothesis that the biology of the
brain split into different regions is consistent with aspects of
language suggesting limits on nested hierarchy.

But I don't see it engages in any way with the original point I made
(in response to Matt's synopsis of OpenCog language understanding.)
That OpenCog language processing didn't fail because it didn't do
language learning (or even because it didn't attempt "semantic"
learning first.) That it was somewhat the opposite. That OpenCog
language failed because it did attempt to find an abstract grammar.
And LLMs succeed to the extent they do because they abandon a search
for abstract grammar, and just focus on prediction.

That's just my take on the OpenCog (and LLM) language situation.
People can take it or leave it.

Criticisms are welcome. But just saying, oh, but hey look at my idea
instead... Well, it might be good for people who are really puzzled
and looking for new ideas.

I guess it's a problem for AI research in general that people rarely
attempt to engage with other people's ideas. They all just assert
their own ideas. Like Matt's reply to the above... "Oh no, the real
problem was they didn't try to learn semantics..."

If you think OpenCog language failed instead because it didn't attempt
to learn grammar as nested stacks, OK, that's your idea. Good luck
trying to learn abstract grammar as nested stacks.

Actual progress in the field stumbles along by fits and starts. What's
happened in 30 years? Nothing much. A retreat to statistical
uncertainty about grammar in the '90s with HMMs? A first retreat to
indeterminacy. Then, what, 8 years ago the surprise success of
transformers, a cross-product of embedding vectors which ignores
structure and focuses on prediction. Why did it succeed? You, because
transformers somehow advance the nested stack idea? Matt, because
transformers somehow advance the semantics first idea?

My idea is that they advance the idea that a search for an abstract
grammar is flawed (in practice if not in theory.)

My idea is consistent with the ongoing success of LLMs. Which get
bigger and bigger, and don't appear to have any consistent structure.
But also their failures. That they still try to learn that structure
as a fixed artifact.

Actually, as far as I know, the first model in the LLM style of
indeterminate grammar as a cross-product of embedding vectors was
mine.
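
A minimal sketch of the idea with toy numbers (a gloss, not the original
model): score each candidate next word by the dot product of its embedding
with a context vector mixed from the preceding words, then normalize.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
E = rng.standard_normal((len(vocab), 8))  # one 8-d embedding per word

# Context as a crude mix of the preceding word embeddings.
context = E[vocab.index("the")] + E[vocab.index("cat")]

scores = E @ context  # dot product of every embedding with the context
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over next words
print(dict(zip(vocab, np.round(probs, 3))))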

***If anyone can point to an earlier precedent I'd love to see it.***

So LLMs feel like a nice vindication of those early ideas to me.
Without embracing the full extent of them. They still don't grasp the
full point. I don't see reason to be discouraged in it.

And it seems by chance that the idea seems consistent with the
emergent structure theme of this thread. With the difference that with
language, we have access to the emergent system, bottom-up, instead of
top down, the way we do with physics, maths.

But everyone is working on their own thing. I just got drawn in by
Matt's comment that OpenCog didn't do language learning.

-Rob

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mc80863f9a44a6d34f3ba12a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-08 Thread James Bowery
Shifting this thread to a more appropriate topic.

-- Forwarded message -

> From: Rob Freeman 
> Date: Tue, May 7, 2024 at 8:33 PM
> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
> To: AGI 
>

I'm disappointed you don't address my points James. You just double
> down that there needs to be some framework for learning, and that
> nested stacks might be one such constraint.


If I "double down" on 2+2=4, please understand that it is because I like a
sure bet.  Did you perhaps instead mean that I *re-asserted an obvious
point* which disappointed you because:

A) I would insult your intelligence rather than seeing that what you were
saying was not in conflict with the obvious and
B) failed to pick up on the nuanced point you were making that was not so
obvious
?

...

BTW just noticed your "Combinatorial Hierarchy, Computational
> Irreducibility and other things that just don't matter..." thread.
> Perhaps that thread is a better location to discuss this. Were you
> positing in that thread that all of maths and physics might be
> emergent on combinatorial hierarchies? Were you saying yes, but it
> doesn't matter to the practice of AGI, because for physics we can't
> find the combinatorial basis, and in practice we can find top down
> heuristics which work well enough?


Almost but not quite.  My point is that even if we can find the ultimate
deterministic algorithm for the universe (ie: its "combinatorial basis"),
it's virtually certain we can't execute that deterministic algorithm to
predict things in a deterministic manner.  We're almost without exception
resorting to statistical dynamics to predict things.  People who bring
"computational complexity" into this are stating the obvious, again, but in
such a manner as to confuse the reality of the natural sciences which is
that we somehow manage to muddle through despite the fact that one level's
intractable computational complexity is another level's tractable
computational complexity because we learn how to abstract and live with the
resulting inaccuracies.

Well, maybe for language a) we can't find top down heuristics which
> work well enough and b) we don't need to, because for language a
> combinatorial basis is actually sitting right there for us, manifest,
> in (sequences of) text.


The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge
Language Research Unit .

I suspect this was one of many offshoots of the Colossus project's
cryptographic research.

This, by the way, is one reason I suspect that there has been so much
resistance to Algorithmic Information as causal model selection.
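
In its crudest computable approximation, that is two-part MDL: pick the
model minimizing bits(model) + bits(data | model). A toy sketch (the
bit-accounting conventions here are illustrative, not canonical):

import math

data = "0001000100000100" * 4  # an invented binary sequence
n = len(data)
p = data.count("1") / n

def H(q):
    # Binary entropy in bits per symbol.
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

# Model A: fair coin. No parameters to send; data costs 1 bit per symbol.
cost_fair = n * 1.0

# Model B: Bernoulli(p), p estimated from the data; charge ~log2(n) bits
# to transmit the parameter, then n*H(p) bits for the data.
cost_bern = math.log2(n) + n * H(p)

print("fair coin :", cost_fair, "bits")
print("bernoulli :", round(cost_bern, 1), "bits")
print("selected  :", "bernoulli" if cost_bern < cost_fair else "fair")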

Imagine if the Catholic Church had been able to suppress the ideas of the
scientific method while keeping them alive in house.

PS:  I know I've disappointed you yet again for not engaging directly your
line of inquiry.  Just be assured that my failure to do so is not because I
in any way discount what you are doing -- hence I'm not "doubling down" on
some *opposing* line of thought -- I'm just not prepared to defend
Granger's work as much as I am prepared to encourage you to take up your
line of thought directly with him and his school of thought.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M35e33add840c38e4404c1040
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 6:53 PM, Matt Mahoney wrote:
> Kolmogorov proved there is no such thing as an infinitely powerful
compressor. Not even if you have infinite computing power.

Compressing the universe is a unique case, especially being supplied with
infinite computing power. Would the compressor ever stop? And would we be
compressing a copy of the universe, or actually compressing the full
universe as data, including the compressor itself? Would the compressor
only run once, since the whole universe would potentially go with it,
prohibiting another compression comparison or a decompression?

Assuming we are actually compressing the universe and not a copy, and there
is no infinitely powerful compressor according to Kolmogorov, it seems that
the universe might still expand against the finite compressor that is being
supplied with infinite power.

But then does the infinite power come from within the U or from outside
somehow... hmm…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M1f1c33b606b4df64d1bdc119
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
Kolmogorov proved there is no such thing as an infinitely powerful
compressor. Not even if you have infinite computing power.

A compressor is a program that inputs a string and outputs a short
description of it, like another string encoding a program in some
language that outputs the original string. A string is a finite length
sequence of 0 or more characters from a finite alphabet such as binary
or ASCII. Strings can be ordered like numbers, by increasing length
and lexicographically for strings of the same length.

Suppose you had an infinitely powerful compressor, one that inputs a
string and outputs the shortest possible description of it. You could
use your program to test whether another compressor found the best
possible compression by decompressing it and compressing again with
your compressor to see if it got any smaller.

The proof goes like this. How does your test program answer "the first
string that cannot be described in less than 1,000,000 characters"?
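
Here is the trap in code, with zlib standing in as the compressor (true
Kolmogorov complexity is uncomputable, so this is only an illustration):

import zlib
from itertools import product

def description_length(s: bytes) -> int:
    # Stand-in for the impossible "shortest possible description" oracle.
    return len(zlib.compress(s))

def first_incompressible(limit: int) -> bytes:
    # Enumerate strings as above: by increasing length, then
    # lexicographically within each length (binary alphabet here).
    length = 1
    while True:
        for tup in product(b"01", repeat=length):
            s = bytes(tup)
            if description_length(s) >= limit:
                return s
        length += 1

# With a truly optimal compressor, this short program would itself be a
# short description of the string it returns: the contradiction.
print(first_incompressible(14))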

On Tue, May 7, 2024 at 5:50 PM John Rose  wrote:
>
> On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
>
> We don't know the program that computes the universe because it would require 
> the entire computing power of the universe to test the program by running it, 
> about 10^120 or 2^400 steps. But we do have two useful approximations. If we 
> set the gravitational constant G = 0, then we have quantum mechanics, a 
> complex differential wave equation whose solution is observers that see 
> particles. Or if we set Planck's constant h = 0, then we have general 
> relativity, a tensor field equation whose solution is observers that see 
> space and time. Wolfram and Yudkowsky both estimate this unknown program is 
> only a few hundred bits long, and I agree. It is roughly the complexity of 
> quantum mechanics and relativity taken together, and roughly the minimum size 
> by Occam's Razor of a multiverse where the n'th universe is run for n steps 
> until we observe one that necessarily contains intelligent life.
>
>
> Sounds like the KC of U, the maximum lossless compression of the universe 
> assuming infinite resources for perfect prediction. But there is a lot of 
> lossy losslessness out there for imperfect prediction or locally perfect 
> lossless, near lossless, etc. That intelligence has a physical computational 
> topology across spacetime where much is redundant though estimable… and 
> temporally changing. I don’t rule out though no matter how improbable that 
> there could be an infinitely powerful compressor within this universe, an 
> InfiniComp. Weird stuff has been shown to be possible. We can conceive of it 
> but there may be issues with our conception since even that is bound by 
> limits.
>



-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mdbff080b9764f7c48d917538
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
> We don't
know the program that computes the universe because it would require
the entire computing power of the universe to test the program by
running it, about 10^120 or 2^400 steps. But we do have two useful
approximations. If we set the gravitational constant G = 0, then we
have quantum mechanics, a complex differential wave equation whose
solution is observers that see particles. Or if we set Planck's
constant h = 0, then we have general relativity, a tensor field
equation whose solution is observers that see space and time. Wolfram
and Yudkowsky both estimate this unknown program is only a few hundred
bits long, and I agree. It is roughly the complexity of quantum
mechanics and relativity taken together, and roughly the minimum size
by Occam's Razor of a multiverse where the n'th universe is run for n
steps until we observe one that necessarily contains intelligent life.

Sounds like the KC of U, the maximum lossless compression of the universe 
assuming infinite resources for perfect prediction. But there is a lot of 
lossy losslessness out there for imperfect prediction or locally perfect 
lossless, near lossless, etc. That intelligence has a physical computational 
topology across spacetime where much is redundant though estimable… and 
temporally changing. I don’t rule out though no matter how improbable that 
there could be an infinitely powerful compressor within this universe, an 
InfiniComp. Weird stuff has been shown to be possible. We can conceive of it 
but there may be issues with our conception since even that is bound by limits.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8f6799ef3b2e99f86336b4cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Quan Tesla
I'm thinking more on probability paths for all possible particle-path
outcomes of particlewaves and Heisenberg. This is a pre-entanglement state.

Perhaps this refers to Ben's "chaos", whereas photons may represent
"order".

Trying to be pragmatic in my thinking, for AGI at least the functionality
of the photoelectric effect within a controlled quantum electrodynamical
environment has to be constructed.

I think that might be the foundational lab required for entangling quantum
information. Once entangled particles could be identified from such
"chaos", a discrete wave function could be set up to act as carrier channel
for ubiquitous quantum communication. However, messaging is a different
matter.



On Tue, May 7, 2024, 19:54 Matt Mahoney  wrote:

> On Tue, May 7, 2024 at 11:14 AM Quan Tesla  wrote:
> >
> > Don't you believe that true randomness persists in asymmetry, or even
> that randomness would be found in supersymmetry? I'm referring here to the
> uncertainty principle.
> >
> > Is your view that the universe is always certain about the position and
> momentum of every-single particle in all possible worlds?
> 
> If I flip a coin and peek at the result, then your probability of
> heads is different than my probability of heads.
> 
> Likewise, in quantum mechanics, a system observing a particle is
> described by Schrodinger's wave equation just like any other system.
> The solution to the equation is the observer sees a particle in some
> state that is unknown in advance to the observer but predictable to
> someone who knows the quantum state of the system and has sufficient
> computing power to solve it, neither of which is available to the
> observer.
> 
> We know this because of Schrodinger's cat. The square of the wave
> function gives you the probability of observing a particle in the
> absence of more information, such as entanglement with another
> particle that you already observed. It is the same thing as peeking at
> my flipped coin, except that the computation is intractable without a
> quantum computer as large as the system it is modeling, which we don't
> have.
> 
> Or maybe you mean algorithmic randomness, which is independent of an
> observer. But again you have the same problem. An iterated
> cryptographic hash function with a 1000 bit key is random because you
> lack the computing power to guess the seed. Likewise, if you knew the
> exact quantum state of an observer, the computation required to solve
> it grows exponentially with its size. That's why we can't compute the
> freezing point of water by modeling atoms.
> 
> A theory of everything is probably a few hundred bits. But knowing
> what it is would be useless because it would make no predictions
> without the computing power of the whole universe. That is the major
> criticism of string theory.
> 
> --
> -- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mc635b984d4b6577aa8c38a54
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
On Tue, May 7, 2024 at 11:14 AM Quan Tesla  wrote:
>
> Don't you believe that true randomness persists in asymmetry, or even that 
> randomness would be found in supersymmetry? I'm referring here to the 
> uncertainty principle.
>
> Is your view that the universe is always certain about the position and 
> momentum of every-single particle in all possible worlds?

If I flip a coin and peek at the result, then your probability of
heads is different than my probability of heads.

Likewise, in quantum mechanics, a system observing a particle is
described by Schrodinger's wave equation just like any other system.
The solution to the equation is the observer sees a particle in some
state that is unknown in advance to the observer but predictable to
someone who knows the quantum state of the system and has sufficient
computing power to solve it, neither of which is available to the
observer.

We know this because of Schrodinger's cat. The square of the wave
function gives you the probability of observing a particle in the
absence of more information, such as entanglement with another
particle that you already observed. It is the same thing as peeking at
my flipped coin, except that the computation is intractable without a
quantum computer as large as the system it is modeling, which we don't
have.
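
For reference, that squared-amplitude rule is the Born rule: for a
normalized wave function psi, the probability of observing outcome x is

  P(x) = |psi(x)|^2, with sum over x of |psi(x)|^2 = 1

(an integral rather than a sum for continuous x).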

Or maybe you mean algorithmic randomness, which is independent of an
observer. But again you have the same problem. An iterated
cryptographic hash function with a 1000 bit key is random because you
lack the computing power to guess the seed. Likewise, if you knew the
exact quantum state of an observer, the computation required to solve
it grows exponentially with its size. That's why we can't compute the
freezing point of water by modeling atoms.
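
As a toy sketch of the iterated-hash point above (125 random bytes standing
in for the ~1000 bit key):

import hashlib
import secrets

seed = secrets.token_bytes(125)  # ~1000 bits of secret key material

# Iterate the hash from the seed. Without the seed, the output stream is
# computationally indistinguishable from random.
state = seed
for _ in range(3):
    state = hashlib.sha256(state).digest()
    print(state.hex())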

A theory of everything is probably a few hundred bits. But knowing
what it is would be useless because it would make no predictions
without the computing power of the whole universe. That is the major
criticism of string theory.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M348cbbd93444a977d8ad5885
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Quan Tesla
Don't you believe that true randomness persists in asymmetry, or even that
randomness would be found in supersymmetry? I'm referring here to the
uncertainty principle.

Is your view that the universe is always certain about the position and
momentum of every-single particle in all possible worlds?

On Tue, May 7, 2024, 18:03 Matt Mahoney  wrote:

> Let me explain what I mean by the intelligence or predictive power of
> the universe. I mean that the universe computes everything in it, the
> position of every atom over time. If I knew that, I could tell you
> everything that will ever happen, like tomorrow's winning lottery
> numbers or the exact time of death of every person who has ever lived
> or ever will. I could tell you if there was life on other planets, and
> if so, what it looks like and where to find it.
> 
> Of course that is impossible by Wolpert's theorem. The universe can't
> know everything about itself and neither can anything in it. We don't
> know the program that computes the universe because it would require
> the entire computing power of the universe to test the program by
> running it, about 10^120 or 2^400 steps. But we do have two useful
> approximations. If we set the gravitational constant G = 0, then we
> have quantum mechanics, a complex differential wave equation whose
> solution is observers that see particles. Or if we set Planck's
> constant h = 0, then we have general relativity, a tensor field
> equation whose solution is observers that see space and time. Wolfram
> and Yudkowsky both estimate this unknown program is only a few hundred
> bits long, and I agree. It is roughly the complexity of quantum
> mechanics and relativity taken together, and roughly the minimum size
> by Occam's Razor of a multiverse where the n'th universe is run for n
> steps until we observe one that necessarily contains intelligent life.
> 
> --
> -- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8e0a12e8d40cd447a165
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Matt Mahoney
Let me explain what I mean by the intelligence or predictive power of
the universe. I mean that the universe computes everything in it, the
position of every atom over time. If I knew that, I could tell you
everything that will ever happen, like tomorrow's winning lottery
numbers or the exact time of death of every person who has ever lived
or ever will. I could tell you if there was life on other planets, and
if so, what it looks like and where to find it.

Of course that is impossible by Wolpert's theorem. The universe can't
know everything about itself and neither can anything in it. We don't
know the program that computes the universe because it would require
the entire computing power of the universe to test the program by
running it, about 10^120 or 2^400 steps. But we do have two useful
approximations. If we set the gravitational constant G = 0, then we
have quantum mechanics, a complex differential wave equation whose
solution is observers that see particles. Or if we set Planck's
constant h = 0, then we have general relativity, a tensor field
equation whose solution is observers that see space and time. Wolfram
and Yudkowsky both estimate this unknown program is only a few hundred
bits long, and I agree. It is roughly the complexity of quantum
mechanics and relativity taken together, and roughly the minimum size
by Occam's Razor of a multiverse where the n'th universe is run for n
steps until we observe one that necessarily contains intelligent life.
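
One standard way to schedule that enumeration is classic dovetailing,
sketched below. Only the schedule is shown; the universal machine that
would actually execute each program's steps is assumed, not implemented.

def dovetail_schedule(rounds: int):
    # In round r, programs 1..r each receive one more step, so every
    # program eventually accumulates any finite number of steps.
    for r in range(1, rounds + 1):
        for n in range(1, r + 1):
            yield n, r

for program, r in dovetail_schedule(4):
    print(f"round {r}: advance program {program} one step")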

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8bedda3b66ddcfb10805ff85
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote:
> To suggest that every hypothetical universe has its own alpha, makes no 
> sense, as alpha is all encompassing as it is.

You are exactly correct. There is another special case besides expressing the 
intelligence of the universe, and that is expressing the intelligence of a 
hypothetical universe at zero communication complexity... unless there is some 
unknown Gödel channel.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me43083c2dce972b7746d22ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread Quan Tesla
Alpha is dimensionless and unitless. To suggest that every hypothetical
universe has its own alpha makes no sense, as alpha is all-encompassing as
it is.

However, if you were to offer up a suggestion that every universe may have
its own version of a triple-alpha process, then you'll have my fullest
attention.

On Thu, Apr 11, 2024 at 6:48 PM John Rose  wrote:

> On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
>
> What assumption is that?
>
>
> The assumption that alpha is unitless. Yes they cancel out but the simple
> process of cancelling units seems incomplete.
>
> Many of these constants though are re-representations of each other. How
> many constants does everything boil down to I wonder...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M92bb3e56194310c4a0e69941
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote:
> So when we talk about the intelligence of the universe, we can only really 
> measure its computing power, which we generally correlate with prediction 
> power as a measure of intelligence.

The universe's overall prediction power should increase, for example with the 
rise of intelligent civilizations among galaxies, even though physical entropy 
is increasingly generated in the universe's environment. All these prediction 
powers would increase unevenly, though they would become increasingly networked 
via interstellar communication. A prediction-power apex would be different from 
a sum: the apex emerges from biological negentropy and then from synthetic AGI, 
but physical prediction "power" across the universe implies a sum versus an 
apex… say, if one civilization's AGI has more prediction capacity or potential 
than the rest.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M00d6486e8f5ef51067361ff8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread Matt Mahoney
We don't have any way of measuring IQs much over 150 because of the problem
of the tested knowing more than the tester. So when we talk about the
intelligence of the universe, we can only really measure its computing
power, which we generally correlate with prediction power as a measure of
intelligence.

Seth Lloyd estimated that the universe has enough mass (10^53 kg), which if
converted to energy (10^70 J) would support 10^120 qubit flips over the 13.8
billion years since the big bang. Additionally, he estimated that encoding
bits in the positions and velocities of the universe's 10^80 particles,
within the limits of the Heisenberg uncertainty principle, gives about
10^90 bits of storage.

I independently derived similar numbers. The Bekenstein bound of the Hubble
radius limits the entropy of the observable universe to 2.95 x 10^122 bits.
But most of that is unusable heat. The Landauer limit of the universe at
the CMB temperature of 3 K allows about 10^92 bits to be written before the
heat death of the universe.
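
Both limits are easy to reproduce. A minimal Python sketch (mine, with rough
constants; it assumes the standard forms S = A/(4 ln 2) in Planck areas for
the Bekenstein bound and E/(kT ln 2) for the Landauer limit):

import math

c, hbar, G, kB = 2.998e8, 1.054e-34, 6.674e-11, 1.381e-23
T_univ = 4.35e17   # age of the universe, s
E      = 1e70      # total mass-energy, J (Lloyd's estimate)
T_cmb  = 2.73      # CMB temperature, K

lP   = math.sqrt(hbar * G / c**3)       # Planck length
R    = c * T_univ                       # Hubble radius, ~1.3e26 m
A    = 4 * math.pi * R**2 / lP**2       # horizon area in Planck areas
bek  = A / (4 * math.log(2))            # Bekenstein bound -> ~2.95e122 bits
land = E / (kB * T_cmb * math.log(2))   # Landauer limit   -> ~10^92 bits

print(f"Bekenstein: {bek:.2e} bits, Landauer: {land:.1e} bits")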

On Fri, May 3, 2024, 2:56 PM John Rose  wrote:

> Expressing the intelligence of the universe is a unique case, versus say
> expressing the intelligence of an agent like a human mind. A human mind is
> very lossy versus the universe, where there is theoretically no loss. If
> lossy and lossless were a duality then the universe would be a singularity
> of lossylosslessness.
>
> There is a strange reflective duality though in that when one attempts to
> mathematically/algorithmically express the intelligence of the universe the
> universe at that moment is expressing the intelligence of the agent since
> the agent's conceptual expression is contained and created by the universe.
>
> Whatever happened to Wissner-Gross's Causal Entropic Force I haven't heard
> of that in a while...
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Ma2b92ffe1a4a3e4a0cc538bf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread John Rose
Expressing the intelligence of the universe is a unique case, versus say 
expressing the intelligence of an agent like a human mind. A human mind is very 
lossy versus the universe, where there is theoretically no loss. If lossy and 
lossless were a duality then the universe would be a singularity of 
lossylosslessness.

There is a strange reflective duality though in that when one attempts to 
mathematically/algorithmically express the intelligence of the universe, the 
universe at that moment is expressing the intelligence of the agent, since the 
agent's conceptual expression is contained and created by the universe.

Whatever happened to Wissner-Gross's Causal Entropic Force I haven't heard of 
that in a while...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me821389c43b756e156ceef66
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-21 Thread John Rose
If the fine structure constant were tunable across different hypothetical 
universes, how would that affect the overall intelligence of each universe? 
Dive into that rabbit hole: express and/or algorithmicize the intelligence of 
a universe. There are several potential ways to do that, some of which offer 
rather curious implications.

Apparently though alpha may vary significantly within our own universe... 
according to some unsubstantiated articles I've read.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M292d0a064091603346d3095e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-18 Thread John Rose
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote:
> Matt's use of Planck units in his example does seem to support your 
> suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach 
> to the proton/electron mass ratio (based on just the first 3 of the 4 levels 
> of the CH) does treat those pure/dimensionless numbers as possessing a 
> physical dimension -- mass IIRC.

Different alphas across different hypothetical universes might affect the 
overall intelligence of each universe, perhaps affecting the rate at which 
intelligence increases. I don't buy what some say, though, that if alpha 
weren't perfectly tuned to what it is now then intelligent life wouldn't 
exist. It might exist, but in a different form. Unless there is some 
particularly strong physical coupling.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M9f71087c9ae68ae4aae0896e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread James Bowery
On Thu, Apr 11, 2024 at 9:48 AM John Rose  wrote:

> On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
>
> What assumption is that?
>
>
> The assumption that alpha is unitless. Yes they cancel out but the simple
> process of cancelling units seems incomplete.
>
> Many of these constants though are re-representations of each other. How
> many constants does everything boil down to I wonder...
>

Matt's use of Planck units in his example does seem to support your
suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach
to the proton/electron mass ratio (based on just the first 3 of the 4
levels of the CH) does treat those pure/dimensionless numbers as possessing
a physical dimension -- mass IIRC.

BTW, Dave has refuted Cantor as part of his discrete *and finite *approach
to the foundation of physics:

https://www.academia.edu/93528167/Interval_Arguments_Two_Refutations_of_Cantors_1874_and_1878_1_Arguments

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mc871de4f250d7974630c8d81
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
> What assumption is that?

The assumption that alpha is unitless. Yes they cancel out but the simple 
process of cancelling units seems incomplete.

Many of these constants though are re-representations of each other. How many 
constants does everything boil down to I wonder...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M1b3cc8ce2f8e3f5ba2c77697
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread James Bowery
On Thu, Apr 11, 2024 at 6:59 AM John Rose  wrote:

> ...
> I also question though the unitless assumption.
>

What assumption is that?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M0e5739d577580f79b29e32a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
> "Abstract Fundamental physical constants need not be constant, neither 
> spatially nor temporally."

Suppose we could somehow remote-view across multiple multiverse instances 
simultaneously, in various non-deterministic states, and perceive how universe 
structure varies across different alphas. Do the different universe alphas 
coalesce to a similar value temporally? I think they may get stuck at 
different stabilization states and show non-continuous variation across 
universes. But if they trended to the same value, that would tell you 
something about a core inception algorithm.

I have to read up on contemporary cosmology… I have assumed a sort of 
injection model. But the injection might really be a generative perception, as 
if each universe is generatively perceived from a consciously creative 
rendition. The different alpha structures may then give insight into any 
injector cognition model…. Kind of speculative though.

I also question though the unitless assumption.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Ma10187a154c485f1f53d8506
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-10 Thread James Bowery
https://arxiv.org/pdf/2309.12083.pdf
"Varying fundamental constants meet Hubble"

Abstract: Fundamental physical constants need not be constant, neither
spatially nor temporally. – This seemingly simple statement has profound
implications for a wide range of physical processes and interactions, and can
be probed through a number of observations. In this chapter, we highlight how
CMB measurements can constrain variations of the fine-structure constant and
the electron rest mass during the cosmological recombination era. The
sensitivity of the CMB anisotropies to these constants arises because they
directly affect the cosmic ionization history and Thomson scattering rate,
with a number of subtle atomic physics effects coming together. *Recent
studies have revealed that variations of the electron rest mass can indeed
alleviate the Hubble tension, as we explain here*. Future opportunities
through measurements of the cosmological recombination radiation are briefly
mentioned, highlighting how these could provide an exciting avenue towards
uncovering the physical origin of the Hubble tension experimentally.

On Sun, Apr 7, 2024 at 7:53 PM James Bowery  wrote:

> Erratum:
> replace CH₄ ≈(ε=0.5±0.002%) PlanckMass/ProtonMass = αGproton
> with CH₄ ≈(ε=0.5±0.002%) PlanckMass^2/ProtonMass^2 = αGproton
>
> The square term arises due to the fact that gravitation arises in the
> multiplicative interaction between two masses.
>
> On Sun, Apr 7, 2024 at 7:51 PM James Bowery  wrote:
>
>>
>>
>> On Sat, Apr 6, 2024 at 2:29 PM Matt Mahoney 
>> wrote:
>>
>>> One problem with estimating the size of a proton from the size of the
>>> universe is that it implies that the proton or one of the constants it is
>>> derived from isn't constant.
>>>
>>
>> And this same problem applies to 2ƛₑCH₄ ≈(ε=0.81±0.15%)  H₀⁻¹c
>> CH₄ = 2^(2^(2^(2^2-1)-1)-1)-1 (+3+7+127)
>> CH₄ ≈ 2^(2^(2^(2^2-1)-1)-1)-1
>> (not methane of course)
>>
>> But not to:
>> CH₄ ≈(ε=0.5±0.002%) PlanckMass/ProtonMass = αGproton
>>
>> ƛₑ² = "quantum metric" = Compton Area of the electron (see below abstract)
>>
>> Interestingly, the Planck Area is increasingly viewed as more fundamental
>> than the Planck Length -- in large part due to its relationship to
>> information theoretic concerns such as you point out in the problematic
>> relationship to the "Age of the Universe".
>>
>>
>>> Universal semiclassical equations based on the quantum metric for a
>>> two-band system
>>> C.
>>> Leblanc, G. Malpuech, and D. D. Solnyshkov
>>> Phys. Rev. B 104, 134312 – Published 26 October 2021
>>> ABSTRACT
>>> We derive semiclassical equations of motion for an accelerated wave
>>> packet in a two-band system. We show that these equations can be formulated
>>> in terms of the static band geometry described by the quantum metric. We
>>> consider the specific cases of the Rashba Hamiltonian with and without a
>>> Zeeman term. The semiclassical trajectories are in full agreement with the
>>> ones found by solving the Schrödinger equation. This formalism successfully
>>> describes the adiabatic limit and the anomalous Hall effect traditionally
>>> attributed to Berry curvature. It also describes the opposite limit of
>>> coherent band superposition, giving rise to a spatially oscillating
>>> *Zitterbewegung* motion, and all intermediate cases. At k=0, such a
>>> wave packet exhibits a circular trajectory in real space, with its radius
>>> given by the *square root of the quantum metric*. This quantity appears
>>> as a *universal length scale*, providing a geometrical origin of the
>>> Compton wavelength. The quantum metric semiclassical approach could be
>>> extended to an arbitrary number of bands.
>>
>>
>>
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M12e2ecff3b6449c73574d2c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-07 Thread James Bowery
Erratum:
replace CH₄ ≈(ε=0.5±0.002%) PlanckMass/ProtonMass = αGproton
with CH₄ ≈(ε=0.5±0.002%) PlanckMass^2/ProtonMass^2 = αGproton

The square term arises due to the fact that gravitation arises in the
multiplicative interaction between two masses.

On Sun, Apr 7, 2024 at 7:51 PM James Bowery  wrote:

>
>
> On Sat, Apr 6, 2024 at 2:29 PM Matt Mahoney 
> wrote:
>
>> One problem with estimating the size of a proton from the size of the
>> universe is that it implies that the proton or one of the constants it is
>> derived from isn't constant.
>>
>
> And this same problem applies to 2ƛₑCH₄ ≈(ε=0.81±0.15%)  H₀⁻¹c
> CH₄ = 2^(2^(2^(2^2-1)-1)-1)-1 (+3+7+127)
> CH₄ ≈ 2^(2^(2^(2^2-1)-1)-1)-1
> (not methane of course)
>
> But not to:
> CH₄ ≈(ε=0.5±0.002%) PlanckMass/ProtonMass = αGproton
>
> ƛₑ² = "quantum metric" = Compton Area of the electron (see below abstract)
>
> Interestingly, the Planck Area is increasingly viewed as more fundamental
> than the Planck Length -- in large part due to its relationship to
> information theoretic concerns such as you point out in the problematic
> relationship to the "Age of the Universe".
>
>
>> Universal semiclassical equations based on the quantum metric for a
>> two-band system
>> C.
>> Leblanc, G. Malpuech, and D. D. Solnyshkov
>> Phys. Rev. B 104, 134312 – Published 26 October 2021
>> ABSTRACT
>> We derive semiclassical equations of motion for an accelerated wave
>> packet in a two-band system. We show that these equations can be formulated
>> in terms of the static band geometry described by the quantum metric. We
>> consider the specific cases of the Rashba Hamiltonian with and without a
>> Zeeman term. The semiclassical trajectories are in full agreement with the
>> ones found by solving the Schrödinger equation. This formalism successfully
>> describes the adiabatic limit and the anomalous Hall effect traditionally
>> attributed to Berry curvature. It also describes the opposite limit of
>> coherent band superposition, giving rise to a spatially oscillating
>> *Zitterbewegung* motion, and all intermediate cases. At k=0, such a wave
>> packet exhibits a circular trajectory in real space, with its radius given
>> by the *square root of the quantum metric*. This quantity appears as a 
>> *universal
>> length scale*, providing a geometrical origin of the Compton wavelength.
>> The quantum metric semiclassical approach could be extended to an arbitrary
>> number of bands.
>
>
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mf8e004f6c5d4582f8664a337
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-07 Thread James Bowery
On Sat, Apr 6, 2024 at 2:29 PM Matt Mahoney  wrote:

> One problem with estimating the size of a proton from the size of the
> universe is that it implies that the proton or one of the constants it is
> derived from isn't constant.
>

And this same problem applies to 2ƛₑCH₄ ≈(ε=0.81±0.15%)  H₀⁻¹c
CH₄ = 2^(2^(2^(2^2-1)-1)-1)-1 (+3+7+127)
CH₄ ≈ 2^(2^(2^(2^2-1)-1)-1)-1
(not methane of course)

But not to:
CH₄ ≈(ε=0.5±0.002%) PlanckMass/ProtonMass = αGproton

ƛₑ² = "quantum metric" = Compton Area of the electron (see below abstract)

Interestingly, the Planck Area is increasingly viewed as more fundamental
than the Planck Length -- in large part due to its relationship to
information theoretic concerns such as you point out in the problematic
relationship to the "Age of the Universe".


> Universal semiclassical equations based on the quantum metric for a
> two-band system
> C.
> Leblanc, G. Malpuech, and D. D. Solnyshkov
> Phys. Rev. B 104, 134312 – Published 26 October 2021
> ABSTRACT
> We derive semiclassical equations of motion for an accelerated wave packet
> in a two-band system. We show that these equations can be formulated in
> terms of the static band geometry described by the quantum metric. We
> consider the specific cases of the Rashba Hamiltonian with and without a
> Zeeman term. The semiclassical trajectories are in full agreement with the
> ones found by solving the Schrödinger equation. This formalism successfully
> describes the adiabatic limit and the anomalous Hall effect traditionally
> attributed to Berry curvature. It also describes the opposite limit of
> coherent band superposition, giving rise to a spatially oscillating
> *Zitterbewegung* motion, and all intermediate cases. At k=0, such a wave
> packet exhibits a circular trajectory in real space, with its radius given
> by the *square root of the quantum metric*. This quantity appears as a 
> *universal
> length scale*, providing a geometrical origin of the Compton wavelength.
> The quantum metric semiclassical approach could be extended to an arbitrary
> number of bands.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M09a9c81983c6f9a7c0515d3b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-06 Thread Matt Mahoney
One problem with estimating the size of a proton from the size of the
universe is that it implies that the proton or one of the constants it is
derived from isn't constant.

Using a rough order of magnitude calculation, the volume of the universe
divided by its entropy (given by its boundary surface area in Planck units)
gives the volume of a proton. The volume of the universe is on the order of
(Tc)^3 where T is the age of the universe (13.8 billion years = 4.35E17 s)
and c is the speed of light (2.998E8 m/s). A Planck length is sqrt(hbar
G/c^3) where hbar is the reduced Planck constant h/2pi = 1.054E-34 kg m^2/s
and G is the gravitational constant 6.6743E-11 m^3/kg s^2.

Thus, the area of the universe in Planck units is of the order A = T^2
c^5/(hbar G), and the volume of a proton is of the order (Tc)^3/A = hbar G
T/c^2. The proton radius is the cube root of that. Since T is increasing, it means
either that protons are getting smaller, or c is increasing, or G or h is
decreasing. But there is no evidence for any of that. The only constant
that we can't measure precisely enough to detect any changes in a few years
is G. But if G were changing we should be able to see the effects in
distant galaxies.

An exact value using the Bekenstein bound (A/(4 ln 2)) of the Hubble radius
gives 2.96E122 bits and a proton radius of 1.959E-15 m (1.959 fm). The
Compton wavelength (h/mc) is 1.321 fm and the measured radius using
electron scattering is 0.84-0.87 fm.
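
Putting that together in a Python sketch (mine, not Matt's code; it assumes
one bit of horizon entropy per proton-sized sphere of Hubble volume, with
rough constants) reproduces the 1.959 fm figure:

import math

c, hbar, G = 2.998e8, 1.054e-34, 6.674e-11
T = 4.35e17                               # age of the universe, s

lP   = math.sqrt(hbar * G / c**3)         # Planck length
R    = c * T                              # Hubble radius
bits = (4 * math.pi * R**2 / lP**2) / (4 * math.log(2))  # ~2.95e122 bits
Vbit = (4/3) * math.pi * R**3 / bits      # Hubble volume per bit
r    = (3 * Vbit / (4 * math.pi))**(1/3)  # radius of a bit-sized sphere

print(f"{bits:.2e} bits -> r = {r*1e15:.3f} fm")   # ~1.96 fm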

One problem is Tc is not the actual size of the universe. The event horizon
we see at 13.8B light years is now 46B light years away, and would be
infinite if the universe weren't accelerating away. And this is just part
of a larger universe of unknown size that we can't see because it would
take longer than the age of the universe for its light to reach us.

Also the Bekenstein bound is just an upper bound on entropy that is only
reached for black holes. The universe apparently only has 31% of this mass
(4% stars and 27% dark matter, which I presume consists of ordinary matter
in smaller objects not orbiting stars. Dark matter forms halos around
galaxies, exactly where we would expect rogue planets and comets to be
scattered).

The other 69% is dark energy, which is what ordinary gravity would look
like to an observer falling into a universe sized black hole. The event
horizon would appear to wrap around due to gravitational lensing and cause
other galaxies to accelerate away in all directions. If this is so, then
there should be a small opening, which I believe is the CMB cold spot
behind the Eridanus void, the largest known region of empty space in the
universe. We really should point Hubble or JWST at it to see what's there.

How is this related to AGI? It is pretty obvious that this universe is
finely tuned to be compatible with intelligent life. If the relative masses
of the proton and neutron were different, or if G, c, h, or alpha differed
much, then stars would not undergo fusion or go supernova and scatter the
right elements to form planets with complex molecules. The anthropic
principle suggests that the universe is as big as it needs to be. There are
10^24 planets in the observable universe and an unknown number, immensely
larger, beyond that. You only need one to evolve life. If you were hoping
to throw together some chemicals and see molecules start to self replicate
and evolve, you may be waiting a long long time.

On Thu, Apr 4, 2024, 4:55 PM James Bowery  wrote:

> I suppose it is worth pointing out that there is another CH4 coincidence,
> not quite  as impressive as the protonAlphaG coincidence, but involving
> multiplying the 1/2 electron spin by 2 for a full return to its original
> phase:
>
> 0.8±0.15% relative error with the light age of the universe
>
> (* Electron Phase Factor 1 and Light Age of the Universe *)
> ReducedElectronComptonWavelength=codata["ElectronComptonWavelength"]/(2*Pi)
> FullSpinElectron = 2 * ReducedElectronComptonWavelength (* 720 degrees =
> spin 1 *)
> LightAgeUniverseCH4=UnitConvert[CH4*FullSpinElectron,"LightYear"]
> LightAgeUniverse =
> UnitConvert[codata["UniverseAge"]*codata["SpeedOfLight"],"LightYear"]
> RelativeError[LightAgeUniverse,LightAgeUniverseCH4]
> (3.8615926796(12)*10^-13) m
> (7.7231853592(23)*10^-13) m
> (1.3889328112(4)*10^10) ly
> = (1.3778±0.0020)*10^10 ly
> = 0.0081±0.0015
>
> On Wed, Apr 3, 2024 at 1:38 PM James Bowery  wrote:
>
>> BTW* These proton, gravitation Large Number Coincidences are strong
>> enough that it pretty much rules out the idea 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread James Bowery
I suppose it is worth pointing out that there is another CH4 coincidence,
not quite  as impressive as the protonAlphaG coincidence, but involving
multiplying the 1/2 electron spin by 2 for a full return to its original
phase:

0.8±0.15% relative error with the light age of the universe

(* Electron Phase Factor 1 and Light Age of the Universe *)
ReducedElectronComptonWavelength=codata["ElectronComptonWavelength"]/(2*Pi)
FullSpinElectron = 2 * ReducedElectronComptonWavelength (* 720 degrees =
spin 1 *)
LightAgeUniverseCH4=UnitConvert[CH4*FullSpinElectron,"LightYear"]
LightAgeUniverse =
UnitConvert[codata["UniverseAge"]*codata["SpeedOfLight"],"LightYear"]
RelativeError[LightAgeUniverse,LightAgeUniverseCH4]
(3.8615926796(12)*10^-13) m
(7.7231853592(23)*10^-13) m
(1.3889328112(4)*10^10) ly
= (1.3778±0.0020)*10^10 ly
= 0.0081±0.0015

On Wed, Apr 3, 2024 at 1:38 PM James Bowery  wrote:

> BTW* These proton, gravitation Large Number Coincidences are strong enough
> that it pretty much rules out the idea that gravitational phenomena can be
> attributed to anything but hadronic matter -- and that includes the 80% or
> so of gravitational phenomena attributed sometimes to "dark" matter.   So,
> does this mean some form of MOND (caused by hadronic matter)  and/or
> alternatively, some weakly interacting form of hadronic matter is
> necessary?
>
> * and I realize this is getting pretty far removed from anything relevant
> to practical "AGI" except insofar as the richest man in the world (last I
> heard) was the guy who wants to use it to discover what makes "the
> simulation" tick (xAI) and he's the guy who founded OpenAI, etc.
>
> On Wed, Apr 3, 2024 at 1:23 PM James Bowery  wrote:
>
>> Mark Rohrbaugh's formula, that I used to calculate the proton radius to a
>> higher degree of precision than QED or current measurements, results in a
>> slightly higher relative error with respect to the Hubble Surface
>> prediction, but that could be accounted for by the 11% tolerance in the
>> Hubble Surface calculation derived from the Hubble Radius, or the 2%
>> tolerance in the Hubble Volume calculation taken in ratio with the proton
>> volume calculated from the proton radius:
>>
>>
>> pradiusRohrbaugh = (8.4123564135(26)*10^-16) m
>> pradiusRohrbaughPL=UnitConvert[pradiusRohrbaugh,"PlanckLength"]
>> pvolumeRohrbaugh=(4/3) Pi pradiusRohrbaughPL^3
>> h2pvolumeRohrbaugh=codata["HubbleVolume"]/pvolumeRohrbaugh
>> RelativeError[QuantityMagnitude[h2pvolumeRohrbaugh],QuantityMagnitude[hsurface]]
>> (8.4123564135(26)*10^-16) m
>> (5.2048447884(16)*10^19) l_P
>> (5.906251806(5)*10^59) l_P^3
>> = (1.025±0.019)*10^123
>> = -0.123±0.022
>>
>>
>>
>> On Tue, Apr 2, 2024 at 9:16 AM James Bowery  wrote:
>>
>>> I get it now:
>>>
>>> pradius = UnitConvert[codata["ProtonRMSChargeRadius"],"PlanckLength"]
>>> = (5.206±0.012)*10^19 l_P
>>> pvolume=(4/3) Pi pradius^3
>>> = (5.91±0.04)*10^59 l_P^3
>>> h2pvolume=codata["HubbleVolume"]/pvolume
>>> = (1.024±0.020)*10^123
>>> hsurface=UnitConvert[4 Pi codata["HubbleLength"]^2,"PlanckArea"]
>>> = (8.99±0.11)*10^122 l_P^2
>>> RelativeError[QuantityMagnitude[h2pvolume],QuantityMagnitude[hsurface]]
>>> = -0.122±0.023
>>>
>>> As Dirac-style "Large Number Coincidences" go, a -12±2% relative error
>>> is quite remarkable since Dirac was intrigued by coincidences with orders
>>> of magnitude errors!
>>>
>>> However, get a load of this:
>>>
>>> CH4=2^(2^(2^(2^2-1)-1)-1)-1
>>> = 170141183460469231731687303715884105727
>>> protonAlphaG=(codata["PlanckMass"]/codata["ProtonMass"])^2
>>> = (1.69315±0.00004)*10^38
>>> RelativeError[protonAlphaG,CH4]
>>> = 0.004880±0.000022
>>>
>>> 0.5±0.002% relative error!
>>>
>>> Explain that.
>>>
>>>
>>> On Sun, Mar 31, 2024 at 9:45 PM Matt Mahoney 
>>> wrote:
>>>
 On 

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread John Rose
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote:
> * and I realize this is getting pretty far removed from anything relevant to 
> practical "AGI" except insofar as the richest man in the world (last I heard) 
> was the guy who wants to use it to discover what makes "the simulation" tick 
> (xAI) and he's the guy who founded OpenAI, etc.

This is VERY interesting, James, and a useful exercise; it does all relate. We 
might be able to find some answers by looking at the code you are pasting. I 
haven't seen it presented in this way; it's sort of like reworking a macro/micro 
view. Many people pursuing AGI are approaching "the simulation" source code, 
either knownst or unbeknownst to themselves. As a youngster I realized that the 
key to understanding everything was in the relationship between the big and the 
small, and that still seems to be true.
 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Md441902c49d7fc2595fdacdf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-03 Thread James Bowery
BTW* These proton, gravitation Large Number Coincidences are strong enough
that it pretty much rules out the idea that gravitational phenomena can be
attributed to anything but hadronic matter -- and that includes the 80% or
so of gravitational phenomena attributed sometimes to "dark" matter.   So,
does this mean some form of MOND (caused by hadronic matter)  and/or
alternatively, some weakly interacting form of hadronic matter is
necessary?

* and I realize this is getting pretty far removed from anything relevant
to practical "AGI" except insofar as the richest man in the world (last I
heard) was the guy who wants to use it to discover what makes "the
simulation" tick (xAI) and he's the guy who founded OpenAI, etc.

On Wed, Apr 3, 2024 at 1:23 PM James Bowery  wrote:

> Mark Rohrbaugh's formula, that I used to calculate the proton radius to a
> higher degree of precision than QED or current measurements, results in a
> slightly higher relative error with respect to the Hubble Surface
> prediction, but that could be accounted for by the 11% tolerance in the
> Hubble Surface calculation derived from the Hubble Radius, or the 2%
> tolerance in the Hubble Volume calculation taken in ratio with the proton
> volume calculated from the proton radius:
>
>
> pradiusRohrbaugh = (8.4123564135(26)*10^-16) m
> pradiusRohrbaughPL=UnitConvert[pradiusRohrbaugh,"PlanckLength"]
> pvolumeRohrbaugh=(4/3) Pi pradiusRohrbaughPL^3
> h2pvolumeRohrbaugh=codata["HubbleVolume"]/pvolumeRohrbaugh
> RelativeError[QuantityMagnitude[h2pvolumeRohrbaugh],QuantityMagnitude[hsurface]]
> (8.4123564135(26)*10^-16) m
> (5.2048447884(16)*10^19) l_P
> (5.906251806(5)*10^59) l_P^3
> = (1.025±0.019)*10^123
> = -0.123±0.022
>
>
>
> On Tue, Apr 2, 2024 at 9:16 AM James Bowery  wrote:
>
>> I get it now:
>>
>> pradius = UnitConvert[codata["ProtonRMSChargeRadius"],"PlanckLength"]
>> = (5.206±0.012)*10^19 l_P
>> pvolume=(4/3) Pi pradius^3
>> = (5.91±0.04)*10^59 l_P^3
>> h2pvolume=codata["HubbleVolume"]/pvolume
>> = (1.024±0.020)*10^123
>> hsurface=UnitConvert[4 Pi codata["HubbleLength"]^2,"PlanckArea"]
>> = (8.99±0.11)*10^122 l_P^2
>> RelativeError[QuantityMagnitude[h2pvolume],QuantityMagnitude[hsurface]]
>> = -0.122±0.023
>>
>> As Dirac-style "Large Number Coincidences" go, a -12±2% relative error is
>> quite remarkable since Dirac was intrigued by coincidences with orders of
>> magnitude errors!
>>
>> However, get a load of this:
>>
>> CH4=2^(2^(2^(2^2-1)-1)-1)-1
>> = 170141183460469231731687303715884105727
>> protonAlphaG=(codata["PlanckMass"]/codata["ProtonMass"])^2
>> = (1.69315±0.00004)*10^38
>> RelativeError[protonAlphaG,CH4]
>> = 0.004880±0.000022
>>
>> 0.5±0.002% relative error!
>>
>> Explain that.
>>
>>
>> On Sun, Mar 31, 2024 at 9:45 PM Matt Mahoney 
>> wrote:
>>
>>> On Sun, Mar 31, 2024, 9:46 PM James Bowery  wrote:
>>>
 Proton radius is about 5.2e19 Plank Lengths

>>>
>>> The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So
>>> 3.77e123 protons could be packed inside this sphere with surface area
>>> 8.22e122 Planck areas.
>>>
>>> The significance of the Planck area is it bounds the entropy within to
>>> A/4 nats, or 2.95e122 bits. This makes a bit the size of 12.7 protons, or
>>> about a carbon nucleus. https://en.wikipedia.org/wiki/Bekenstein_bound
>>>
>>> 12.7 is about 4 x pi. It is a remarkable coincidence to derive
>>> properties of particles from only G, h, c, and the age of the universe.
>>>


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mf1cab12f23ac245a8928deaa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-03 Thread James Bowery
Mark Rohrbaugh's formula, that I used to calculate the proton radius to a
higher degree of precision than QED or current measurements, results in a
slightly higher relative error with respect to the Hubble Surface
prediction, but that could be accounted for by the 11% tolerance in the
Hubble Surface calculation derived from the Hubble Radius, or the 2%
tolerance in the Hubble Volume calculation taken in ratio with the proton
volume calculated from the proton radius:

pradiusRohrbaugh = (8.4123564135(26)*10^-16) m
pradiusRohrbaughPL=UnitConvert[pradiusRohrbaugh,"PlanckLength"]
pvolumeRohrbaugh=(4/3) Pi pradiusRohrbaughPL^3
h2pvolumeRohrbaugh=codata["HubbleVolume"]/pvolumeRohrbaugh
RelativeError[QuantityMagnitude[h2pvolumeRohrbaugh],QuantityMagnitude[hsurface]]
(8.4123564135(26)*10^-16) m
(5.2048447884(16)*10^19) l_P
(5.906251806(5)*10^59) l_P^3
= (1.025±0.019)*10^123
= -0.123±0.022



On Tue, Apr 2, 2024 at 9:16 AM James Bowery  wrote:

> I get it now:
>
> pradius = UnitConvert[codata["ProtonRMSChargeRadius"],"PlanckLength"]
> = (5.206±0.012)*10^19 l_P
> pvolume=(4/3) Pi pradius^3
> = (5.91±0.04)*10^59 l_P^3
> h2pvolume=codata["HubbleVolume"]/pvolume
> = (1.024±0.020)*10^123
> hsurface=UnitConvert[4 Pi codata["HubbleLength"]^2,"PlanckArea"]
> = (8.99±0.11)*10^122 l_P^2
> RelativeError[QuantityMagnitude[h2pvolume],QuantityMagnitude[hsurface]]
> = -0.122±0.023
>
> As Dirac-style "Large Number Coincidences" go, a -12±2% relative error is
> quite remarkable since Dirac was intrigued by coincidences with orders of
> magnitude errors!
>
> However, get a load of this:
>
> CH4=2^(2^(2^(2^2-1)-1)-1)-1
> = 170141183460469231731687303715884105727
> protonAlphaG=(codata["PlanckMass"]/codata["ProtonMass"])^2
> = (1.69315±0.00004)*10^38
> RelativeError[protonAlphaG,CH4]
> = 0.004880±0.000022
>
> 0.5±0.002% relative error!
>
> Explain that.
>
>
> On Sun, Mar 31, 2024 at 9:45 PM Matt Mahoney 
> wrote:
>
>> On Sun, Mar 31, 2024, 9:46 PM James Bowery  wrote:
>>
>>> Proton radius is about 5.2e19 Plank Lengths
>>>
>>
>> The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So
>> 3.77e123 protons could be packed inside this sphere with surface area
>> 8.22e122 Planck areas.
>>
>> The significance of the Planck area is it bounds the entropy within to
>> A/4 nats, or 2.95e122 bits. This makes a bit the size of 12.7 protons, or
>> about a carbon nucleus. https://en.wikipedia.org/wiki/Bekenstein_bound
>>
>> 12.7 is about 4 x pi. It is a remarkable coincidence to derive properties
>> of particles from only G, h, c, and the age of the universe.
>>
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M17fccdbdbf49f194fe6532ef
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-02 Thread James Bowery
I get it now:

pradius = UnitConvert[codata["ProtonRMSChargeRadius"],"PlanckLength"]
= (5.206±0.012)*10^19 l_P
pvolume=(4/3) Pi pradius^3
= (5.91±0.04)*10^59 l_P^3
h2pvolume=codata["HubbleVolume"]/pvolume
= (1.024±0.020)*10^123
hsurface=UnitConvert[4 Pi codata["HubbleLength"]^2,"PlanckArea"]
= (8.99±0.11)*10^122 l_P^2
RelativeError[QuantityMagnitude[h2pvolume],QuantityMagnitude[hsurface]]
= -0.122±0.023

As Dirac-style "Large Number Coincidences" go, a -12±2% relative error is
quite remarkable since Dirac was intrigued by coincidences with orders of
magnitude errors!

However, get a load of this:

CH4=2^(2^(2^(2^2-1)-1)-1)-1
= 170141183460469231731687303715884105727
protonAlphaG=(codata["PlanckMass"]/codata["ProtonMass"])^2
= (1.69315±0.00004)*10^38
RelativeError[protonAlphaG,CH4]
= 0.004880±0.000022

0.5±0.002% relative error!

Explain that.
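
For what it's worth, a plain Python rendering of the same check (my sketch;
rough CODATA central values only, uncertainties ignored) confirms the numbers:

m_planck = 2.176434e-8     # Planck mass, kg
m_proton = 1.67262192e-27  # proton mass, kg

CH4 = 2**(2**(2**(2**2 - 1) - 1) - 1) - 1   # = 2^127 - 1
protonAlphaG = (m_planck / m_proton)**2     # ~1.69315e38

print(CH4)                     # 170141183460469231731687303715884105727
print(f"{protonAlphaG:.5e}")
print(f"rel. err = {(CH4 - protonAlphaG)/protonAlphaG:+.5f}")  # ~+0.00488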


On Sun, Mar 31, 2024 at 9:45 PM Matt Mahoney 
wrote:

> On Sun, Mar 31, 2024, 9:46 PM James Bowery  wrote:
>
>> Proton radius is about 5.2e19 Plank Lengths
>>
>
> The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So
> 3.77e123 protons could be packed inside this sphere with surface area
> 8.22e122 Planck areas.
>
> The significance of the Planck area is it bounds the entropy within to A/4
> nats, or 2.95e122 bits. This makes a bit the size of 12.7 protons, or about
> a carbon nucleus. https://en.wikipedia.org/wiki/Bekenstein_bound
>
> 12.7 is about 4 x pi. It is a remarkable coincidence to derive properties
> of particles from only G, h, c, and the age of the universe.
>
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M035b6d3a4509d0706e916fef
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread Matt Mahoney
On Sun, Mar 31, 2024, 9:46 PM James Bowery  wrote:

> Proton radius is about 5.2e19 Plank Lengths
>

The Hubble radius is 13.8e9 light-years = 8.09e60 Planck lengths. So
3.77e123 protons could be packed inside this sphere with surface area
8.22e122 Planck areas.

The significance of the Planck area is it bounds the entropy within to A/4
nats, or 2.95e122 bits. This makes a bit the size of 12.7 protons, or about
a carbon nucleus. https://en.wikipedia.org/wiki/Bekenstein_bound

12.7 is about 4 x pi. It is a remarkable coincidence to derive properties
of particles from only G, h, c, and the age of the universe.
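
Those numbers are straightforward to reproduce. A Python sketch of mine,
taking the Hubble radius (13.8e9 ly) and proton radius in Planck units from
the figures above:

import math

R_lp = 8.09e60   # Hubble radius in Planck lengths
r_lp = 5.2e19    # proton radius in Planck lengths

protons = (R_lp / r_lp)**3          # protons packed in the sphere, ~3.77e123
area    = 4 * math.pi * R_lp**2     # surface area in Planck areas, ~8.22e122
bits    = area / (4 * math.log(2))  # Bekenstein bound in bits

print(f"protons: {protons:.2e}, bits: {bits:.2e}")
print(f"protons per bit: {protons/bits:.1f} (4*pi = {4*math.pi:.1f})")  # ~12.7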


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me023643f4fef1483cfab3ad6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread Matt Mahoney
Alpha is the square of the ratio of Stoney units to Planck units. Stoney
units are based on the unit electric charge instead of Planck's constant,
and are 11.7 times smaller.
https://en.m.wikipedia.org/wiki/Natural_units

Alpha was once thought to be rational (1/137) but all we know for sure is
that it is computable, unlike the vast majority of real numbers, because it
exists in a finitely computable universe. That doesn't mean there is a
faster algorithm than the ~10^122 qubit operations since the big bang, even
if we discover that the code for the universe is only a few hundred bits.
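
The relation is easy to verify. A Python sketch (mine; it assumes the
standard definitions l_Stoney = sqrt(G e^2/(4 pi eps0 c^4)) and
l_Planck = sqrt(hbar G/c^3)):

import math

G    = 6.674e-11    # gravitational constant
c    = 2.998e8      # speed of light
hbar = 1.054e-34    # reduced Planck constant
e    = 1.602e-19    # elementary charge
eps0 = 8.854e-12    # vacuum permittivity

l_planck = math.sqrt(hbar * G / c**3)
l_stoney = math.sqrt(G * e**2 / (4 * math.pi * eps0 * c**4))

ratio = l_stoney / l_planck               # = sqrt(alpha)
print(f"ratio^2 = {ratio**2:.6f}")        # ~0.007297, i.e. ~1/137
print(f"Planck/Stoney = {1/ratio:.1f}")   # ~11.7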


On Sun, Mar 31, 2024, 2:14 PM James Bowery  wrote:

> On Sat, Mar 30, 2024 at 9:54 AM Matt Mahoney 
> wrote:
>
>> ...We can measure the fine structure constant to better than one part per
>> billion. It's physics. It has nothing to do with AGI...
>
>
> In  private communication one of the ANPA founders told me that at one
> time there were as many as 400 distinct ways of measuring the fine
> structure constant -- all theoretically related.
>
> As with a recent controversy over the anomalous g-factor or the proton
> radius, the assumptions underlying these theoretic relations can go
> unrecognized until enough, what is called, "tension" arises between theory
> and observation.  At that point people may get  serious about doing what
> they should have been doing from the outset:
>
> Compiling the measurements in a comprehensive data set and subjecting it
> to what amounts to algorithmic information approximation.
>
> This should, in fact, be the way funding is allocated: Going only to those
> theorists that improve the lossless compression of said dataset.
>
> A huge part of the problem here is a deadlock into a deadly embrace
> between scientists' need for funding and the politics of funding:
>
> 1) Scientists rightfully complain that there isn't enough money available
> to "waste" on such objective competitions since it is *really* hard work,
> including both human and computation work that is very costly.
>
> 2) Funding sources, such as NSF, don't plow money into said prize
> competitions (as Matt suggested the NSF do for a replacement for the
> Turing Test with compression clear back in 1999)
>  
> because
> all they hear from scientists is that such prize competitions can't work --
> (not that they can't work because of a lack of funding).
>
> There, is, of course, the ethical conflicts of interest involving:
>
> 1) Scientists that don't want to be subjected to hard work in which their
> authority is questioned by some objective criterion.
>
> 2) Politicians posing as competent bureaucrats who don't want an objective
> way of dispensing science funding because that would reduce their degree of
> arbitrary power.
>
> Nor is any of the above to be taken to mean that AGI is dependent on this
> approach to such pure number derivation of natural science parameters.
>
> But there *is* reason to believe that principled and rigorous approaches
> to the natural sciences may lead many down the path toward a more effective
> foundation for mathematics -- a path that I described in the OP.  This may,
> in turn, shed light on the structure of the empirical world that Bertrand
> Russell lamented lacked due to the failure of his Relation Arithmetic to
> take root and, in fact, be supplanted by Tarski's travesty called "model
> theory".
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me4d0bcfc0747948b05c39165
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-31 Thread James Bowery
On Sat, Mar 30, 2024 at 9:54 AM Matt Mahoney 
wrote:

> ...We can measure the fine structure constant to better than one part per
> billion. It's physics. It has nothing to do with AGI...


In  private communication one of the ANPA founders told me that at one time
there were as many as 400 distinct ways of measuring the fine structure
constant -- all theoretically related.

As with a recent controversy over the anomalous g-factor or the proton
radius, the assumptions underlying these theoretic relations can go
unrecognized until enough, what is called, "tension" arises between theory
and observation.  At that point people may get  serious about doing what
they should have been doing from the outset:

Compiling the measurements in a comprehensive data set and subjecting it to
what amounts to algorithmic information approximation.

This should, in fact, be the way funding is allocated: Going only to those
theorists that improve the lossless compression of said dataset.

A huge part of the problem here is a deadlock into a deadly embrace between
scientists' need for funding and the politics of funding:

1) Scientists rightfully complain that there isn't enough money available
to "waste" on such objective competitions since it is *really* hard work,
including both human and computation work that is very costly.

2) Funding sources, such as NSF, don't plow money into said prize
competitions (as Matt suggested the NSF do for a replacement for the Turing
Test with compression clear back in 1999)

because
all they hear from scientists is that such prize competitions can't work --
(not that they can't work because of a lack of funding).

There, is, of course, the ethical conflicts of interest involving:

1) Scientists that don't want to be subjected to hard work in which their
authority is questioned by some objective criterion.

2) Politicians posing as competent bureaucrats who don't want an objective
way of dispensing science funding because that would reduce their degree of
arbitrary power.

Nor is any of the above to be taken to mean that AGI is dependent on this
approach to such pure number derivation of natural science parameters.

But there *is* reason to believe that principled and rigorous approaches to
the natural sciences may lead many down the path toward a more effective
foundation for mathematics -- a path that I described in the OP.  This may,
in turn, shed light on the structure of the empirical world that Bertrand
Russell lamented lacked due to the failure of his Relation Arithmetic to
take root and, in fact, be supplanted by Tarski's travesty called "model
theory".

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M83ab3a14c8c449d907b6fcbc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-03-29 Thread James Bowery
I got involved with the Alternative Natural Philosophy Association back in the 
late 1990s when I hired one of the attendees of the Dartmouth Summer of AI 
Workshop, Tom Etter, to work on the foundation of programming languages.  ANPA 
was founded on the late 1950s discovery of the Combinatorial Hierarchy (CH).  
The CH is a pure combinatorial explosion of discrete mathematics that appeared 
to generate the 4 dimensionless scale constants of physics, the last 2 pure 
numbers (137 and 2^127-1+137) corresponding to α aka the Fine Structure 
Constant and αGproton aka the ratio of proton to Planck mass.  I've recently 
been working with David McGoveran, before he passes away, on generalizing the 
aforelinked Python code for the CH to produce his derivation of the 
proton/electron dimensionless mass ratio under a particular interpretation of 
the CH and the way its levels interact.  If we get that done, we'll have a 
computer program linking up the first two numbers of the CH (3 and 10) with 
the last two under an interpretation of discrete mathematics McGoveran and his 
colleague Pierre Noyes call "program universe".  On the strength of that work 
I applied for a job with xAI since it bears directly on the mission of xAI.  
I, of course, was turned down for any of a variety of reasons but I did ask 
them to at least try to pick David's brains before maggots pick them. 
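
For reference, the CH level sizes follow from iterating n -> 2^n - 1 starting
at n = 2, with the running totals giving the numbers cited above. A minimal
sketch of mine (the real construction, via discriminately closed subsets of
bit strings, is not captured here):

def ch_levels(n_levels=4):
    n, total, levels, cumulative = 2, 0, [], []
    for _ in range(n_levels):
        n = 2**n - 1          # level sizes: 3, 7, 127, 2^127 - 1
        total += n
        levels.append(n)
        cumulative.append(total)
    return levels, cumulative

levels, cumulative = ch_levels()
print(cumulative[:3])   # [3, 10, 137]
print(cumulative[3])    # 2^127 - 1 + 137, ~1.7e38, compared above to αGproton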

Tom was, when I hired him, editor of the ANPA-West journal.  I hired him 
because he'd found a way of factoring out of Quantum Mechanics what he called 
"the quantum core" as a theorem of relational combinatorics in which relational 
extensions aka relation tables could, if one treated them as *count* tables, in 
turn, be treated as a kind of "number".  These "relation numbers" have 
*dimensions* and *probability distributions*.  

By "dimensions" I mean the things we use to characterize numbers arising from 
*measurements* like the number of chickens per the number of sea cucumbers as 
well as the number of kilogram meters per second square.  That was one thing I 
demanded (going back to my 1982 work at VIEWTRON) fall out naturally from the 
mathematical foundation of any programming language.  In other words, I 
absolutely hated with a UV hot passion the fact that the existing foundations 
for programming languages always ended up with kludges to deal with units and 
dimensions.  Another thing I demanded was the treatment of procedural 
programming (1->1 mapping by statements between subsequent states) as a 
degenerate case of functional programming (N->1 mapping ala 3+2->5 & 1+4->5...) 
as a degenerate case of relational programming (N->M mapping).  So he'd handled 
that as well.  Another thing I demanded was some way of naturally emerging 
sqrt(-1), as a pure number, in the treatment of state transitions so that what 
physicists call dynamical systems theory emerges as naturally as dimensioned 
numbers. The fact that he handled fuzzy numbers/logic was beyond what I 
demanded but, hey, there it was!

Tom's "link theory", introduced in the PhysComp 96 conference did all of the 
above by the simple expedient of permitting the counts in his count tables to 
include negative counts (ie: a row in a table being an observational case 
counting as 1 measurement and a -1 measurement being permitted).
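
A toy rendering of that idea (my own reading, not Etter's actual formalism):
store a relation as a table of signed case counts and compose two tables over
a shared column, so that opposite-sign counts can cancel, interference-style:

from collections import defaultdict

def compose(r, s):
    """Compose count tables r: (a,b)->count and s: (b,c)->count."""
    t = defaultdict(int)
    for (a, b1), m in r.items():
        for (b2, c), n in s.items():
            if b1 == b2:
                t[(a, c)] += m * n  # counts multiply along paths, sum over paths
    return dict(t)

# Two paths from 'x' to 'z' with opposite-sign counts cancel:
r = {('x', 'u'): 1, ('x', 'v'): 1}
s = {('u', 'z'): 1, ('v', 'z'): -1}
print(compose(r, s))   # {('x', 'z'): 0} -- destructive cancellation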

Tom was a friend of Ray Solomonoff's (although I didn't discover that until 
years after both Tom and Ray had passed away), and they apparently arrived 
early at the Dartmouth Workshop together. 

So I'm not claiming there is nothing of value to AGI to be found in the search 
for the minimum-length description of the origin of pure number parameters in 
natural philosophy, but let's be practical here.

Statistical mechanics was not necessary for the Navier–Stokes equations, even 
though the foundation of both in *calculus* existed well before either.  
Wolfram can palaver all he wants to about "computational irreducibility" -- 
something that was recognized by mathematicians and physicists centuries before 
he coined that neologism -- but that is a red herring when considering the 
foundation of AGI in Solomonoff's Algorithmic Information Theoretic proofs or 
in my own search for a programming language with which one might code said 
algorithms.

The fact that it is hopeless to construct a "shortest program" that predicts 
what the universe will do for any of a variety of reasons (including that it is 
"computationally irreducible" in the sense that its predictions can't be 
computed prior to observing what they predict) is neither here nor there in a 
practical sense.

The universe is constructed in such a manner as to permit us to make *useful* 
predictions without making *perfect* predictions.  But we have to admit that, 
for some strange reason, Solomonoff's proof that the