Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Nanograte Knowledge Technologies
A very smart developer might come along one day with a holistic enough view - and the scientific knowledge - to surprise everyone here with a workable model of an AGI. However, having worked with many a developer in a solution engineering sense, and starting off myself as one decades ago, I ca

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread Matt Mahoney
We don't have any way of measuring IQs much over 150 because of the problem of the tested knowing more than the tester. So when we talk about the intelligence of the universe, we can only really measure its computing power, which we generally correlate with prediction power as a measure of intelli

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Matt Mahoney
The OpenCog atomspace was the data structure meant to hold the knowledge base, but it was never filled with knowledge. We have no idea how it would perform if filled with sufficient data for AGI, or how we would go about filling it, or how much effort it would take, or even how big it would have

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Mike Archbold
I thought the "atomspace" was the ~knowledge base? On Fri, May 3, 2024 at 2:54 PM Matt Mahoney wrote: > It could be that everyone still on this list has a different idea on how > to solve AGI, making any kind of team effort impossible. I recall a few > years back that Ben was hiring developers i

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-03 Thread Matt Mahoney
It could be that everyone still on this list has a different idea on how to solve AGI, making any kind of team effort impossible. I recall a few years back that Ben was hiring developers in Ethiopia. I don't know much about Hyperon. I really haven't seen much of anything since the 2009 OpenCog pup

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread John Rose
Expressing the intelligence of the universe is a unique case, versus, say, expressing the intelligence of an agent like a human mind. A human mind is very lossy versus the universe, where there is theoretically no loss. If lossy and lossless were a duality then the universe would be a singularity o

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread John Rose
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote: > It's not easy to prove new theorems in category theory or categorical > logic... though one open problem may be the formulation of fuzzy toposes. Or perhaps a neutrosophic topos; Florentin Smarandache has written much interest

Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread James Bowery
On Thu, May 2, 2024 at 9:56 AM Matt Mahoney wrote: > ... > Prediction measures intelligence. Compression measures prediction. > Beautiful Aphorism! The aphorism captures both of AIXI's components: AIT (Compression) and SDT (Prediction). The only specious quibble left for the anti-intelligence
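[Editor's note: the "compression measures prediction" link in the message above can be made concrete. Under an ideal (Shannon/arithmetic) code, a model that assigns probability p to each observed symbol pays -log2(p) bits for it, so a better predictor literally produces shorter compressed output. A minimal sketch; the two predictors and their probabilities here are invented purely for illustration:]

```python
import math

def code_length_bits(probs):
    """Ideal Shannon code length in bits for a sequence, given the
    probability the predictor assigned to each symbol that actually
    occurred (-log2 p per symbol, summed)."""
    return sum(-math.log2(p) for p in probs)

# Two hypothetical predictors scoring the same 4-symbol sequence.
# The better predictor assigns higher probability to what occurs.
good_predictor = [0.9, 0.8, 0.9, 0.7]   # confident and mostly right
poor_predictor = [0.5, 0.5, 0.5, 0.5]   # uninformed: 1 bit/symbol

good_bits = code_length_bits(good_predictor)  # ~1.14 bits total
poor_bits = code_length_bits(poor_predictor)  # exactly 4 bits

print(f"good predictor: {good_bits:.2f} bits")
print(f"poor predictor: {poor_bits:.2f} bits")
```

[Sharper prediction (probabilities closer to 1 on the observed symbols) shrinks the code length, which is the sense in which compression measures prediction.]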