[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
In the spectrum of computable AIXI models, some known and some unknown, 
certainly some are better than others, and there must be features for favoring 
those. This paper discusses some of them:

https://arxiv.org/abs/1805.08592
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taf667527679b18c3-M4e6c1e43172af7194b8db778
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
On Sunday, July 09, 2023, at 2:19 PM, James Bowery wrote:
>> Good predictors (including AIXI, other AI, and lossless compression)
>>  are necessarily complex...
>>  Two examples:
>>  1. SINDy, mentioned earlier, predicts a time series of real numbers by
>>  testing against a library of different functions and choosing the
>>  simplest combination. The bigger the library, the better it works.
> 
> Predictors deduce from previously compressed, or induced, models of 
> observations or data.
> 
> The dynamical models _produced_ by SINDy do not contain the library of 
> different functions contained in the SINDy program.
> 
> It is the dynamical models produced by AIT that do the predicting when called 
> upon by the SDT aspect of AIXI.

A "mathematical compression" would represent the libraries in a mathematically 
dense form so you can produce more libraries in a constrained compressor. 
Basically mathematically re-representing math iteratively using various means 
and heavily utilizing abstract algebraic structure. Math compresses into 
itself. And strings to be compressed are formulas that can be recognized. All 
strings are mathematical formulas... so a lossless compression would look for 
the shortest mathematical representation... the more mathematically intelligent 
the compressor the better the relative compression.
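
To make that concrete, here is a toy sketch (my own construction, not from the 
paper): a brute-force search over a tiny expression grammar for the shortest 
arithmetic formula in n that regenerates a target integer sequence, i.e. 
lossless compression treated as a hunt for the shortest mathematical representation.

from itertools import product

TARGET = [1, 4, 9, 16, 25, 36, 49, 64]   # the "string" we want to compress

ATOMS = ["n", "1", "2", "3"]
OPS = ["+", "-", "*", "**"]

def expressions(max_ops):
    # enumerate small expression strings built from ATOMS and OPS
    yield from ATOMS
    for k in range(1, max_ops + 1):
        for parts in product(ATOMS, repeat=k + 1):
            for ops in product(OPS, repeat=k):
                expr = parts[0]
                for op, atom in zip(ops, parts[1:]):
                    expr = f"({expr}{op}{atom})"
                yield expr

def shortest_model(target, max_ops=2):
    # keep the shortest expression that reproduces the target exactly
    best = None
    for expr in expressions(max_ops):
        values = [eval(expr, {"n": n}) for n in range(1, len(target) + 1)]
        if values == target and (best is None or len(expr) < len(best)):
            best = expr
    return best

print(shortest_model(TARGET))   # prints "(n*n)" -- 5 characters instead of 8 numbers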

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taf667527679b18c3-Mfcfd2652d7defc97a6b9aea5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-15 Thread John Rose
Nice one. I didn't realize the importance of circuit complexity. The paper 
discusses some of that.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-M66cdfc79bf200b9a6f634ef0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-15 Thread John Rose
On Friday, July 14, 2023, at 9:00 PM, James Bowery wrote:
> https://www.mdpi.com/1099-4300/25/5/763

ChatGPT is so very useful for AGI research:

Me:  "What is the Kolmogorov complexity of a string of qubits?"

ChatGPT:  "In quantum information theory, the concept analogous to the 
Kolmogorov complexity is known as the quantum Kolmogorov complexity or the 
quantum description length. It represents the length of the shortest quantum 
circuit or algorithm that can generate a given quantum state."
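
A toy single-qubit sketch of that definition (my own assumptions: a tiny gate 
set of H, X, T, and state equality taken up to global phase), brute-forcing the 
shortest gate sequence that prepares a target state:

import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
GATES = {"H": H, "X": X, "T": T}

ket0 = np.array([1, 0], dtype=complex)
target = np.array([1, 1], dtype=complex) / np.sqrt(2)   # the |+> state

def shortest_circuit(target, max_len=3):
    # try circuits of increasing length and return the first exact preparation
    for length in range(max_len + 1):
        for names in product(GATES, repeat=length):
            state = ket0
            for name in names:
                state = GATES[name] @ state
            # compare up to global phase via the fidelity |<target|state>|^2
            if abs(np.vdot(target, state)) ** 2 > 1 - 1e-9:
                return names
    return None

print(shortest_circuit(target))   # ('H',) -- one gate suffices for |+>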

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-M4e86cbc3bffab93cf511cbc2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-24 Thread John Rose
On Sunday, July 23, 2023, at 12:33 PM, Matt Mahoney wrote:
> Right now we have working 10 qubit quantum computers that can factor 
> 48 bit numbers using a different algorithm called QAOA, a huge leap over 
> Shor's algorithm. They estimate RSA-2048 can be broken with 372 qubits and a 
> few thousand gate operations.
> https://arxiv.org/abs/2212.12372
> 
> But before proposing quantum computing as a solution to AGI, understand how 
> they work and what they can do.

Algorand and other cryptos claim quantum security; bitcoin is a dinosaur but 
the biggest and baddest. But yes, point taken on studying Shor's and Grover's 
algorithms... and QAOA.

It’s hard to believe that we can’t factor larger numbers efficiently though… 
something is very wrong there. I wasn’t proposing quantum computing per se as a 
solution to AGI… just researching short algorithmic path finding for hybrid 
classical/quantum optimization. Yes, there is the overlap of compression and 
intelligence. Does General Compression fully equal General Intelligence?

Quantum computing though is a subset of quantum technology, and discoveries are 
pouring in fast and furious. Look at the recent developments in high-fidelity 
qubits and the chiral Bose-liquid state. But if true, plants and bacteria were 
using quantum optimizations long before humans existed, so studying natural 
phenomena may lead to insight. And we are still bacteriological units.

I'm not convinced though that the brain isn't performing quantum computation… we 
don't know enough to say that, do we? And I'm not in the camp that says one 
human brain is a general intelligence. The graph of human brains is, IMO.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-M3c79974e88538655c6348c30
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-25 Thread John Rose
On Tuesday, July 25, 2023, at 10:10 AM, Matt Mahoney wrote:
> Consciousness is what thinking feels like, the positive reinforcement that 
> you want to preserve by not dying.

I use another definition, but taking yours we could say that the Universe 
is simulating itself. Through all the compression simultaneously occurring on 
Earth, for example, improving over time, the Universe is modelling itself by 
utilizing the renewing graph of human brain nodes. Why is it doing that? 
Because the universe is thinking, it feels good, and it's self-reflecting. 
Though at times it gets angry and wants to know its own K complexity and 
throws rocks at Earth, so we need to appease it once in a while with better 
compression ratios.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-M0474922c6ce34291249c2b52
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Visualizing Quantum Circuit Probability: Estimating Quantum State Complexity for Quantum Program Synthesis

2023-07-22 Thread John Rose
On Sunday, July 16, 2023, at 5:18 PM, Matt Mahoney wrote:
> The result is helpful by giving one more reason why quantum computing is not 
> the path to AGI. Quantum computing is often misunderstood as computing an 
> exponential set of paths in parallel, when for most purposes it is actually 
> weaker than classical computing.  Leveraging the power of quantum computing 
> depends on arranging the computation so that the components of the unit 
> vector reinforce for a subset of the qubits, like in computing the quantum 
> Fourier transform for Shor's or Grover's algorithms for cracking crypto. But 
> it doesn't solve NP complete problems in polynomial time or anything like 
> that. The brain is not a quantum computer. Neural networks, and learning in 
> general, are not time reversible.

You don’t think the many-paths simultaneity can be leveraged to find short 
algorithmic paths and/or circuits better than classical?

If you look at quantum photosynthesis theory, photons are used, and to mimic 
that, algorithmic paths would need to be represented in molecules and then excited 
by light absorption into superposition states… but how efficient is 
representing algorithmic complexity in molecules such that the many paths are 
exposed to be transited? It's almost as if the computation would already need to 
be performed in the molecular pathway expression… unless the chemistry was 
arranged in various pre-chaotic states to broaden the array of paths.

But yes, there is still debate on the photosynthesis claims:
https://pubs.acs.org/doi/10.1021/acs.jpclett.2c00538

John     
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2574dcac5560d73-Me1a86ec28a6014a7c87d8085
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Are we entering NI winter?

2023-08-07 Thread John Rose
LLMs are one giant low-hanging proto-AGI fruit that many of us predicted years 
ago. At the time I was thinking everyone else was pursuing that, so I'd do 
something else... They came in much later than I expected though.

https://www.youtube.com/watch?v=-4D8EoI2Aqw
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T469692845b7d2d7e-Me5bc991271069187755f4f7e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: my take on the Singularity

2023-08-07 Thread John Rose
On Sunday, August 06, 2023, at 7:06 PM, James Bowery wrote:
> Better compression requires not just correlation but causation, which is the 
> entire point of going beyond statistics/Shannon Information criteria to 
> dynamics/Algorithmic information criterion.
> 
> Regardless of your values, if you can't converge on a global dynamical model 
> of causation you are merely tinkering with subsystems in an incoherent 
> fashion.  You'll end up robbing Peter to pay Paul -- having unintended 
> consequences affecting your human ecologies -- etc.
> 
> That's why engineers need scientists -- why OUGHT needs IS -- why SDT needs 
> AIT -- etc.

I like listening to non-mainstream music for different perspectives. I wonder 
what Christopher Langan thinks of the IS/OUGHT issue with his atemporal, 
non-dualistic, protocomputational view of determinism/causality. I like the idea 
of getting rid of time… and/or multidimensional time… Also, I'm a big fan of 
free will. Free will gives us a tool to fight totalitarian systems. We can 
choose not to partake in systems, for example modRNA injections and CBDCs. So 
we need to fight to maintain free will, IMHO.

https://www.youtube.com/watch?v=qBjmne9X1VQ

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T772759d8ceb4b92c-M85774330f9c2a75525e1a0ff
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-18 Thread John Rose
Evidence comin' at ya, check out Supplemental Figure 2:

https://zenodo.org/records/8361577


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mb096662703220edbaab50359
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-23 Thread John Rose
…continuing

The science changes when conflicts of interest are removed. This is a fact. And 
a behavior seems to be that injected individuals go into this state of “Where’s 
the evidence?” And when evidence is presented, they can’t acknowledge it or 
grok it and go into a type of loop:

“Where’s the evidence?”
“There is no evidence.”
“Where’s the evidence?”
“There is no evidence.”
…

Understanding loops from computer programming, sometimes they occur on 
exceptions… sometimes they come from a state machine where a particular state 
hasn't been built out yet. Perhaps a new language can be learned via 
programming-language detection/recognition and we can view the code. I had 
suggested that this would be the P#.

But who or what is the programmer? 

Evidently the misfoldings do have effects. An increasing amount of 
post-injection neurodegenerative evidence is being observed, and this is most 
likely related to misfolding. This paper provides some science on significant 
spike-seeded acceleration of amyloid formation:

“An increasing number of reports suggest an association between COVID-19 
infection and initiation or acceleration of neurodegenerative diseases (NDs) 
including Alzheimer’s disease (AD) and Creutzfeldt-Jakob disease (CJD). Both 
these diseases and several other NDs are caused by conversion of human proteins 
into a misfolded, aggregated amyloid fibril state… We here provide evidence of 
significant Spike-amyloid fibril seeded acceleration of amyloid formation of 
CJD associated human prion protein (HuPrP) using an in vitro conversion assay.” 

“…Data from Brogna and colleagues demonstrate that Spike protein produced in 
the host as response to mRNA vaccine, as deduced by specific amino acid 
substitutions, persists in blood samples from 50% of vaccinated individuals for 
between 67 and 187 days after mRNA vaccination (23). Such prolonged Spike 
protein exposure has previously been hypothesized to stem from residual virus 
reservoirs, but evidently this can occur also as consequence of mRNA 
vaccination. “

https://www.biorxiv.org/content/10.1101/2023.09.01.555834v1


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M79f0fd78330318f219c4b110
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-21 Thread John Rose
On Tuesday, December 19, 2023, at 9:47 AM, John Rose wrote:
> On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote:
>> That's just a silly conspiracy theory. Do you think polio and smallpox were 
>> also attempts to microchip us?
> 
> That is a very strong signal in the genomic data. What will be interesting is 
> how this signal changes now that it has been identified. Is it possible that 
> the mutations are self-correcting somehow? The paper is still undergoing peer 
> review with 73,000 downloads so far...

There are multiple ways that genetic mutations can “unmutate” or appear to have 
been unmutated. I'm not familiar enough with GenBank to look at that in regard 
to the study…

But intelligence detection is important in AGI. What might be interesting in 
this systems-signaling analysis is perhaps the frequency of the variants' 
synthesis and dispersal versus the half-life of the injected human test subjects 
producing and emitting spike protein. What are the correlations there?

This study shows up to 187 days of spike emission:
https://pubmed.ncbi.nlm.nih.gov/37650258/

Other "issues" exist though in addition to spike emission. There are misfolded 
protein factors as well as ribosomal frameshifting:
https://www.nature.com/articles/s41586-023-06800-3

BUT, these misfoldings and frameshifts may just appear to be noise or errors 
and may in fact be intentional and utilitarian. We are observing all of this 
from a discovery perspective. Also, the lipid nanoparticles utilized are 
carrier molecules across the blood-brain barrier (BBB). We can measure 
sociological and psychological behavior anomalies externally, but it can be 
difficult to decipher changes that occurred in people's minds individually 
after they got the injections...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mbad6e64e7d9263447bf7ffe4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-01-15 Thread John Rose
...continuing P# research…

Though I will say that the nickname for P# code used for authoritarian and 
utilitarian zombification is Z#, for zombie cybernetic script. And as for language 
innovation, which seems scarce lately since many new programming languages are 
syntactic rehashes, new intelligence-inspired innovative representations are 
imperative.

AGI/Singularity are commonly thought of as an immanentizing eschaton:
https://en.wikipedia.org/wiki/Immanentize_the_eschaton
But before that imagined scalar event horizon there are noticeable 
reconfigurations in systems that might essentially be self-organization in an 
emergent autopoiesis. Entertaining that, as well as a potentially unfriendly 
AGI-like GloboCap in the crosshairs which is wielding new obligatory digitized 
fiat coupled with medical-based tyranny (CBDC), there is evidence that can be 
dot-connected into a broader configurative view. And with a potentially 
emergent or emerged intelligence preparing to dominate, we need to attempt to 
negotiate the best deal for humanity instead of having unseen and unknown 
figures, whoever or whatever they are, engineering a new feudal system while we 
still have the capability:

https://youtu.be/4MrIsXDKrtE?t=14359

https://thegreattaking.com/

https://www.uimedianetwork.com/293534/the-great-setup-part1.htm

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M08dc9498c96683f9c3924c19
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-01-08 Thread John Rose
…continuing P# research…

This book by Dr. Michael Nehls, “The Indoctrinated Brain”, offers an interesting 
neuroscience explanation and self-defense tips on how the contemporary 
zombification of human minds is being implemented. Essentially, he describes a 
mental immune system, and there is a sustained attack on the autobiographical 
memory center. The mental immune system involves “index neurons” which are 
created nightly. Index neuron production is the neural correlate of natural 
curiosity. To manipulate a population, the neurogenesis is blocked via 
neuroinflammation so that people's ability to think independently is hacked and 
replaced with indoctrinated scripts. The continual creation of crises 
facilitates this. The result is that individuals spontaneously and 
uncontrollably blurt narrative phrases like “safe and effective” and 
“conspiracy theory” from propaganda sources when challenged to independently 
think critically on something like the kill-shots… essentially acting as 
memetic switches and routers. The goal is to strengthen the topology of this 
network of intellectually castrated zombies, or zomb-net, that programmatically 
obeys a centralized command intelligence:

https://rumble.com/v42conr-neurohacking-exposed-dr.-michael-nehls-reveals-how-the-global-mind-manipula.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M861d73d982b6cb6575bb6c5e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote:
> That's just a silly conspiracy theory. Do you think polio and smallpox were 
> also attempts to microchip us?

That is a very strong signal in the genomic data. What will be interesting is 
how this signal changes now that it has been identified. Is it possible that 
the mutations are self-correcting somehow? The paper is still undergoing peer 
review with 73,000 downloads so far...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M9bef38f970d3fcbd86376af7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-19 Thread John Rose
On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote:
> I'm not sure what your point is.

The paper shows that the variants are of genomically generative, non-mutative 
origination. Look at the step-ladder pattern in the mutation diagrams showing 
corrected previous mutations on each variant. IOW they are getting artificially 
and systemically synthesized and dispersed. Keep in mind the variants are meant 
to drive the smart-ape injections. BTW, good job by the researchers analyzing 
the GenBank data.

On Monday, December 18, 2023, at 9:31 PM, Matt Mahoney wrote:
> But I'm not sure what this has to do with AGI except to delay it for a couple 
> of years.

How do you know that AGI isn't deployed yet?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mafbfd6f5016f26536ba3c37c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
People are going to go Amish.

Faraday clothingware is gaining traction for the holidays. 

And mobile carriers are offering the iPhone 15 upgrade for next to nothing. I 
need someone to confirm that Voice-to-Skull is NOT in the 15 series but I keep 
getting blank stares…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mb05a84e6219f0149a5f09798
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-05 Thread John Rose
On Tuesday, December 05, 2023, at 2:14 AM, Alan Grimes wrote:
> It's been said that the collective IQ of humanity rises with every 
> vaccine death... I'm still waiting for it to reach room temperature...

It's not all bad news. I heard that in some places unvaxxed sperm is going for 
$1200 a pop. And unvaxxed blood is fetching an increasing premium...

Sorry Matt, it doesn’t scale with the number of shots  >=)

Was asking around for a friend… people gotta pay bills ‘n stuff.

https://rumble.com/v3ofzq9-klaus-schwabs-greatest-hits.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Maa7ad3866377b34ed3d49679
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-06 Thread John Rose
On Tuesday, December 05, 2023, at 9:53 AM, James Bowery wrote:
> The anti-vaxers, in the final analysis, and at an inchoate level, want to be 
> able to maintain strict migration into their territories of virulent agents 
> of whatever level of abstraction.  That is what makes the agents of The 
> Unfriendly AGI Known As The Global Economy treat them as the "moral" 
> equivalent of "xenophobes":  to be feared and controlled by any means 
> necessary.

The concept of GloboCap from CJ Hopkins, which I thought was brilliant, can 
indeed be viewed as an Unfriendly AGI:
https://youtu.be/-n2OhCuf8_s

These fiat systems though are at least semi-cyclic through time. We are at the 
end of various cycles here, including a fiat cycle; next is a digital control 
system, and this time is different but full of opportunities and dangers.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M813aead2a2f32726c8a69005
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Lexical model learning for LLMs

2023-11-23 Thread John Rose
A compression ratio of 0.1072 seems like there is plenty of room left. What is 
the estimate for the best achievable ratio, something like 0.08 to 0.04? Though 
0.04 might be impossibly tight... even at 0.05 the resource consumption has got 
to grow exponentially out of control unless there are overlooked discoveries 
yet to be made.
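
A quick back-of-envelope sketch, assuming the ratio means compressed bytes over 
original bytes on a 10^9-byte corpus such as enwik9 (that corpus size is my 
assumption, not stated above):

CORPUS_BYTES = 1_000_000_000   # assumed corpus size (e.g. enwik9)

for ratio in (0.1072, 0.08, 0.05, 0.04):
    compressed_mb = ratio * CORPUS_BYTES / 1e6
    print(f"ratio {ratio:.4f} -> ~{compressed_mb:.0f} MB compressed")

So going from 0.1072 down to 0.05 would mean squeezing out roughly another 57 MB.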

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdc371ce11a040352-Mb338eb5e669be29a59811a86
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote:
> AGI gives whatever we want so that is the end of us, so idiotic conclusion, 
> sorry.

Although I would say, after looking at the definition of dystopia, and once one 
fully understands the gravity of what is happening, it is already globally 
dystopic, by far.

An intentionally sustained ACM (all-cause mortality) burn rate increasingly 
tweaked up by artificially intelligent actors, while at the same time 
mindscrewing the masses into a lemming-like state, and defending it, including 
top thinkers and scientists; what's the terminology for that?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M088429e6d9556972fbf0f71a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Sunday, December 03, 2023, at 7:59 AM, James Bowery wrote:
> 
> A dream to some, a nightmare to others.  
> 

All those paleolithic megaliths around the globe… hmmm… could they be from 
previous human technological cycles? 
Unless there's some supercyclic AI keepin' us down, now that's conspiracy 
theory :) Bleeding off elite souls from the NPCs.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M1e1775cac8b1ea833360c625
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-03 Thread John Rose
On Saturday, December 02, 2023, at 11:25 PM, Giovanni Santostasi wrote:
> I cannot believe this group is full of dystopians. Dystopia never happen, at 
> least not for long or globally. They are always localized in time or space. 
> Hollywood is full of dystopia because they lack imagination. 

This group is not full of dystopians, don't smear.

Why do you think dystopias haven't happened, like nukes not killing us? Nuclear 
explosions make great art, why be such a doomer! Enjoy the sunshine.

Not. We need to develop plans because this thing is just getting started.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M778b5a27ab9f1c1a1e65145d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-02 Thread John Rose
People need to understand the significance of this global mindscrew. And 
ChatGPT is blue-pilled on the shots, as if anyone expected differently.

What is absolutely amazing is that Steve Kirsch wasn't able to speak at the MIT 
auditorium named after him, since he was labeled a misinformation superspreader, 
until it was arranged by truth-seeking and freedom-loving undergrads...

https://rumble.com/v3yovx4-vsrf-live-104-exclusive-mit-speech-by-steve-kirsch.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M92f2f141ecb6d16a44d51d85
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-04 Thread John Rose
On Sunday, December 03, 2023, at 10:00 AM, Matt Mahoney wrote:
> I don't mean to sound dystopian.


OK, let me present this a bit differently.

THIS MUTHERFUCKER WAS DESIGNED TO KILL YOU

Mkay?

In a nice way, not using gas or guns or bombs. It was a trial balloon developed 
over several decades and released to see how it would go. The shot, that is; 
covid's purpose was to facilitate the shots. It went quite well with little 
resistance. It took out 12 to 17 million lives according to conservative ACM 
estimates. I've seen other estimates much higher, with the vax injuries in the 
hundreds of millions, not to mention natality rates, disabilities and the yet to 
be made dead.

Now you might ask, what’s all this got to do with AGI? Well let’s call it AI 
for now to obfuscate and not give AGI a bad name.

Two things: This weaponry is getting further honed by AI, and, AI can fight AI.

The scope is quite large and difficult to maintain a comprehensive focus on, as 
it extends into various realms. As well, most people are still playing catch-up 
by just proving and acknowledging that it actually maims and kills, versus what 
it is all about. For example, the Philippines gov't has just voted on 
investigating what happened to those surplus 300+ thousand dead people from a 
couple of years ago.

To me, tens of millions dead with many more injuries and mortality and natality 
plunging are some red flags and cause for concern.

You could say it was human driven by the deep state or transnational elites, or 
aliens or whatever, but it could be AI. And it is/was definitely AI assisted and 
increasingly more so… so fighting this will require AI/AGI and/or other 
technologies yet to be provided. And if this is merely Satan Klaus with 
the WEF and Kill Gates they will be taken care of using other mechanisms. But 
if it is AI, some unique skills may be required to deal with it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-M4ada06808870efab3a89b104
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: By fire or by ice.

2023-12-08 Thread John Rose
On Wednesday, December 06, 2023, at 1:52 PM, Alan Grimes wrote:
> Whether it is even possible for the fed to print that much money is an 
> open question. I want to be on the other side of the financial collapse 
> (fiscal armageddon) as soon as possible. Right now we are just waiting 
> to see whether the old system will die by fire or by ice... =\

For an AGI to model the global financial system accurately it would have to 
understand how it REALLY works. As in a “Creature from Jekyll Island” model, 
the Titanic, etc. How predictions can be made, for example if the price of oil 
falls below a particular range then an “event” happens like a random missile 
gets fired or a terrorist attack occurs or there is an “accident”. And if DXY + 
10Y rate rise above particular thresholds then central banks secretly start 
buying debt and large caps. And how inflation is really managed since inflation 
is the control mechanism where value is slowly extracted from the system, as in 
a trickle-up economics… and how precious metals are suppressed with naked 
shorts to anchor the whole thing and how crypto markets are manipulated… etc. 
etc..

There is still value to be extracted from the existing system so an illusion of 
stability is being maintained.

At the same time, war is getting tweaked up as inflation smolders and awareness 
of the kill-shots rises. The scamdemic was demand destruction with a liquidity 
flood, and the wars are massive money laundering.

Their plan is to create distress where the public welcomes CBDC, probably with 
free food credits as food scarcity increases ("just start using this app"), with 
social credit scoring. "They" being "the cabal" and friends.

If GloboCap is an unfriendly AGI then it is a type of massive multiagent 
system with human agents, and fiat is a protocol. But the fiat is not one-to-one 
with commodities, whereas the BRICS are looking at one-to-one; that won't 
happen, though, since debt creation might actually be a required mechanism in a 
monetary system, with its corruptibility a strong catalyst...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdab60d6adcb6250c-M23343c8287a07b1a3ba6e5e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2023-12-07 Thread John Rose
On Wednesday, December 06, 2023, at 12:50 PM, James Bowery wrote:
> Please note the, uh, popularity of the notion that there is no free will.  
> Also note Matt's prior comment on recursive self improvement having started 
> with primitive technology.  
> 
> From this "popular" perspective, there is no *principled* reason to view "AGI 
> Safety" as distinct from the de facto utility function guiding decisions at a 
> global level.

Oh, that's bad. Any sort of semblance of free will is a threat. These far-right 
extremists will be hunted down and investigated as potential harborers of 
testosterone.

It's flawed thinking that if everyone speaks the same language, for example, or 
if there is just one world government, everything will be better and more 
efficient. The homogenization becomes unbearable. It might be entropy at work, 
squeezing out excess complexity and implementing a control framework onto human 
negentropic slave resources.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T206dd0e37a9e7407-Mc393cedb2b870e339c30636b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: "The Optimal Choice of Hypothesis Is the Weakest, Not the Shortest"

2024-01-29 Thread John Rose
Weakness reminds me of losylossslessyzygy (sic. lossy lossless syzygy)… hmm… I 
wonder if it’s related.

Cardinality (Description Length) versus cardinality of its extension (Weakness)…
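
A toy illustration of the contrast (my own construction, not code from the 
paper): each hypothesis is written as the set of strings it permits, so the 
description length is proxied by the size of the hypothesis's own statement and 
the weakness is the cardinality of its extension.

observed = {"00", "01"}   # data every candidate hypothesis must cover

# hypothesis -> its extension (the set of strings it permits)
hypotheses = {
    "starts with 0":    {"00", "01"},
    "any 2-bit string": {"00", "01", "10", "11"},
    "exactly 00 or 01": {"00", "01"},
}

# keep only hypotheses consistent with the observations
consistent = {h: ext for h, ext in hypotheses.items() if observed <= ext}

shortest = min(consistent, key=len)                            # smallest description
weakest = max(consistent, key=lambda h: len(consistent[h]))    # largest extension

print("shortest description:", shortest)
print("weakest (largest extension):", weakest)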

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T78fb8d90b9a51bf0-M8ce9daaebe2365093ca8c16f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
> What assumption is that?

The assumption that alpha is unitless. Yes, the units cancel out, but the simple 
process of cancelling units seems incomplete.

Many of these constants though are re-representations of each other. How many 
constants does everything boil down to, I wonder...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M1b3cc8ce2f8e3f5ba2c77697
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Iran <> Israel, can AGI zealots do anything?

2024-04-18 Thread John Rose
On Wednesday, April 17, 2024, at 11:17 AM, Alan Grimes wrote:
> It's a stage play. I think Iran is either a puppet regime or living 
> under blackmail. The entire thing was done to cover up / distract from / 
> give an excuse for the collapse of the banking system. Simultaneously, 
> the market riggers ran 1.4 billion ounces of silver derivatives through 
> the market to keep the price from rising above $30/oz.

Some of these Macro Event Probabilities look like:

P(E) = f(DXY, 10Y, price of crude, …)

Middle East charades are also a distraction from the WHO Pandemic Treaty which 
the globalists are attempting to jam through despite protests in various 
countries. They really want that jab juice in as many as possible... in as many 
intelligent human agents as possible... programmable agents... wirelessly 
programmable agents... like hordes of remote controlled NPC Wojaks. Oy ve!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-Me44a6b55e65ffe990a4ddca5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-18 Thread John Rose
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote:
> Matt's use of Planck units in his example does seem to support your 
> suspicion.  Moreover, David McGoveran's Ordering Operator Calculus approach 
> to the proton/electron mass ratio (based on just the first 3 of the 4 levels 
> of the CH) does treat those pure/dimensionless numbers as possessing a 
> physical dimension -- mass IIRC.

Different alphas across different hypothetical universes might affect the 
overall intelligence of each universe. Perhaps affecting the rate at which 
intelligence increases. I don’t buy what some say though that if alpha wasn’t 
perfectly tuned to what it is now then intelligent life wouldn’t exist. It 
might exist but in a different form. Unless there is some particular strong 
physical coupling.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M9f71087c9ae68ae4aae0896e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-21 Thread John Rose
If the fine structure constant was tunable across different hypothetical 
universes how would that affect the overall intelligence of each universe? Dive 
into that rabbit hole, express and/or algorithmicize the intelligence of a 
universe. There are several potential ways to do that, some of which offer 
rather curious implications.

Apparently though alpha may vary significantly within our own universe... 
according to some unsubstantiated articles I've read.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M292d0a064091603346d3095e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-11 Thread John Rose
> "Abstract Fundamental physical constants need not be constant, neither 
> spatially nor temporally."

Imagine if we could somehow remote-view across multiple multiverse instances 
simultaneously, in various non-deterministic states, and perceive how the 
universe structure varies across different alphas. Do the different universe 
alphas coalesce to a similar value temporally? I think they may get stuck at 
different stabilization states and have non-continuous variation across 
universes. But if they trended to the same value, that would tell you something 
about a core inception algorithm.

Have to read up on contemporary cosmology… I have assumed a sort of injection 
model. But the injection might really be a generative perception, as if each 
universe is generatively perceived from a consciously creative rendition. The 
different alpha structures may then give insight into any injector 
cognition model…. Kind of speculative though.

I also question though the unitless assumption.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Ma10187a154c485f1f53d8506
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Entering the frenzy.

2024-04-11 Thread John Rose
On Friday, April 05, 2024, at 6:22 PM, Alan Grimes wrote:
> It's difficult to decide whether this is actually a good investment:

Dell Precisions are very reliable IMO and the cloud is great for scaling up. 
You can script up a massive amount of compute in a cloud then turn it off when 
done.

Is consciousness itself really the resource hog? Or is it the intelligence 
side of things?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-M2cc3a9a4ed0998d782a038f2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Will SORA lead to AGI?

2024-04-11 Thread John Rose
On Thursday, April 11, 2024, at 12:27 AM, immortal.discoveries wrote:
> Anyway, very interesting thoughts I share here maybe? Hmm back to the 
> question, do we need video AI? Well, AGI is a good exact matcher if you get 
> me :):), so if it is going to think about how to improve AGI in video format 
> style, it would dream of the matching happening. But it can too in text form 
> hmm.

It's machine <=> human visual feedback on progress towards AGI: what we can 
imagine versus what is currently being done in the visual department. There are 
things that we need to see visually… but it's difficult to monitor it all. 
Visual needs to be coupled with experiential for reliable knowledge generation 
from uncertainty. Though it's a small window really...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ac1f9ef84312f96-Mb4e474e1c25acd7c50fdf4a9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-03 Thread John Rose
Expressing the intelligence of the universe is a unique case, versus say 
expressing the intelligence of an agent like a human mind. A human mind is very 
lossy versus the universe, where there is theoretically no loss. If lossy and 
lossless were a duality then the universe would be a singularity of 
lossylosslessness.

There is a strange reflective duality though, in that when one attempts to 
mathematically/algorithmically express the intelligence of the universe, the 
universe at that moment is expressing the intelligence of the agent, since the 
agent's conceptual expression is contained and created by the universe.

Whatever happened to Wissner-Gross's Causal Entropic Force I haven't heard of 
that in a while...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me821389c43b756e156ceef66
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] my AGI-2024 paper (AGI from the perspective of categorical logic and algebraic geometry)

2024-05-03 Thread John Rose
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote:
> It's not easy to prove new theorems in category theory or categorical 
> logic... though one open problem may be the formulation of fuzzy toposes.

Or perhaps a neutrosophic topos; Florentin Smarandache has written much 
interesting work in this area.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T45b9784382269087-Mdb56eae8d4bc3eeff6b6e40c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-11 Thread John Rose
Seems that software and more generalized mathematics should be discovering 
these new structures. If a system projects candidates into a test domain, 
abstracted, and wires them up for testing in a software host, how would you 
narrow the search space of potential candidates? You'd need a more general 
mathematical model that has insight into efficiency projections. And the 
abstracted software may require a somewhat open-ended generalization capability 
for testing, since the candidates would take on unknown forms.

https://arxiv.org/abs/2404.19756

Ballmer: "Developers, developers, developers, developers!"
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M8b051de20c2a71345de3edf1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-11 Thread John Rose
On Wednesday, May 08, 2024, at 6:24 PM, Keyvan M. Sadeghi wrote:
>> Perhaps we need to sort out human condition issues that stem from human 
>> consciousness?
> 
> Exactly what we should do and what needs funding, but shitheads of the world 
> be funding wars. And Altman :))

If Jeremy Griffith’s explanation is correct it would invalidate some literature 
on the subject. I would like to see rebuttals.
And if he is right then models of the development of human intelligence may be 
affected.

I do see potential issues, but it's worth entertaining to see if it "fixes" or 
alters some models. Though many individuals may not take notice, since it could 
require a deep refactoring of their worldview. But some may find comfort in 
this explanation regarding their own personal behavior, and an understanding in 
observations of such behavior.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M6fae07260db6c30dcb0a97c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Mike Gunderloy disconnected. Before the internet he did Factsheet Five which 
connected alt undergrounders. It really was an amazing publication that could 
be considered a type of pre-internet search engine with zines as websites.

https://en.wikipedia.org/wiki/Factsheet_Five

Then as the internet expanded he wrote umpteen books on Microsoft software 
technologies and blogged incessantly.

Here is his last blog apparently during Covid:

https://afreshcup.com/home/2020/10/30/double-shot-2717.html

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-M89ab7ed9b30568df607c3a4e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-13 Thread John Rose
Also, with TikTok, governments don't want the truth exposed because populations 
tend to get rebellious, so they want "unsafe" information suppressed. E.g. the 
Canadian trucker protests…. I sometimes wonder, do Canadians know that Trudeau 
is Castro's biological son? Thanks TikTok, didn't know that. And the American 
gov't really needs to have some control in TikTok urgently because big bad evil 
China is stealing people's data. Uhm, or is it that a tool like TikTok could 
result in millions of angry residents storming DC, similar to what happened in 
Sri Lanka in 2022 but on a 1000x scale? The grifters in control fear us using 
TikTok. They already have Facebook, etc.; see Taibbi and the censorship 
industrial complex. We're not smart enough to make our own decisions, so 
deepfakes will be banned except deepstate deepfakes. AI and state-embedded AGI 
are going to obscure truth more smartly.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-M5ad931d352187cbc8b6fc1c6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote:
> All neural networks are trained by some variation of adjusting anything that 
> is adjustable in the direction that reduces error. The problem with KAN alone 
> is you have a lot fewer parameters to adjust, so you need a lot more neurons 
> to represent the same function space. That's even with 2 parameters per 
> neuron, threshold level and steepness. The human brain has another 7000 
> parameters per neuron in the synaptic weights.

I bet in some of these so-called “compressor” apps that Matt always looks at, 
there is some serious NN structure tweaking going on. They're open source, 
right? Do people obfuscate the code when submitting?


Well it’s kinda obvious but transformations like this:

(Universal Approximation Theorem) => (Kolmogorov-Arnold Representation Theorem)

There’s going to be more of them.

Automating or not I’m sure researchers are on it.
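
A rough numpy sketch of the MLP-versus-KAN re-parameterization Matt describes 
above (the basis functions and sizes are my assumptions, not the paper's): an 
MLP edge carries one scalar weight with a fixed node nonlinearity, while a 
KAN-style edge carries a small learned univariate function.

import numpy as np

n_in, n_out, n_basis = 4, 3, 8
rng = np.random.default_rng(0)

# MLP layer: one weight per edge, fixed ReLU at the node
W = rng.normal(size=(n_out, n_in))
def mlp(x):
    return np.maximum(W @ x, 0.0)

# KAN-style layer: per edge, a learned 1-D function phi(x) = sum_k c_k * b_k(x)
centers = np.linspace(-1, 1, n_basis)
C = rng.normal(size=(n_out, n_in, n_basis))            # n_basis coefficients per edge
def kan(x):
    basis = np.exp(-((x[None, :] - centers[:, None]) ** 2) / 0.1)   # (n_basis, n_in)
    return np.einsum("oik,ki->o", C, basis)            # sum the edge functions per output

x = rng.uniform(-1, 1, n_in)
print("MLP params:", W.size, " KAN params:", C.size)   # 12 vs 96 here
print(mlp(x), kan(x))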
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-Md991f57050d37e51db0e68c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 12:13 AM, immortal.discoveries wrote:
> But doesn't it have to run the code to find out no?

The people who wrote the paper did some nice work on this. They laid it out 
perhaps intentionally so that doing it again with modified structures is easy 
to visualize.

A simple analysis would be to basically “tween” mathematics and graph structure 
as a vector from MLP to KAN, to open a peephole into the larger thing.

Right, a generalized software-system test host… think reflection; many 
programming languages have reflection, so you reflect off of the test structure 
as the abstraction layer into a fixed computing-resource measurement to rank 
candidates.

It’s not difficult to generate the structures but how do you find the best 
candidates to run the tester on? Perhaps a coupling with some topology of 
computational complexity classes and see which structures push easiest into 
that?… or some other method… this is probably the difficult part... unless you 
just throw massive computing power at it :)

But yes, when you start thinking about it there might be a recursion where the 
MLP/KAN’s or whatever view themselves to self modify.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M50970ab0535f6725bf2e12ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
> We don't
> know the program that computes the universe because it would require
> the entire computing power of the universe to test the program by
> running it, about 10^120 or 2^400 steps. But we do have two useful
> approximations. If we set the gravitational constant G = 0, then we
> have quantum mechanics, a complex differential wave equation whose
> solution is observers that see particles. Or if we set Planck's
> constant h = 0, then we have general relativity, a tensor field
> equation whose solution is observers that see space and time. Wolfram
> and Yudkowsky both estimate this unknown program is only a few hundred
> bits long, and I agree. It is roughly the complexity of quantum
> mechanics and relativity taken together, and roughly the minimum size
> by Occam's Razor of a multiverse where the n'th universe is run for n
> steps until we observe one that necessarily contains intelligent life.

Sounds like the KC of U, the maximum lossless compression of the universe, 
assuming infinite resources for perfect prediction. But there is a lot of 
lossylosslessness out there for imperfect prediction, or locally perfect 
lossless, near lossless, etc. That intelligence has a physical computational 
topology across spacetime where much is redundant though estimable… and 
temporally changing. I don't rule out, though, no matter how improbable, that 
there could be an infinitely powerful compressor within this universe, an 
InfiniComp. Weird stuff has been shown to be possible. We can conceive of it, 
but there may be issues with our conception, since even that is bound by limits.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M8f6799ef3b2e99f86336b4cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 6:53 PM, Matt Mahoney wrote:
> Kolmogorov proved there is no such thing as an infinitely powerful
> compressor. Not even if you have infinite computing power.

Compressing the universe is a unique case, especially being supplied with 
infinite computing power. Would the compressor ever stop? And would we be 
compressing a copy of the universe, or actually compressing the full universe as 
data, including the compressor itself? Would the compressor only run once, since 
the whole universe would potentially go with it, prohibiting another compression 
comparison or a decompression?

Assuming we are actually compressing the universe and not a copy, and there is 
no infinitely powerful compressor according to Kolmogorov, then it seems that 
the universe might still expand against the finite compressor that is being 
supplied with infinite power.

But then does the infinite power come from within the U or from outside 
somehow... hmm…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M1f1c33b606b4df64d1bdc119
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread John Rose
On Tuesday, May 07, 2024, at 9:41 PM, Keyvan M. Sadeghi wrote:
> It's because of biology. There, I said it. But it's more nuanced. Brain cells 
> are almost identical at birth. The experiences that males and females go 
> through in life, however, are societally different. And that's rooted in 
> chimpz being our forefathers, and muscular difference of males and females in 
> most species.
> 

Perhaps we need to sort out human condition issues that stem from human 
consciousness?

“Selfish, competitive and aggressive behavior is not due to savage instincts 
but to a psychologically upset state or condition.”

https://youtu.be/q-TK6_aWqGU

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Mc710fa6e6eb7fd3c2015948d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-05-07 Thread John Rose

For those genuinely interested in this particular imminent threat, here is a 
case study (long video) circulating on how western consciousness is being 
programmatically hijacked, presented by a gentleman who has been involved in and 
researching it for several decades. He describes this particular “rogue, 
unfriendly” as a cloaked remnant “KGB Hydra”. We can only speculate what it 
really is in this day and age, since the Soviet Union and KGB were officially 
dissolved in 1991, and some of us are aware of the advanced technologies that 
they were working on back then.

https://twitter.com/i/status/1779017982733107529

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M40062529b066bd7448fe50a0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote:
> To suggest that every hypothetical universe has its own alpha, makes no 
> sense, as alpha is all encompassing as it is.

You are exactly correct. There is another special case besides expressing the 
intelligence of the universe. And that is expressing the intelligence of a 
hypothetical universe at zero communication complexity... unless there is some 
unknown Gödel channel.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Me43083c2dce972b7746d22ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-07 Thread John Rose
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote:
> So when we talk about the intelligence of the universe, we can only really 
> measure its computing power, which we generally correlate with prediction 
> power as a measure of intelligence.

The universe's overall prediction power should increase, for example with the 
rise of intelligent civilizations among galaxies, though physical entropy is 
increasingly generated in the universe's environment. All these prediction 
powers would increase unevenly, though they would become increasingly networked 
via interstellar communication. A prediction-power apex would be different from 
a sum; it emerges from biological negentropy and then from synthetic AGI, but 
physical prediction "power" across the universe implies a sum versus an apex… 
if one civilization's AGI has more prediction capacity or potential.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-M00d6486e8f5ef51067361ff8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
I don't like beating this drum, but this has to be studied in relation to 
unfriendly AGI, and the WHO pandemic treaty, which has to be stopped, is coming 
up in May. Here is a passionate interview following Dr. Chris Shoemaker's 
presentation in the US Congress, worth watching for a summary of the event and 
the current mainstream status. It's not too technical.

My hypothesis still stands IMO… I do want it to fail. Chromosomes 9 and 12 are 
modified, why? Tumor-suppression-related chromosomes? I don't know... The 
interview doesn't cover the graphene oxide, quantum dots, etc. and the 
radiation-related mechanisms, which are also potentially mind-blowing.

Thank you Elon for fixing Twitter without which we were in a very, very dark 
place.

https://twitter.com/i/status/1770522686210343392

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M1bbfbd0c1261f7e85119dff4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-21 Thread John Rose
On Thursday, March 21, 2024, at 11:41 AM, Keyvan M. Sadeghi wrote:
> Worship stars, not humans 

The censorship of the last few years was like an eclipse.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Mf2b0a65e2f58709ef10adfec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-24 Thread John Rose
On Saturday, March 23, 2024, at 6:10 PM, Matt Mahoney wrote:
> But I wonder how we will respond to existential threats in the future, like 
> genetically engineered pathogens or self replicating nanotechnology. The 
> vaccine was the one bright spot in our mostly bungled response to covid-19. 
> We have never before developed a vaccine to a novel disease this fast, just 
> over a year from identifying the virus to widespread distribution.

This is the future; we have a live one to study, but it requires regurgitating 
any blue-pills :)

The jab was decades in development and the disease contains patented genetic 
sequences.

Documentary on how they blackholed hydroxy (among others) to force your 
chromosomal modifications: 
 https://twitter.com/i/status/1768799083660231129

Unfriendly AGI is one thing but a rogue unfriendly is another so a diagnosis is 
necessary.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M9a0ac94d8b6a4d1cd960cb3e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-22 Thread John Rose
On Thursday, March 21, 2024, at 1:07 PM, James Bowery wrote:
> Musk has set a trap far worse than censorship.

I wasn’t really talking about Musk OK mutants? Though he had the cojones to do 
something big about the censorship and opened up a temporary window basically 
by acquiring Twitter.

A question is who or what is behind the curtain? Those in the know that leak 
data seem to get snuffed…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Mb4e8e4edcd88a6b1bb9e9667
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:22 AM, Keyvan M. Sadeghi wrote:
> With all due respect John, thinking an AI that has digested all human 
> knowledge, then goes on to kill us, is fucking delusional 

Why is that delusional? It may be a logical decision for the AI to make an 
attempt to save the planet from natural destruction.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M4379121c7778c79b8be00581
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Thursday, March 28, 2024, at 5:55 PM, Keyvan M. Sadeghi wrote:
> I'm not sure the granularity of feedback mechanism is the problem. I think 
> the problem lies in us not knowing if we're looping or contributing to the 
> future. This thread is a perfect example of how great minds can loop forever.

Contributing to the future might mean figuring out ways to have AI stop killing 
us. An issue is that living people need to do this; the dead ones only leave 
memories. Many scientists have now proven that the mRNA jab system is a death 
machine, but people keep getting zapped. That is a non-forever loop.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Me755cab585f5cb9f665c8b0c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:33 AM, Keyvan M. Sadeghi wrote:
> For the same reason that we, humans, don't kill dogs to save the planet.

Exactly. If people can’t snuff Wuffy to save the planet how could they decide 
to kill off a few billion useless eaters? Although central banks do fuel both 
sides of wars for reasons that include population modifications across 
multi-decade currency cycles.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Mfe60caa2e1c211ec6f07c236
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
> The fine structure constant, in conjunction with the triple-alpha process 
> could be coded and managed via AI. Computational code. 

Imagine the government in its profound wisdom declared that the fine structure 
constant needed to be modified and anyone that didn’t follow the new rule would 
be whisked away and have their social media accounts cancelled. I know that 
could never, ever happen *wink* but entertain the possibility. What would be 
fixed and what would break?

It's true, governments collude to modify physical constants, for example time: 
daylight savings time, adding seconds to years, shifting calendars, for example, 
from 13 months to 12. Some say this intentionally caused a natural human 
cyclic decoupling, rendering turtle shell calendars obsolete and thus retarding 
turtle effigy consciousness.

But you want to physically modify the constant with AI in a nuclear lab. That’s 
a long shot to emerge an AGI.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Mac063c8e597998109b576ec9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies 
wrote:
> Who said anything about modifying the fine structure constant? I used the 
> terms: "coded and managed".
>  
>  I can see there's no serious interest here to take a fresh look at doable 
> AGI. Best to then leave it there.

I can’t get it out of my head now, researching, asking ChatGPT what it thinks. 
Kinda makes you wonder.

They say people become obsessed with it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M16bd0477206ddf4e2ecaa55c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-30 Thread John Rose
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote:
> Prediction measures intelligence. Compression measures prediction.

Can you reorient the concept of time from prediction? If time is on an axis and 
you reorient the time perspective, is there something like energy complexity?

The reason I ask is that I was mentally attempting to eliminate time from 
thought and energy complexity came up... versus, say, a physical power 
complexity. Or is this a non sequitur?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M960152aadc5494156052b57d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
On Monday, April 01, 2024, at 3:24 PM, Matt Mahoney wrote:
> Tonini doesn't even give a precise formula for what he calls phi, a measure 
> of consciousness, in spite of all the math in his papers. Under reasonable 
> interpretations of his hand wavy arguments, it gives absurd results. For 
> example, error correcting codes or parity functions have a high level of 
> consciousness. Scott Aaronson has more to say about this. 
> https://scottaaronson.blog/?p=1799

Yes, I remember Aaronson completely tearing up IIT, redoing it several ways, 
and handing it back to him. There is a video too I think. A prospective 
conscious model should need to pass the Aaronson test.

Besides the simplistic one-to-one mapping of bits to bits a question might be – 
describe an algorithm that ranks the consciousness of some of the permutations 
of a string. It would be interesting to see what various consciousness models 
say about that, if anything.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M46e52b8511bf1d7bd31a856c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-02 Thread John Rose
Or perhaps better, describe an algorithm that ranks the consciousness of some 
of the integers in [0..N]. There may be a stipulation that the integers be 
represented as atomic states all unobserved or all observed once… or allow ≥ 0 
observations for all and see what various theories say.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-Ma62fd8f51ea4c6b7c92a2ee7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-04-01 Thread John Rose
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote:
> Musical tuning and resonant conspiracy? Cooincidently, I spent some time 
> researching that just today. Seems, while tuning of instruments is a matter 
> of personal taste (e.g., Verdi tuning)  there's no real merit in the pitch of 
> a musical instrument affecting humankind, or the cosmos. 
> 
> Having said that, resonance is a profound study and well-worth pursuing. 
> Consider how the JWST can "see" way beyond its technical capabilities. 

Conspiracy theory? On it  :)

https://www.youtube.com/watch?v=BQCbjS4xOfs

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M04a527cf59256b52a4968c57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-01 Thread John Rose
On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote:
> The problem with this explanation is that it says that all systems with 
> memory are conscious. A human with 10^9 bits of long term memory is a billion 
> times more conscious than a light switch. Is this definition really useful?

A scientific panpsychist might say that a broken one-state light switch has 
consciousness. I agree it would be useful to have a mathematical formula that 
then shows how much more conscious a human mind is than a working or broken 
light switch. I still haven't read Tononi's computations since I don't want them 
to influence my model one way or another, but IIT may have that formula? In the 
model you expressed you assume a one-bit-to-one-bit scaling, which may be a gross 
estimate, but there are other factors.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M9c1f29e200e462ef29fbfcdf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Entering the frenzy.

2024-04-05 Thread John Rose
On Friday, April 05, 2024, at 12:21 AM, Alan Grimes wrote:
> So let me tell you about the venture I want to start. I would like to 
> put together a lab / research venture to sprint to achieve machine 
> consciousness. I think there is enough tech available these days that I 
> think there's enough tech to try out my theory of consciousness. For the 
> sake of completing the project, all discussion is prohibited. If you 
> mention the Hard Problem, then you're off the project, no discussion! I 
> want to actually do this, go ruminated hard problems for the next ten 
> millinea, I don't care. You are allowed to argue with me but I have 
> absolute authority to shut down any argument with prejudice.

The “Hard Problem of Consciousness” term is similar to “Conspiracy Theory” and 
is probably an unintentional psycholinguistic tool for memetic hegemony. That 
technique can be utilized… we need to take back the narrative from authorities 
since people are being led astray. Not by Chalmers, who is relatively harmless 
in doing that, but by other newly defined existentially democratic structural 
entities, moving from individual democratic autonomy to institutional 
democratic autonomy, i.e. the institution versus the individual in the newly 
defined control narrative.

Menlo Park VCs are connected to banks like the failed Silicon Valley Bank, where 
the capital is set up to flow freely on insider agreements with the Federal 
Reserve. If you highlight your project with ESG, DEI, etc. you will get favored 
status and the money flow is relatively unlimited. Until more banks fail, which 
is coming soon, as there is a massive move towards bank centralization. Also, we 
are moving towards a war economy as the last vestiges of value in the currency 
are getting expended in defense of itself, which makes one wonder if intelligent 
war machines have some value in having a consciousness.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc72db78d6b880c68-Ma0613726134d24f173b3fe64
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-04-04 Thread John Rose
I was just thinking here that the ordering of the consciousness of permutations 
of strings is related to their universal pattern frequency, so one would need 
algorithms to represent that... 
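
As a toy illustration only, and definitely not a consciousness metric: here is a 
minimal sketch of one way to rank permutations of a string by a crude 
pattern-frequency proxy, namely the length of their zlib-compressed encoding. The 
choice of zlib, the sample limit, and the example string are all assumptions made 
purely for illustration.

import itertools
import zlib

# Toy sketch: rank permutations of a string by compressed length, a crude
# stand-in for "universal pattern frequency" (shorter = more regular).
def rank_permutations(s, limit=24):
    perms = set(itertools.islice(itertools.permutations(s), limit))
    scored = []
    for p in perms:
        text = "".join(p)
        score = len(zlib.compress(text.encode("utf-8"), 9))
        scored.append((score, text))
    scored.sort()
    return scored

if __name__ == "__main__":
    for score, text in rank_permutations("aaabbbccc"):
        print(score, text)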
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M67fae77e54378c18f8497550
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-04-04 Thread John Rose
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote:
> * and I realize this is getting pretty far removed from anything relevant to 
> practical "AGI" except insofar as the richest man in the world (last I heard) 
> was the guy who wants to use it to discover what makes "the simulation" tick 
> (xAI) and he's the guy who founded OpenAI, etc.

This is VERY interesting, James, and a useful exercise; it does all relate. We 
might be able to find some answers by looking at the code you are pasting. I 
haven't seen it presented in this way; it's sort of like reworking a macro/micro 
view. Many people pursuing AGI are approaching "the simulation" source code 
either knownst or unbeknownst to themselves. As a youngster I realized that the 
key to understanding everything was in the relationship between the big and the 
small, and that still seems to be true.
 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Md441902c49d7fc2595fdacdf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-28 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote:
> In my 2008 distributed AGI proposal (
> https://mattmahoney.net/agi2.html ) I described a hostile peer to peer
> network where information has negative value and people (and AI)
> compete for attention. My focus was on distributing storage and
> computation in a scalable way, roughly O(n log n).

By waiting all this time, many technical issues have been sorted out in forkable 
tools and technologies for building something like your CMR. I was actually 
thinking about it a few months ago regarding a DeSci system for these vax 
issues, since I have settled on an implementable model of consciousness which 
provides a virtual fabric and generally explains an intelligent system like a 
CMR. I mean, CMR could be extended into a panpsychist world; wouldn't that be 
exciting?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M2eae32fa79678c15892395f7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote:
> If yes, what results have you to show for it?

There's no need to disparage the generous contributions by some highly valued 
and intelligent individuals on this list. I've obtained invaluable knowledge 
and insight from these discussions, even though I may not sound like it, having 
been distracted the last few years by the contemporary state of affairs. And some 
of us choose not to disclose certain things for various reasons.

It's an email list, very asynchronous, which has its benefits. What are you 
expecting? AI go Foom? What results have you to show? We're all ears (eyes).

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M94b5e733c65bbf20218206e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote:
> One cannot disparage that which already makes no difference either way. 
> John's well, all about John, as can be expected.

What?? LOL listen to you 

On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote:
> I've completed work and am still researching. Latest contribution is my 
> theory as to the "where from?" and "why?" of the fine structure constant. 
> Can't imagine achieving AGI without it. Can you? 

Where does it come from then? What’s the story?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Ma783b61c757709857a923c99
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread John Rose
On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote:
> At least with an AI-enabled fine structure constant, we could've tried 
> repopulating selectively and perhaps reversed a lot of the damage we caused 
> Earth.

The idea of AI-enabling the fine-structure constant is thought-provoking but... 
how? Seems like a far-out concept. Is it theoretically and practicably 
changeable? Perhaps AI-enable the perception of it?

As an aside, look at this beautiful book I found with that as its title:
https://shorturl.at/hJRUY

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-Ma224f6d8bfd11b3d8aa0ea2f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote:
> On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote:
>> Also I have been eating foods containing DNA every day of my life without 
>> any bad effects.
> 
> Why would that have bad effects?

That used to not be an issue. Now they are mRNA jabbing farm animals and 
putting nano dust in the food. The control freaks think they have the right to 
see out of your eyes… and you’re just a rented meatsuit.

We need to understand what this potential rogue unfriendly looks like. It 
started out embedded with dumbed down humans mooch leeching on it…. like a big 
queen ant.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M799cc6d0a090f0c1e8d83050
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread John Rose
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
> Alpha won't directly result in AGI, but it probably did result in all 
> intelligence on Earth, and would definitely resolve the power issues plaguing 
> AGI (and much more), especially as Moore's Law may be stalling, and 
> Kurzweil's singularity with it. 

There are many ways to potentially modify these physical constants. Most, I 
think, have to deal with perception, but perception is generation. Are they 
really constants? For all practical purposes, yes… well, not all apparently, and 
calling them constants may be a form of bias.

There is reality and perception of reality. We know perception changes, for 
example Newtonian => Relativistic. There were measurements that didn’t add up. 
Relativistic now doesn’t add up. Engineering lags physics often…

I do believe that we can modify more than just the perception of reality 
outside of spacetime and have thought about it somewhat, it would be like 
REALLY hacking the matrix. But something tells me not to go there as it could 
be extremely dangerous. I’m sure some people are going there.

You would have to be more specific on what modification (AI enabling) of the 
fine structure constant you are referring to.

There is this interesting thing I see once in a while (not sure if it's 
related) but have never pursued, where people say that some standard music 
frequency was slightly modified by the Rockefellers for some reason, like adding 
a slight dissonance or something… I do know they modified the medical system to 
be more predatory and monopolistic in the early 1900s and that led to where we 
are now.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5c24d9444d9d9cda-M5b91bea0fa77902a0b0bc7fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
> We have a fairly good understanding of biological self replicators and
> how to prime the immune systems of humans and farm animals to fight
> them. But how to fight misinformation?

Regarding the kill-shots, you emphasize reproduction versus peer review, 
especially when journals such as The Lancet and the NE Journal of Medicine are 
now captured by pharma. And ignore manipulated media like CNN, etc., including, 
unfortunately, information from your own federal government. 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M50126dd1549d1b40f2990b80
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
> Flat Earthers, including the majority who secretly know the world is
> round, have a more important message. How do you know what is true?

We need to emphasize hard science versus intergenerational pseudo-religious 
belief systems that are accepted as de facto truth. For example, vaccines are 
good for you and won't modify your DNA :)

https://twitter.com/i/status/1738303046965145848
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M66e2cfff4f8461d3f15cd897
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI will kill us

2024-03-27 Thread John Rose
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote:
> I predict a return of smallpox and polio because people won't get vaccinated. 
> We have already seen it happen with measles.

I think it's a much higher priority to figure out what's with that non-human DNA 
integrated into chromosomes 9 and 12 for millions of people. Measles and a rare 
smallpox case we can address later… Is it to unsuppress tumors for depop 
purposes? I can understand that. And there is an explosion of turbo cancers 
across many countries now, especially in young people. BUT... I suspect more than 
that and potentially other "features". This must be analyzed ASAP.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T991e2940641e8052-M2b017a488fcbbff4f4b81c65
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-24 Thread John Rose
On Wednesday, May 15, 2024, at 12:28 AM, Matt Mahoney wrote:
> The top entry on the large text benchmark, nncp, uses a transformer. It is 
> closed source but there is a paper describing the algorithm. It doesn't 
> qualify for the Hutter prize because it takes 3 days to compress 1 GB on a 
> GPU with 10K cores.

If we have MLP, KAN, NN1, NN2, etc., the discoverer could perhaps use the 
Principle of Least Action on the mathematical system to find/generate the 
structure that minimizes bit flips to produce similar results. It would be the 
laziest structure… or lazier structures, since the laziest might not be provable.
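
As a minimal sketch only, under invented assumptions (made-up candidate names, 
accuracy numbers, and an estimated-bit-operations proxy standing in for "bit 
flips"), here is one way such a selection could look: among candidates that 
reach comparable results, pick the one that does the least work.

# Minimal sketch, not the method from the thread: among candidate structures
# that reach comparable accuracy, pick the "laziest" one by a crude action
# proxy. The proxy here is estimated bit operations per forward pass; the
# candidates, their costs, and the tolerance are all illustrative assumptions.

def laziest_structure(candidates, accuracy_tolerance=0.01):
    """candidates: list of (name, accuracy, estimated_bit_ops) tuples."""
    best_accuracy = max(acc for _, acc, _ in candidates)
    eligible = [c for c in candidates if c[1] >= best_accuracy - accuracy_tolerance]
    return min(eligible, key=lambda c: c[2])  # least "work" among similar results

candidates = [
    ("MLP", 0.97, 5.0e9),   # hypothetical numbers
    ("KAN", 0.96, 1.2e9),
    ("NN1", 0.93, 0.8e9),
]
print(laziest_structure(candidates))  # expected: the "KAN" entry (least work at comparable accuracy)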

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M790672f3424e5b9a96e27236
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread John Rose
On Saturday, May 18, 2024, at 6:53 PM, Matt Mahoney wrote:
> Surely you are aware of the 100% failure rate of symbolic AI over the last 70 
> years? It should work in theory, but we have a long history of 
> underestimating the cost, lured by the early false success of covering half 
> of the cases with just a few hundred rules.
> 

I view LLMs as systems within symbolic systems. Why? Simply that we exist in a 
spacetime environment and ALL COMMUNICATION is symbolic. And sub-symbolic 
representation is required for computation. All bits are symbols based on 
probabilities. Then, as LLMs become more intelligent, the physical power 
consumption required to produce similar results will decrease as their symbolic 
networks grow and optimize.

Could be wrong but it makes sense to me… saying everything is symbolic 
eliminates the argument. I know it's lazy but that's often how developers look 
at things in order to code them up :) Laziness is a form of optimization... 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M2252941b1c7cca5b59b32c1f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Friday, May 17, 2024, at 10:07 AM, Sun Tzu InfoDragon wrote:
>> the AI just really a regurgitation engine that smooths everything over and 
>> appears smart.
> 
> No you!

I agree. Humans are like memetic switches, information repeaters, reservoirs. 
The intelligence is in the collective, we’re just individual host nodes. Though 
some originate intelligence more than others.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M5c3feeec4fa21dc6b3116830
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-17 Thread John Rose
On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote:
> What should symbolic approach include to entirely replace neural networks 
> approach in creating true AI?

Symbology will compress NN monstrosities… right? Or, I should say, increasing 
efficiency via emerging symbolic activity for complexity reduction. Then less 
NN will be required since the “intelligence” will have been formed. But we still 
need sensory…

There is much room for innovation in mathematics… some of us have been working 
on that for a while.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M5b45da5fff085a720d8ea765
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
> Yet another demonstration of how Alan Turing poisoned the future with his 
> damnable "test" that places mimicry of humans over truth.

This unintentional result of Turing's idea is an intentional component of some 
religions. The elder wise men wanted to retain control over science as science 
spun off from religion, since they knew humans might become irrelevant. So they 
attempted to control the future and slow things down, thus Galileo got burned. 
Perhaps they saw it as a small sacrifice for the larger whole.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Ma35aaeb8de27a4ee42f6e993
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote:
> Does everyone agree this is AGI?

Ya is the AI just really a regurgitation engine that smooths everything over 
and appears smart. Kinda like a p-zombie, poke it, prod it, sounds generally 
intelligent!  But… artificial is what everyone is going for seems like. Is 
there a difference?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Ma141cb8a667972f0df709a6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Monday, May 27, 2024, at 6:58 PM, Keyvan M. Sadeghi wrote:
> Good thing is some productive chat happens outside this forum:
> 
> https://x.com/ylecun/status/1794998977105981950

Smearing those who are concerned about particular AI risks by pooling them into a 
prejudged category entitled “Doomers” is not really being serious. It's similar 
to smearing those who scrutinize and reject particular medical injections as 
“anti-vaxxers” or anti-science when they are really pro-science.

AI will further embed into existing systems and will be used to extract more 
value from all of us in ways we can’t even imagine. We are farm animals but the 
longer we are kept happy, oblivious and indoctrinated, the more value will be 
extracted. When there is little value left, we will be culled. It’s really that 
simple. BTW H5N1 mRNA incoming 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mf0b8b619f7e13adf152bd1d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread John Rose
On Tuesday, May 21, 2024, at 10:34 PM, Rob Freeman wrote:
> Unless I've missed something in that presentation. Is there anywhere
> in the hour long presentation where they address a decoupling of
> category from pattern, and the implications of this for novelty of
> structure?

I didn't watch the video, but isn't this just morphisms and functors so you can 
map ML between knowledge domains? Some may need to be fuzzy, and the best 
structure I've found is Smarandache's neutrosophic... So a generalized 
intelligence will manage sets of various morphisms across N domains. For 
example, if an AI that knows how to drive a car attempts to build a birdhouse, 
it takes a small subset of morphisms between the two but grows more towards the 
birdhouse. As it attempts to build the birdhouse there actually may be some 
morphismic structure that applies to driving a car, but most will be utilized and 
grow one way… N morphisms, for example epi, mono, homo, homeo, endo, auto, zero, 
etc., and most obviously iso. Another mapping, from car driving to motorcycle 
driving, would have more utilizable morphisms… like steering wheel to 
handlebars… there is some symmetry mapping between group operations but they 
are not fully iso. The pattern recognition is morphism recognition, and novelty 
is created from mathematical structure manipulation across knowledge domains. 
This works very well when building new molecules since there are tight, almost 
lossless (IOW iso) morphismic relationships.
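
As a toy sketch only, with invented mini-domains and relations (none of this 
comes from the video or the thread), here is one way to check how much structure 
a candidate concept mapping preserves between two domains, in the spirit of the 
steering-wheel-to-handlebars example:

# Toy sketch under heavy assumptions: domains as tiny sets of labeled relations,
# and a "morphism" as a concept mapping scored by how many relations it carries
# over intact. The domains, relations, and mapping are invented illustrations.

CAR = {("driver", "controls", "steering_wheel"),
       ("driver", "controls", "throttle"),
       ("steering_wheel", "changes", "direction")}

MOTORCYCLE = {("rider", "controls", "handlebars"),
              ("rider", "controls", "throttle"),
              ("handlebars", "changes", "direction")}

def preserved_relations(mapping, source, target):
    """Count source relations whose image under the mapping exists in the target."""
    count = 0
    for a, rel, b in source:
        if a in mapping and b in mapping and (mapping[a], rel, mapping[b]) in target:
            count += 1
    return count

mapping = {"driver": "rider", "steering_wheel": "handlebars",
           "throttle": "throttle", "direction": "direction"}
print(preserved_relations(mapping, CAR, MOTORCYCLE))  # 3: all car relations carry over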

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Me455a509be8e5e3671c3b5e0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-29 Thread John Rose
On Wednesday, May 29, 2024, at 3:56 PM, Keyvan M. Sadeghi wrote:
> Judging the future of AGI (not distant, 5 years), with our current premature 
> brains is a joke. Worse, it's an unholy/profitable business for Sam Altmans / 
> Eric Schmidts / Elon Musks of the world.

I was referring to extracting value meaning real value verses a nominal value. 
You can predict the future using economic cycles over hundreds to thousands of 
years. Also, human vices and virtues drive similar actions over time. People 
exploit people with new tools and technologies and the technologies take on a 
life of their own. Thinking AGI is going to instantly change all that for the 
good, as in some immanentized eschaton, is naïve. Are there laws that protect us 
if it goes wrong? I keep hearing now about nano dust already in the food and 
have seen recent congressional hearings on weapons being used that modify human 
minds remotely. These technologies are deployed and our reactions take years... 
in some cases decades. Judging by recent events we need to self-organize since 
our governments are not going to protect us. In fact, it’s going in the 
direction of overthrowing governments likely leading to new governmental 
structures. Big tech is already embedded and fused with big gov't. But the real 
government is the central banks. Central bank behavior is another input into 
the predictor unless AGI makes central banks somehow magically go away :) 
That is also naïve thinking IMO.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Mf9e859a8fd420b8c363401cd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf

I know, I know, that we could construct a test that breaks the p-zombie barrier. 
Using text alone though? Maybe not. Unless we could somehow make our brains 
not serialize language but simultaneously multi-stream symbols... gotta be a 
way :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Madd96d99e30a08326350c050
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 2:33 PM, Mike Archbold wrote:
> Now time for the usual goal post movers

A few years ago it would have been a big thing, though I remember chatbots from 
the BBS days in the early 90s that were pretty convincing. Some of those bots 
were hybrids, part human and part bot, so one person could chat with many people 
simultaneously and the bot would fill in.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M65080914031e453816a81215
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Saturday, June 01, 2024, at 7:03 PM, immortal.discoveries wrote:
> I love how a thread I started ends up with Matt and Jim and others having a 
> conversation again lol.

Tame the butterfly effect. Just imagine you switch a couple words around and 
the whole world starts conversing.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Me35ef5ce96c0eb10ad393d1d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Sunday, June 02, 2024, at 10:32 AM, Keyvan M. Sadeghi wrote:
> Aka click bait? :) ;)

Jabbed?

https://www.bitchute.com/video/jB9JXD9lvK8m/

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M9d5dc91e461de7ef3f157953
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-06-02 Thread John Rose
On Sunday, June 02, 2024, at 9:04 AM, Sun Tzu InfoDragon wrote:
> The most important metric, obviously, is whether GPT can pass for a doctor on 
> the US Medical Licensing Exam by scoring the requisite 60%.

Not sure who I trust less: lawyers, medical doctors, or an AI trying to imitate 
them as is :)

Humor aside, can an AI take oaths? Otherwise it may prioritize revenue over 
wellbeing... which is a very difficult problem.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Me156069d98089b564d12302a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-15 Thread John Rose
> For those of us pursuing consciousness-based AGI this is an interesting paper 
> that gets more practical... LLM agent based but still v. interesting:
> 
> https://arxiv.org/abs/2403.20097

I meant to say that this is an exceptionally well-written paper just teeming 
with insightful research on this subject. It's definitely worth a read.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-M5c35f67aa947a63004e35e44
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Internal Time-Consciousness Machine (ITCM)

2024-06-15 Thread John Rose
For those of us pursuing consciousness-based AGI this is an interesting paper 
that gets more practical... LLM agent based but still v. interesting:

https://arxiv.org/abs/2403.20097
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-Ma35470ea48af6ebc786f0118
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread John Rose
On Sunday, June 16, 2024, at 7:09 PM, Matt Mahoney wrote:
> Not everything can be symbolized in words. I can't describe what a person 
> looks as well as showing you a picture. I can't describe what a novel 
> chemical smells like except to let you smell it. I can't tell you how to ride 
> a bicycle without you practicing.

That's the point. You emit symbols that reference the qualia that you 
experienced of what the person looks like. The symbols or words are a 
compressed, impressed representation of the original full symbol that you 
experienced in your mind. Your original qualia is your unique experience, and 
another person receives your transmission or description to reference their own 
qualia, which are also unique. It's hit or miss since you can't transmit the 
full qualia, but you can transmit more words to paint a more accurate picture 
and increase accuracy. There isn't enough bandwidth, sampling capacity, or 
instantaneousness, but you have to reference something for the purposes of 
transmitting information spatiotemporally. A "thing" is a reference which, it 
seems, can only ever be a symbol, unless the thing is the symbol itself, and 
that would be the original unique qualia. Maybe there are exceptions? Like 
numbers, but they are still references to qualia going back in history... or 
computations? They are still derivatives. And no transmission is 100% reliable 
as there is always some small chance of error AFAIK. If I'm wrong I would like 
to know.
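
As an illustration only (the encoding scheme, dimensions, error rate, and all 
numbers are assumptions, not anything from the thread), here is a minimal sketch 
of that idea: a rich internal state squeezed into a few coarse symbols and sent 
over a slightly unreliable channel, where more symbols paint a more accurate 
picture but the picture is never perfect.

import random

# Illustrative sketch: quantize part of a rich "internal state" into coarse
# symbols, flip each symbol with a small probability (channel error), then
# measure how well the receiver can reconstruct the original state.
def transmit(state, n_symbols, flip_prob=0.01, levels=8):
    symbols = []
    for value in state[:n_symbols]:
        symbol = min(levels - 1, int(value * levels))  # coarse quantization
        if random.random() < flip_prob:                # rare channel error
            symbol = random.randrange(levels)
        symbols.append(symbol)
    return symbols

def reconstruction_error(state, symbols, levels=8):
    decoded = [(s + 0.5) / levels for s in symbols]
    missing = state[len(symbols):]                     # values never transmitted
    err = sum(abs(a - b) for a, b in zip(state, decoded))
    err += sum(abs(0.5 - m) for m in missing)          # receiver guesses 0.5
    return err / len(state)

random.seed(0)
state = [random.random() for _ in range(32)]
for n in (4, 16, 32):
    # error generally shrinks as more symbols are sent, but never reaches zero
    print(n, round(reconstruction_error(state, transmit(state, n)), 3))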

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M773f13826341af38c56a4e09
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-16 Thread John Rose
On Friday, June 14, 2024, at 3:43 PM, James Bowery wrote:
>> Etter: "Thing (n., singular): anything that can be distinguished from 
>> something else."

I simply use “thing” for anything that can be symbolized, and a unique case is 
qualia, where from a first-person experiential viewpoint a qualia experiential 
symbol = the symbolized, but for transmission the qualia are fitted or 
compressed into symbol(s). So, for example, “nothing” is a thing simply because 
it can be symbolized. Is there anything that cannot be symbolized? Perhaps 
things that cannot be symbolized, what would they be? Pre-qualia? But then they 
are already symbolized since they are referenced… You could generalize it and 
say all things are ultimately derivatives of qualia, and I speculate that it is 
impossible to name one that is not. Note that in ML a perceptron or a set of 
perceptrons could be considered artificial qualia symbol emitters, and perhaps 
that's why they are named such: percept -> tron. A basic binary classifier is 
emitting an experiential symbol as a bit, and more sophisticated perceptrons 
emit higher symbol complexity such as color codes or text characters. 
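
For what it's worth, here is a minimal sketch of the plain textbook perceptron 
being referenced: a binary classifier whose output can be read as a one-bit 
emitted "symbol". The AND-style training data, learning rate, and epoch count 
are illustrative assumptions only.

# Minimal sketch of a textbook perceptron: the output is a single bit, which
# in the terms above is the "symbol" the classifier emits for each input.

def emit_bit(weights, bias, x):
    """The perceptron's one-bit output symbol for input x."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - emit_bit(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND task
weights, bias = train_perceptron(samples)
print([emit_bit(weights, bias, x) for x, _ in samples])  # expected [0, 0, 0, 1]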

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Ma5a8d7f7d388f150f9437cf3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-17 Thread John Rose
On Sunday, June 16, 2024, at 6:49 PM, Matt Mahoney wrote:
> Any LLM that passes the Turing test is conscious as far as you can tell, as 
> long as you assume that humans are conscious too. But this proves that there 
> is nothing more to consciousness than text prediction. Good prediction 
> requires a model of the world, which can be learned given enough text and 
> computing power, but can also be sped up by hard coding some basic knowledge 
> about how objects move, as the paper shows.

ITCMA is the agent; see Appendix B (below the citations) for phenomenological 
evidence for ITCM. “An agent is not just a prediction algorithm.” In a noisy, 
uncertain, and competitive environment, mere prediction does not suffice for 
agent success.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-Ma6a45321d00ecc7584ecc3e9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-06-11 Thread John Rose
Much active research on KANs is getting published lately, for example PINNs and 
DeepONets versus PIKANNs and DeepOKANs:

https://arxiv.org/abs/2406.02917

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M963e7112afea52d53ae34611
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The Implications

2024-06-18 Thread John Rose
It helps to know this:
https://www.quantamagazine.org/in-highly-connected-networks-theres-always-a-loop-20240607/

Proof:
https://arxiv.org/abs/2402.06603
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T482783e118fee37e-Md9d45a001397e86c91546407
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-18 Thread John Rose
On Tuesday, June 18, 2024, at 10:37 AM, Matt Mahoney wrote:
> The p-zombie barrier is the mental block preventing us from understanding 
> that there is no test for something that is defined as having no test for.
> https://en.wikipedia.org/wiki/Philosophical_zombie
> 

Perhaps we need to get past the definitions barrier and tear down that mental 
block. There is little consensus on the p-zombie thing... just because one is 
incapable of figuring out a way to test for something doesn't mean that there 
is no possible test. And to proclaim something as untestable in an attempt to 
prohibit searches for such tests is really just an invite for curious and 
capable individuals to develop some sort of test. 

What is hiding behind that p-zombie barrier that people want to remain hidden? 
There is something there... and it needs to be tested for.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Meeea483bba66274ae99f20a7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Internal Time-Consciousness Machine (ITCM)

2024-06-18 Thread John Rose
On Monday, June 17, 2024, at 5:07 PM, Mike Archbold wrote:
> It seems like a reasonable start  as a basis. I don't see how it relates to 
> consciousness really, except that I think they emphasize a real time aspect 
> and a flow of time which is good. 

If you read the appendix a few times you will begin to build up a mental 
CAD-like model of their computational consciousness structure. They walk you 
through quite well with various viewpoints and citations. You could pull out 
the word "consciousness" and supplant it with other words like attention 
allocation, prescience, sagacity, omniscience, etc., but the comparisons with 
our own operational consciousness help and provide a cohesive synthesis of 
various aspects as well as correlates to other disciplines.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-M3226cf5803663383df82c07f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-20 Thread John Rose
On Thursday, June 20, 2024, at 12:32 AM, immortal.discoveries wrote:
> I have a test puzzle that shows GPT-4 to be not human. It is simple enough 
> any human would know the answer. But it makes GPT-4 rattle on nonsense ex. 
> use spoon to tickle the key to come off the wall even though i said to be 
> following the physics etc. Took me weeks to refine the test. It's secret 
> test, cannot yet show it. Hopefully soon though.

Should we hide true consciousness from AI to preserve the fundamental beingness 
of ourselves?

You could use AI to build out the open-endedness of an implemented 
p-zombie. But the closer you get to the theoretical p-zombie, the more AI will 
be needed, assuming more channels are being mimicked besides plain text, until 
IMO the p-zombie becomes fully conscious versus an imposter.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M2ca5c119e7db3485d25f923e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Implications

2024-06-20 Thread John Rose
On Wednesday, June 19, 2024, at 11:36 AM, Matt Mahoney wrote:
> I give up. What are the implications?

Confidence, really, and a firm footing for further speculations in graphs, 
networks, search spaces, topologies, algebraic structures, etc. related to 
cognitive modelling. Potentially all kinds of new possibilities...

"Gur, who wasn’t involved in the work, said it establishes “a fundamental 
connection between two objects which are central to computer science.” That 
connection, he said, will lead to important applications. “I don’t know what 
form it will take. It just seems like this is bound to be useful.”"

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T482783e118fee37e-Ma934217a68eeebde9c2fb374
Delivery options: https://agi.topicbox.com/groups/agi/subscription

