On Thursday, September 19, 2024, at 7:51 AM, Matt Mahoney wrote:
> How do you think microtubules affect the neural network models that have been
> used so effectively in LLMs and vision models? Are neurons doing more than
> just a clamped sum of products and adjusting the weights and thresholds t
Aaaand we got transistors:
https://www.nature.com/articles/s41598-023-36801-1
Where are the capacitors now let's see...
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tff6648b032b59748-M72fd9297e7855b6dc576be2c
They have to ration the GPU juice because someone is going to ask it what the
Kolmogorov complexity of Ulysses is.
---
On-chip MT diodes show some interesting characteristics for building in vivo
circuitry:
https://www.pnas.org/doi/10.1073/pnas.2315992121
---
Someone may find this useful; submission deadline is in a month:
https://new.nsf.gov/funding/opportunities/mfai-mathematical-foundations-artificial-intelligence
---
A civilization’s general intelligence has a threat surface, which may include
its own government BTW. What attack surface component would be an obvious
potential target? Memory. Attack vector? Microtubule circuit exploits.
"MTs can form bioelectric circuits through their natural connections to M
BUT, with these microcoils and nano devices recorded in other studies, and since
there is little indication of how advanced the nano autoassemblies are, we
cannot rule out potential interfaces into the human brain’s quantum
communication channels. We can only theorize and glean from pu
Those coils (the original IJVTPR article moved here, page 1202):
https://mail.ijvtpr.com/index.php/IJVTPR/article/view/102/291
are similar in size to these microsolenoids that were tested for activating
neural tissue:
https://www.nature.com/articles/s41378-021-00320-8
At first I thought
Here we go, an in vivo bio-optical transceiver using Aequorin and Markov
Chains. Aequorin requires no oxygen, and it binds Ca2+ ions, which are
integral to neurotransmission:
https://www.mdpi.com/1424-8220/24/8/2584
On Sunday, August 18, 2024, at 3:00 PM, Dorian Aur wrote:
> First, one needs to attach nanomagnetic particles to Piezo1 ion channels or
> nearby cellular structures of interest (not so easy) and then to apply an
> external magnetic field🙂
Details details...
There are known ingredients, like li
Magnetogenetics for wiring into specific cells and neurons:
https://www.researchgate.net/publication/377694198_In_vivo_magnetogenetics_for_cell-type_specific_targeting_and_modulation_of_brain_circuits
So how does the output signal route from the body? There would have to be
nanotransmitters perh
On Wednesday, August 14, 2024, at 3:33 AM, immortal.discoveries wrote:
> Absolutely clueless. What does all this mean?
>
> Just going to take a wild guess: It means anything is possible ??
It means Radio Ga Ga
https://www.youtube.com/watch?v=pMYCOYjIKrE
---
The body is mostly water, but if you sweat out the nanoantennas to the skin as
EMF receivers they could still communicate molecularly. One would then expect
to find protocol-translating nano-transceivers. So if you apply a signal, you
would see chemical emissions. Maybe that's what those filamen
On Monday, August 12, 2024, at 6:58 PM, YKY (Yan King Yin, 甄景贤) wrote:
> Attached is my presentation PPT with some new materials not in the submitted
> paper.
Interesting that you utilize the hypercube. Recently I was thinking of how a
human or AGI can observe all AI by using a math model of 4-d
A limiting parameter of this study may be that any IoBNT THz-band
nano-communication is absorbed by water molecules... if some of these
structures are graphene‑based nanoantennas.
---
5G bio-antennas for WBANs :)
https://mail.ijvtpr.com/index.php/IJVTPR/article/view/102/282
---
At the single cell level:
https://www.nature.com/articles/s41586-024-07643-2
On Tuesday, July 16, 2024, at 2:41 PM, Matt Mahoney wrote:
> On Fri, Jul 12, 2024, 7:51 PM John Rose wrote:
>> Is your program conscious simply as a string without ever being run? And if
>> it is, describe a calculation of its consciousness.
>
> If we define consciousn
On Monday, June 24, 2024, at 1:16 PM, Matt Mahoney wrote:
> By this test, reinforcement learning algorithms are conscious. Consider a
> simple program that outputs a sequence of alternating bits 010101... until it
> receives a signal at time t. After that it outputs all zero bits. In code:
>
> f
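The quoted code is cut off at "f"; a minimal sketch of the program exactly as Matt describes it (the function and variable names are my own, not from the original):

```python
from itertools import islice

def bit_stream(signal_time):
    """Yield alternating bits 0,1,0,1,... until the signal arrives
    at step signal_time; yield all zero bits afterwards."""
    t = 0
    while True:
        yield t % 2 if t < signal_time else 0
        t += 1

# First 10 outputs with the signal arriving at t = 6.
print("".join(str(b) for b in islice(bit_stream(6), 10)))  # 0101010000
```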
On Wednesday, July 10, 2024, at 11:24 PM, Matt Mahoney wrote:
> Quantum isn't magic. It does not speed up neural networks because they
> perform time irreversible operations like writing to memory. The brain is not
> quantum. It's intelligent because it has 600T parameters and 10 petaflops
> thr
On Thursday, June 20, 2024, at 10:36 PM, immortal.discoveries wrote:
> Consciousness can be seen as goal creation/ learning/ changing. Or what you
> might be asking is to have them do long horizon tasks, and solve very tricky
> puzzles. I think all that will happen and needs to happen.
This type
On Thursday, June 20, 2024, at 12:19 AM, Nanograte Knowledge Technologies wrote:
> Can machines really think? Let's redefine thinking as low-level, spoon fed
> reasoning in an unconscious state, then perhaps they could be said to be able
> to.
We know a machine can view its own code and modify i
On Thursday, June 20, 2024, at 12:32 AM, immortal.discoveries wrote:
> I have a test puzzle that shows GPT-4 to be not human. It is simple enough
> any human would know the answer. But it makes GPT-4 rattle on nonsense ex.
> use spoon to tickle the key to come off the walleven though i said t
On Wednesday, June 19, 2024, at 11:36 AM, Matt Mahoney wrote:
> I give up. What are the implications?
Confidence, really, and a firm footing for further speculation in graphs,
networks, search spaces, topologies, algebraic structures, etc., related to
cognitive modelling. Potentially all kinds of
On Monday, June 17, 2024, at 5:07 PM, Mike Archbold wrote:
> It seems like a reasonable start as a basis. I don't see how it relates to
> consciousness really, except that I think they emphasize a real time aspect
> and a flow of time which is good.
If you read the appendix a few times you wil
It helps to know this:
https://www.quantamagazine.org/in-highly-connected-networks-theres-always-a-loop-20240607/
Proof:
https://arxiv.org/abs/2402.06603
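The paper's result concerns long cycles in well-connected networks; as a much weaker classical warm-up (not the theorem in the paper), here is a sketch of why minimum degree ≥ 2 already forces a loop: keep walking, never immediately backtracking, and by pigeonhole some node must repeat.

```python
def has_cycle(adj):
    """In a graph where every node has degree >= 2, walk forward,
    never leaving by the edge we just arrived on. The walk never
    gets stuck, so eventually a node repeats: that's a cycle."""
    start = next(iter(adj))
    prev, cur = None, start
    seen = set()
    while cur not in seen:
        seen.add(cur)
        nxt = next(v for v in adj[cur] if v != prev)
        prev, cur = cur, nxt
    return True  # the loop exits only when a node repeats

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(has_cycle(triangle))  # True
```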
On Tuesday, June 18, 2024, at 10:37 AM, Matt Mahoney wrote:
> The p-zombie barrier is the mental block preventing us from understanding
> that there is no test for something that is defined as having no test for.
> https://en.wikipedia.org/wiki/Philosophical_zombie
>
Perhaps we need to get past
On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf
I know, I know that we could construct a test that breaks the p-zombie barrier.
Using text alone though? Maybe not. Unless we could somehow make our brains
not
On Monday, June 17, 2024, at 2:33 PM, Mike Archbold wrote:
> Now time for the usual goal post movers
A few years ago it would have been a big thing, though I remember chatbots from
the BBS days in the early '90s that were pretty convincing. Some of those bots
were hybrids, part human, part bot, so o
On Sunday, June 16, 2024, at 6:49 PM, Matt Mahoney wrote:
> Any LLM that passes the Turing test is conscious as far as you can tell, as
> long as you assume that humans are conscious too. But this proves that there
> is nothing more to consciousness than text prediction. Good prediction
> requir
On Sunday, June 16, 2024, at 7:09 PM, Matt Mahoney wrote:
> Not everything can be symbolized in words. I can't describe what a person
> looks as well as showing you a picture. I can't describe what a novel
> chemical smells like except to let you smell it. I can't tell you how to ride
> a bicycl
On Friday, June 14, 2024, at 3:43 PM, James Bowery wrote:
>> Etter: "Thing (n., singular): anything that can be distinguished from
>> something else."
I simply use “thing” as anything that can be symbolized, and a unique case is
qualia, where from a first-person experiential viewpoint a qualia ex
> For those of us pursuing consciousness-based AGI this is an interesting paper
> that gets more practical... LLM agent based but still v. interesting:
>
> https://arxiv.org/abs/2403.20097
I meant to say that this is an exceptionally well-written paper just teeming
with insightful research on t
For those of us pursuing consciousness-based AGI this is an interesting paper
that gets more practical... LLM agent based but still v. interesting:
https://arxiv.org/abs/2403.20097
---
Much active research on KANs is getting published lately, for example PINNs and
DeepONets versus PIKANs and DeepOKANs:
https://arxiv.org/abs/2406.02917
On Sunday, June 02, 2024, at 9:04 AM, Sun Tzu InfoDragon wrote:
> The most important metric, obviously, is whether GPT can pass for a doctor on
> the US Medical Licensing Exam by scoring the requisite 60%.
Not sure who I trust less, lawyers, medical doctors, or an AI trying to imitate
them as is
On Sunday, June 02, 2024, at 10:32 AM, Keyvan M. Sadeghi wrote:
> Aka click bait? :) ;)
Jabbed?
https://www.bitchute.com/video/jB9JXD9lvK8m/
On Saturday, June 01, 2024, at 7:03 PM, immortal.discoveries wrote:
> I love how a thread I started ends up with Matt and Jim and others having a
> conversation again lol.
Tame the butterfly effect. Just imagine you switch a couple words around and
the whole world starts conversing.
---
On Wednesday, May 29, 2024, at 3:56 PM, Keyvan M. Sadeghi wrote:
> Judging the future of AGI (not distant, 5 years), with our current premature
> brains is a joke. Worse, it's an unholy/profitable business for Sam Altmans /
> Eric Schmidts / Elon Musks of the world.
I was referring to extracting
On Monday, May 27, 2024, at 6:58 PM, Keyvan M. Sadeghi wrote:
> Good thing is some productive chat happens outside this forum:
>
> https://x.com/ylecun/status/1794998977105981950
Smearing those who are concerned about particular AI risks by pooling them into
a prejudged category labeled “Doomers”
On Wednesday, May 15, 2024, at 12:28 AM, Matt Mahoney wrote:
> The top entry on the large text benchmark, nncp, uses a transformer. It is
> closed source but there is a paper describing the algorithm. It doesn't
> qualify for the Hutter prize because it takes 3 days to compress 1 GB on a
> GPU w
On Tuesday, May 21, 2024, at 10:34 PM, Rob Freeman wrote:
> Unless I've missed something in that presentation. Is there anywhere
in the hour long presentation where they address a decoupling of
category from pattern, and the implications of this for novelty of
structure?
I didn’t watch the video b
On Saturday, May 18, 2024, at 6:53 PM, Matt Mahoney wrote:
> Surely you are aware of the 100% failure rate of symbolic AI over the last 70
> years? It should work in theory, but we have a long history of
> underestimating the cost, lured by the early false success of covering half
> of the cases
On Friday, May 17, 2024, at 10:07 AM, Sun Tzu InfoDragon wrote:
> > the AI just really a regurgitation engine that smooths everything over and
> > appears smart.
>
> No you!
I agree. Humans are like memetic switches, information repeaters, reservoirs.
The intelligence is in the collective, we’re just
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
> Yet another demonstration of how Alan Turing poisoned the future with his
> damnable "test" that places mimicry of humans over truth.
This unintentional result of Turing’s idea is an intentional component of some
religions. The elder w
On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote:
> Does everyone agree this is AGI?
Ya, is the AI just really a regurgitation engine that smooths everything over
and appears smart? Kinda like a p-zombie: poke it, prod it, it sounds generally
intelligent! But… artificial is what everyone i
On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote:
> What should symbolic approach include to entirely replace neural networks
> approach in creating true AI?
Symbology will compress NN monstrosities… right? Or should I say, increasing
efficiency via emergent symbolic activity for complexit
Also, with TikTok, governments don’t want the truth exposed, because populations
tend to get rebellious, so they want “unsafe” information suppressed. E.g. the
Canadian trucker protests…. I sometimes wonder, do Canadians know that Trudeau
is Castro’s biological son? Thanks TikTok, didn’t know that. And t
Mike Gunderloy disconnected. Before the internet he did Factsheet Five which
connected alt undergrounders. It really was an amazing publication that could
be considered a type of pre-internet search engine with zines as websites.
https://en.wikipedia.org/wiki/Factsheet_Five
Then as the internet
On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote:
> All neural networks are trained by some variation of adjusting anything that
> is adjustable in the direction that reduces error. The problem with KAN alone
> is you have a lot fewer parameters to adjust, so you need a lot more neurons
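The quoted recipe ("adjust anything adjustable in the direction that reduces error") is just gradient descent; a minimal one-parameter sketch (the toy target y = 2x and the learning rate are mine, for illustration only):

```python
def train_step(weights, grads, lr):
    """Move every adjustable parameter against its error gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Fit y = 2x with a single weight and squared error.
w = [0.0]
for _ in range(200):
    x, y = 1.0, 2.0
    pred = w[0] * x
    grads = [2 * (pred - y) * x]  # d/dw of (w*x - y)^2
    w = train_step(w, grads, lr=0.1)
print(round(w[0], 3))  # converges to 2.0
```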
On Sunday, May 12, 2024, at 12:13 AM, immortal.discoveries wrote:
> But doesn't it have to run the code to find out no?
The people who wrote the paper did some nice work on this. They laid it out
perhaps intentionally so that doing it again with modified structures is easy
to visualize.
A simpl
On Wednesday, May 08, 2024, at 6:24 PM, Keyvan M. Sadeghi wrote:
>> Perhaps we need to sort out human condition issues that stem from human
>> consciousness?
>
> Exactly what we should do and what needs funding, but shitheads of the world
> be funding wars. And Altman :))
If Jeremy Griffith’s e
Seems that software and more generalized mathematics should be discovering
these new structures. If a system projects candidates into a test domain,
abstracted, and wires them up for testing in a software host, how would you
narrow the search space of potential candidates? You’d need a more gener
On Tuesday, May 07, 2024, at 9:41 PM, Keyvan M. Sadeghi wrote:
> It's because of biology. There, I said it. But it's more nuanced. Brain cells
> are almost identical at birth. The experiences that males and females go
> through in life, however, are societally different. And that's rooted in
> c
On Tuesday, May 07, 2024, at 6:53 PM, Matt Mahoney wrote:
> Kolmogorov proved there is no such thing as an infinitely powerful
compressor. Not even if you have infinite computing power.
Compressing the universe is a unique case, especially being supplied with
infinite computing power. Would the co
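Kolmogorov's point in the quoted message is a counting argument; a minimal sketch (the parameter n is illustrative):

```python
def incompressible_count(n):
    """There are 2**n distinct n-bit strings, but only 2**n - 1
    bit strings shorter than n bits to serve as compressed codes,
    so at least one n-bit string cannot be compressed at all."""
    strings = 2 ** n
    shorter_codes = sum(2 ** k for k in range(n))  # = 2**n - 1
    return strings - shorter_codes

print(incompressible_count(8))  # 1: some 8-bit string is incompressible
```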
On Tuesday, May 07, 2024, at 10:01 AM, Matt Mahoney wrote:
> We don't
know the program that computes the universe because it would require
the entire computing power of the universe to test the program by
running it, about 10^120 or 2^400 steps. But we do have two useful
approximations. If we set t
On Tuesday, May 07, 2024, at 8:04 AM, Quan Tesla wrote:
> To suggest that every hypothetical universe has its own alpha, makes no
> sense, as alpha is all encompassing as it is.
You are exactly correct. There is another special case besides expressing the
intelligence of the universe. And that i
On Friday, May 03, 2024, at 7:10 PM, Matt Mahoney wrote:
> So when we talk about the intelligence of the universe, we can only really
> measure it's computing power, which we generally correlate with prediction
> power as a measure of intelligence.
The universe’s overall prediction power should i
For those genuinely interested in this particular imminent threat, here is a
case study (long video) circulating on how western consciousness is being
programmatically hijacked, presented by a gentleman who has been involved with
and researching it for several decades. He describes this particular “ro
Expressing the intelligence of the universe is a unique case, versus say
expressing the intelligence of an agent like a human mind. A human mind is very
lossy versus the universe, where there is theoretically no loss. If lossy and
lossless were a duality then the universe would be a singularity o
On Thursday, May 02, 2024, at 6:03 AM, YKY (Yan King Yin, 甄景贤) wrote:
> It's not easy to prove new theorems in category theory or categorical
> logic... though one open problem may be the formulation of fuzzy toposes.
Or perhaps neutrosophic topos, Florentin Smarandache has written much
interest
If the fine structure constant was tunable across different hypothetical
universes how would that affect the overall intelligence of each universe? Dive
into that rabbit hole, express and/or algorithmicize the intelligence of a
universe. There are several potential ways to do that, some of which
On Wednesday, April 17, 2024, at 11:17 AM, Alan Grimes wrote:
> It's a stage play. I think Iran is either a puppet regime or living
under blackmail. The entire thing was done to cover up / distract from /
give an excuse for the collapse of the banking system. Simultaneously,
the market riggers r
On Thursday, April 11, 2024, at 1:13 PM, James Bowery wrote:
> Matt's use of Planck units in his example does seem to support your
> suspicion. Moreover, David McGoveran's Ordering Operator Calculus approach
> to the proton/electron mass ratio (based on just the first 3 of the 4 levels
> of the
On Thursday, April 11, 2024, at 10:07 AM, James Bowery wrote:
> What assumption is that?
The assumption that alpha is unitless. Yes they cancel out but the simple
process of cancelling units seems incomplete.
Many of these constants though are re-representations of each other. How many
constant
On Thursday, April 11, 2024, at 12:27 AM, immortal.discoveries wrote:
> Anyway, very interesting thoughts I share here maybe? Hmm back to the
> question, do we need video AI? Well, AGI is a good exact matcher if you get
> me :):), so if it is going to think about how to improve AGI in video forma
On Friday, April 05, 2024, at 6:22 PM, Alan Grimes wrote:
> It's difficult to decide whether this is actually a good investment:
Dell Precisions are very reliable IMO and the cloud is great for scaling up.
You can script up a massive amount of compute in a cloud then turn it off when
done.
Is c
> "Abstract Fundamental physical constants need not be constant, neither
> spatially nor temporally."
If we could remote view somehow across multiple multiverse instances
simultaneously in various non-deterministic states and perceive how the
universe structure varies across different alphas. D
On Friday, April 05, 2024, at 12:21 AM, Alan Grimes wrote:
> So let me tell you about the venture I want to start. I would like to
put together a lab / research venture to sprint to achieve machine
consciousness. I think there is enough tech available these days that I
think there's enough tech
I was just thinking here that the ordering of the consciousness in permutations
of strings is related to their universal pattern frequency, so we would need
algorithms to represent that...
On Wednesday, April 03, 2024, at 2:39 PM, James Bowery wrote:
> * and I realize this is getting pretty far removed from anything relevant to
> practical "AGI" except insofar as the richest man in the world (last I heard)
> was the guy who wants to use it to discover what makes "the simulation" ti
Or perhaps better, describe an algorithm that ranks the consciousness of some
of the integers in [0..N]. There may be a stipulation that the integers be
represented as atomic states all unobserved or all observed once… or allow ≥ 0
observations for all and see what various theories say.
---
On Monday, April 01, 2024, at 3:24 PM, Matt Mahoney wrote:
> Tonini doesn't even give a precise formula for what he calls phi, a measure
> of consciousness, in spite of all the math in his papers. Under reasonable
> interpretations of his hand wavy arguments, it gives absurd results. For
> examp
On Friday, March 29, 2024, at 8:31 AM, Quan Tesla wrote:
> Musical tuning and resonant conspiracy? Cooincidently, I spent some time
> researching that just today. Seems, while tuning of instruments is a matter
> of personal taste (e.g., Verdi tuning) there's no real merit in the pitch of
> a mu
On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote:
> The problem with this explanation is that it says that all systems with
> memory are conscious. A human with 10^9 bits of long term memory is a billion
> times more conscious than a light switch. Is this definition really useful?
A sci
On Saturday, March 30, 2024, at 7:11 PM, Matt Mahoney wrote:
> Prediction measures intelligence. Compression measures prediction.
Can you reorient the concept of time away from prediction? If time is on an
axis and you reorient the time perspective, is there something like energy
complexity? The reaso
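The quoted pair ("prediction measures intelligence, compression measures prediction") follows from Shannon code lengths: a symbol predicted with probability p costs -log2(p) bits. A minimal sketch (the probabilities are illustrative):

```python
import math

def code_length_bits(predicted_probs):
    """Total bits to encode a sequence when the model assigned
    probability p to each symbol actually observed: sum of -log2(p).
    Sharper correct predictions mean shorter codes."""
    return sum(-math.log2(p) for p in predicted_probs)

confident = [0.9] * 10  # a model that gave each observed symbol p = 0.9
uniform = [0.5] * 10    # a coin-flip model over a binary alphabet
print(round(code_length_bits(confident), 2))  # 1.52
print(round(code_length_bits(uniform), 2))    # 10.0
```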
On Saturday, March 30, 2024, at 11:11 AM, Nanograte Knowledge Technologies
wrote:
> Who said anything about modifying the fine structure constant? I used the
> terms: "coded and managed".
>
> I can see there's no serious interest here to take a fresh look at doable
> AGI. Best to then leave i
On Saturday, March 30, 2024, at 7:33 AM, Keyvan M. Sadeghi wrote:
> For the same reason that we, humans, don't kill dogs to save the planet.
Exactly. If people can’t snuff Wuffy to save the planet how could they decide
to kill off a few billion useless eaters? Although central banks do fuel both
On Saturday, March 30, 2024, at 7:22 AM, Keyvan M. Sadeghi wrote:
> With all due respect John, thinking an AI that has digested all human
> knowledge, then goes on to kill us, is fucking delusional 🙈
Why is that delusional? It may be a logical decision for the AI to make an
attempt to save the p
On Thursday, March 28, 2024, at 5:55 PM, Keyvan M. Sadeghi wrote:
> I'm not sure the granularity of feedback mechanism is the problem. I think
> the problem lies in us not knowing if we're looping or contributing to the
> future. This thread is a perfect example of how great minds can loop foreve
On Friday, March 29, 2024, at 8:25 AM, Quan Tesla wrote:
> The fine structure constant, in conjunction with the triple-alpha process
> could be coded and managed via AI. Computational code.
Imagine the government in its profound wisdom declared that the fine structure
constant needed to be modi
On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote:
> Alpha won't directly result in AGI, but it probsbly did result in all
> intelligence on Earth, and would definitely resolve the power issues plaguing
> AGI (and much more), especially as Moore's Law may be stalling, and
> Kurzweil's si
On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote:
> At least with an AI-enabled fine structure constant, we could've tried
> repopulating selectively and perhaps reversed a lot of the damage we caused
> Earth.
The idea of AI-enabling the fine-structure constant is thought provoking but
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote:
> One cannot disparage that which already makes no difference either way.
> John's well, all about John, as can be expected.
What?? LOL listen to you 😊
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote:
> I've completed work and
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote:
> In my 2008 distributed AGI proposal (
> https://mattmahoney.net/agi2.html ) I described a hostile peer to peer
> network where information has negative value and people (and AI)
> compete for attention. My focus was on distributing sto
On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote:
> If yes, what results have you to show for it?
There’s no need to disparage the generous contributions by some highly valued
and intelligent individuals on this list. I’ve obtained invaluable knowledge
and insight from these discussions
On Wednesday, March 27, 2024, at 3:15 PM, Matt Mahoney wrote:
> I predict a return of smallpox and polio because people won't get vaccinated.
> We have already seen it happen with measles.
I think it’s a much higher priority as to what’s with that non-human DNA
integrated into chromosomes 9 and
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
> Flat Earthers, including the majority who secretly know the world is
round, have a more important message. How do you know what is true?
We need to emphasize hard science versus intergenerational pseudo-religious
belief systems that
On Wednesday, March 27, 2024, at 12:37 PM, Matt Mahoney wrote:
> We have a fairly good understanding of biological self replicators and
how to prime the immune systems of humans and farm animals to fight
them. But how to fight misinformation?
Regarding the kill-shots you emphasize reproduction ver
On Monday, March 25, 2024, at 5:18 AM, stefan.reich.maker.of.eye wrote:
> On Saturday, March 23, 2024, at 11:10 PM, Matt Mahoney wrote:
>> Also I have been eating foods containing DNA every day of my life without
>> any bad effects.
>
> Why would that have bad effects?
That used to not be an iss
On Saturday, March 23, 2024, at 6:10 PM, Matt Mahoney wrote:
> But I wonder how we will respond to existential threats in the future, like
> genetically engineered pathogens or self replicating nanotechnology. The
> vaccine was the one bright spot in our mostly bungled response to covid-19.
> We
On Thursday, March 21, 2024, at 1:07 PM, James Bowery wrote:
> Musk has set a trap far worse than censorship.
I wasn’t really talking about Musk OK mutants? Though he had the cojones to do
something big about the censorship and opened up a temporary window basically
by acquiring Twitter.
A ques
On Thursday, March 21, 2024, at 11:41 AM, Keyvan M. Sadeghi wrote:
> Worship stars, not humans 😉
The censorship the last few years was like an eclipse.
---
I don’t like beating this drum, but this has to be studied in relation to
unfriendly AGI, and the WHO pandemic treaty coming up in May has to be stopped.
Here is a passionate interview with Dr. Chris Shoemaker after his presentation
in US Congress, worth watching for a summary of the event and the c
Weakness reminds me of losylossslessyzygy (sic. lossy lossless syzygy)… hmm… I
wonder if it’s related.
Cardinality (Description Length) versus cardinality of its extension (Weakness)…
---
...continuing P# research…
Though I will say that the nickname for P# code used for authoritarian and
utilitarian zombification is Z#, for zombie cybernetic script. And for language
innovation, which seems scarce lately since many new programming languages are
syntactic rehashes, new intelligence
…continuing P# research…
This book by Dr. Michael Nehls, “The Indoctrinated Brain,” offers an interesting
neuroscience explanation and self-defense tips on how the contemporary
zombification of human minds is being implemented. Essentially, he describes a
mental immune system and there is a susta
…continuing
The science changes when conflicts of interest are removed. This is a fact. And
a behavior seems to be that injected individuals go into this state of “Where’s
the evidence?” And when evidence is presented, they can’t acknowledge it or
grok it and go into a type of loop:
“Where’s t
On Tuesday, December 19, 2023, at 9:47 AM, John Rose wrote:
> On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote:
>> That's just a silly conspiracy theory. Do you think polio and smallpox were
>> also attempts to microchip us?
>
> That is a very strong signal
On Tuesday, December 19, 2023, at 8:59 AM, Matt Mahoney wrote:
> That's just a silly conspiracy theory. Do you think polio and smallpox were
> also attempts to microchip us?
That is a very strong signal in the genomic data. What will be interesting is
how this signal changes now that it has been