Well, the reason I linked this paper was to show how skillfully these guys
used some available mathematical tools to build up a model of various
consciousnesses. They generalized ZX-calculus from quantum computing, and as
you can see, their approach could be coded up relatively easily.
The tools and techniques in the article are pretty amazing.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Td2287eb2b142cb0b-M964424499321c8b2764979b3
Delivery options:
"Religion" is a model .. or really a set of models. The concepts of "science"
and "religion" are a relatively recent invention...
--
You guys are starting to make me wonder...
This is one of the coolest papers I've ever seen.
You have to be familiar with the concepts involved and with that type of
math... but they're really synthesizing things nicely, or attempting to do so.
--
I think I'm mostly there. How do you know?
Your model sort of self-assembles in your mind and you adapt your behavior
towards the world. Things make more sense, like a Nirvana.
Anyone else experience similar?
Then again, it could be another delusion... a mirage.
Napoleon Hill: “Whatever the mind can conceive and believe, it can achieve.”
I believe it; we create the universe. The $1 quadrillion fare will be chump
change after hyperinflation.
--
Does your design change and evolve the way you think, versus itself? Is it a
separate entity from yourself, not in the distance between sentences but in the
sentences between distances?
--
That's some serious RAM, Stefan. This is my desktop (32 GB):
--
See how nice and clean this is? Refreshing actually:
https://arxiv.org/abs/2007.16138
--
Trying to be serious, mortal.serendipity
--
It's waaay more than just a witty saying!
Have you read Occam or just that one catchphrase? Science isn't based on
catchphrases.
--
Wow, this is really amazing. When you read Occam's writings from the 1300s and
are familiar with the Catholic belief system, you can see the direct
correlations between deity, spirituality (you know, ghosts and souls 'n stuff),
and all of modern mathematics, AIT, logic, science, etc.
I believe human behavior is estimable. You believe human behavior is
computable, IOW that it has a K-complexity. What's hiding in the difference
there? Consciousness.
In humans, then, belief is consciousness. Makes some sense, I guess, but I
think consciousness is nondeterministic; you think it's deterministic.
One cannot deny that the concept of soul exists. That is the only soul that I
have ever referred to in any related discussion. One may take the position that
concepts don't exist which would be a rather interesting debate.
As far as some real physical soul goes, that is what Minsky thought I was
On Wednesday, July 01, 2020, at 9:02 PM, Ben Goertzel wrote:
> Basically what these
> NNs are doing is finding very large volumes of simple/shallow data
> patterns and combining them together. Whereas there are in fact
> deeper and more abstract patterns present in language data, the
> transformer NNs
Consciousness, time symmetry, and wave function collapse... Time is actually
full-duplex but nature drives it.
https://arxiv.org/abs/2006.11365
I'd have to check, though: do the consciousness/qualia people fit time symmetry
into their models? As a homework assignment, please use the results of
On short and accurately simulable, or computable in reality:
https://www.youtube.com/watch?v=MMiKyfd6hA0
--
Elon Musk there is attempting to verbalize what's going on behind the scenes
with more general languages that other people besides himself are actually
working on.
Why serialize through a natural language for anything? It's a relic. It works
and will continue to do so, but really it was the human
This is AGI related.
Judging by the way China handled the accident at the Wuhan lab, I don't know if
I trust China to handle any sort of existential AI risk or grey goo accident.
This topic used to be often discussed on this forum but not so much now...
--
On Tuesday, June 30, 2020, at 8:38 AM, Matt Mahoney wrote:
> You could write a program to print any next number you wanted. But both the
> code and the English language description would be longer. Occam's Razor and
> Solomonoff induction says the answer is 32.
That's an absolute answer but
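Matt's "the answer is 32" point can be sketched concretely: rank candidate generating programs by description length and keep the shortest one consistent with the observed data. This toy Python search is entirely my own illustration (the candidate expressions are hand-picked, not from the thread), but it shows why the longer polynomial that also fits 2, 4, 8, 16 loses to the doubling rule:

```python
# Toy sketch of the Occam's Razor / Solomonoff induction argument:
# among candidate "programs" that reproduce 2, 4, 8, 16, prefer the shortest.
def predict_next(seq):
    candidates = [
        "2**(n+1)",                          # short: doubling rule
        "n + 2",                             # fits only the first point
        "2*n + 2",                           # linear, fails at n=2
        "2**(n+1) + n*(n-1)*(n-2)*(n-3)",    # also fits, but longer; predicts 56
    ]
    fits = [c for c in candidates
            if all(eval(c, {"n": i}) == v for i, v in enumerate(seq))]
    best = min(fits, key=len)                # Occam's Razor: shortest wins
    return eval(best, {"n": len(seq)})

print(predict_next([2, 4, 8, 16]))           # -> 32
```

Both the doubling rule and the degree-4 polynomial reproduce the data; the shorter description is the one Solomonoff induction weights most heavily, hence 32 rather than 56.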
IMO the "perfect" language would be synergistic or integrated with a "perfect"
(or generally efficient protocol). I have a partially worked-up universal
communication protocol (UCP)... "Perfect" would depend on the communicating
agents computational complexity in real environments, not an AIXI
Reading Kant can give you an idea of souls and concepts. If anything has
affected and will affect human behavior throughout history, it is the concept
of soul. I'm not for or against transferring or uploading individual souls one
way or another, but an AGI concept-modeling system can be tested by
Need something akin to an LED, a Qualia Emitting Diode (QED), bidirectional...
like a DIAC, to communicate direct consciousness using a concept language, time
symmetric. An array of quantum dots as a conscious connection coupled to
nervous-system excited states... dots to DIAC to DIAC to dots... or
You're right! Thanks for reposting that Matt quote. I somehow missed it, so I
reread it a few times - he actually pre-answered the question I asked
afterwards, uh, duh.
--
Still, though, you have to have a computational resource topology to process
the theories. We immediately assume a single executable program funneled down
to CPU registers as Turing machine emulation, but as there are more strings and
theories, the resources need to distribute across the computational
Carefully crafted masterpieces of strings also have an aesthetic effect. Like
the one Matt created. You might not know immediately what they mean... but then
it sinks in. It's art too. Some strings you never know what they mean but the
aesthetic effect impresses. For example some crazy
This - "These are languages for computation, for expressing algorithms, not for
mathematical reasoning. They are universal programming languages that are
maximally expressive, maximally concise."
IMO, computation, expressing algorithms, AND mathematical reasoning PLUS Etc..
- An estimably
There still are a lot of COBOL programmers out there, I guess...
Natural languages will become like old database formats, with companies trying
to port off of them. Unfortunately, they are currently a major bottleneck on
distributed intelligence.
Don't let me discourage you, though. Have a go at
On Monday, June 29, 2020, at 11:13 AM, Matt Mahoney wrote:
> Surely anyone who believes that AGI is possible wouldn't also believe
> in souls or heaven or ghosts??? Your brain is a computer, right?
Matt, do you believe the K-complexity of the Earth exists? I don't think it
does, but perhaps you've
Tear down that statue! (Occam, that is.)
Just figured I'd try to join the zeitgeist... though I do like Occam (Ockham?),
being from a Catholic upbringing :)
--
What's better, Q# or Silq?
https://silq.ethz.ch/comparison
--
Related, verified and interesting:
https://phys.org/news/2020-06-quantum-physics.html
--
There may have been physics envy then, since the technological convergence of
AIT and QIT had yet to materialize.
Perhaps another way the gods punish people is by giving them everything after
they expire. Minsky had another envy and was perhaps too cozy with
Epstein? *screeching vinyl*
Seems as though individually and societally sometimes advances in intelligence
are preceded by expansions of consciousness and sometimes vice versa. Almost
like two "waves". For example, when you become aware of a "bigger picture" of
something or when a society does, it's as if there is a
The only interaction I ever had with Minsky was regarding the existence of
one's soul. My position went along the lines of the soul being like an avatar
referenced after a person's passing. Kind of like ghosts. Do ghosts exist?
People have always talked about them. There are mental artifacts
I like simplicity. It's race neutral, gender neutral and non-violent. Occam’s
Razor gives me mental images of an enraged white medieval guy coming at me with
a knife :)
Seriously though, nice paper, Ben. I try to think of ways to criticize it,
reread, and you've got it covered. Though I do try to
This is huge for a potential distributed AGI, the electron-coupling proof of
concept. Think of the possibilities:
https://www.nature.com/articles/s41467-020-16745-0
It's moving so fast now...
--
Was he being sarcastic when he said:
"Everybody should learn all about that and spend the rest of their lives
working on it." referring to the infinite amount of work part ...and... even
the estimable part.
--
Insects could isomorphically compress into much smaller models.
Let's say there's an AGI demoscene. Anyone ever do demoscene? For a 64K AGI I
would say allocate 2-4K to consciousness synthesis, assuming an advanced
quantum computer system, not like the Model T's they're pioneering now. Though I just
Exactly!
--
Seems to me this would be rather important to sort out, no?
But you could say that collapsing the wave function is not needed for AGI, I
guess... but if it is, it would explain some important things about biological
brains... and algorithms that could be snagged and reused.
Related discussion:
The representation of me in your mind. See me smiling and waving? Now picture
my hand slapping you back and forth across your face. That's the wave function
collapsing, it's real.
God is an emergent pattern in the mind. Commies try to eliminate God as a
threat to their authority by killing
Thought this was interesting:
https://www.sciencemag.org/news/2020/05/eye-catching-advances-some-ai-fields-are-not-real
--
You want God? You already have it. It's a real example of an interference
pattern in your mind, love it and be happy :)
--
If consciousness is part of the interagent signaling component of bio brains
and wave function collapse is involved, what does that tell you?
Assuming that natural GI is really multibrained... and a natural environment is
multiconsciousnessed; natural GI evolved in a natural environment.
On Monday, June 08, 2020, at 2:28 PM, Matt Mahoney wrote:
> An observer does not need to be conscious. An observer is any
> measuring device with at least one bit of memory.
How many bits does it take to be conscious? What's the minimum? Not one,
apparently, from your observing viewpoint, so is it
How do you know it's recorded? Something has to observe it.
And consciousness is synthesizing information, recording it into molecules or
whatever, otherwise how would we know?
--
Uhm, I don't know how to break the news to ya... MSIE is all but dead. Might
want to go with Edge or Chrome or something. Just a heads up.
--
Gods may be emergent structures in the complex systems of societies,
essentially state machines that govern individual behavior to some extent,
automating some of our decisions until... And these emergent gods go through
cycles as mortals rebel. Then there's the gods' god... wait, this was
On Monday, June 08, 2020, at 4:02 PM, immortal.discoveries wrote:
> Anyone on this list that has a good understanding of the human brain would
> not mention god or orbs. I guess JR is on the other side =D
What does this mean -> immortal.discoveries? Sounds spiritual...
Matt, are you saying that a rodent or an insect is not conscious? They do show
some elements of consciousness, no?
Trying to understand the minimal system as an agent that qualifies as conscious.
--
Hey Bill, never saw that, thanks; it's getting queued :)
How about those Greek Gods in the sky scenes from Jason and The Argonauts
(1963)? Those were something else, eh? Could be a past reflection of the
future...
--
Demos are 4K or 64K... often they are pure asm, or C/C++ with inline assembler
for specific optimizations. So imagine an AGI demo with inline quantum.
Why simulate reasoning with classical fuzzy, probabilistic, or neutrosophic
logic when you can pump the real flow with qudits? I suppose you could emulate
I can hear Mentifex's tears... MSIE had a long and prosperous life, but its
time has come.
I told you, Mentifex, a few years ago: if you had a *cough cough* protoAGI in a
browser, convert that shit to TypeScript. Now it has widespread adoption and
transpiles to browser-specific JavaScript...
For a
On Thursday, June 11, 2020, at 2:09 PM, immortal.discoveries wrote:
> The ones who actually _understand_ blackbox AI. < me
Listen to you all brimming with attitude! I don't know whether to bow down,
shake your hand or run away :)
--
If gods are emergent structures from complex systems of humankind, religion may
very well appear irrational to individuals.
Do ants understand the anthill? No, they just perform their "ceremonies".
That's just one aspect of it.
--
Miles Davis said that the gods don't punish people by not giving them what they
want. The gods punish people, he said, by giving them what they want and then
not giving them time.
Save us a few clicks and clock ticks. What mistake did von Neumann make in his
version of quantum logic? The
Conscious systems are more efficient than nonconscious ones. Communication is
enhanced. Conscious agents can predict other similarly conscious agents'
behavior and make decisions based on those predictions with confidence.
Take two systems of ants, one natural and the other p-zombie robots. Will the
emergent
You could look at it as horizontal layers of vertical strata. The text
processing layer would be several layers up from the consciousness layer.
Prediction is time asymmetric.
--
"There is also another kind of uncertainty though, which we may call
"evidential error
(EE) uncertainty" - there is the possibility that some of the observations of
rocks being hard, were actually wrong ... that sometimes one perceived a soft
rock to actually be hard, due to some sort of error in
Biden and AGI are two very very disparate subjects. C'mon man! You know the
thing! the AGI thing!
Describe the mathematical morphism between AGI and Biden. Betcha can't do it!
--
Here's the thing I thought about this: using some Ben math combined with my AGI
mental model, I can explain how Biden is there.
Trump v. Biden in people's conscio-intelligent intention to vote is a societal
logic. How the votes are accurately counted and represented is a morphism. The
EE
On Friday, January 22, 2021, at 9:16 AM, immortal.discoveries wrote:
> a stiff old robot,
Hey, I used to be able to sing and dance the Tin Man song :)
The heart is an integral part of human cognition, IMO, being more accurately
modelled holistically using brain, heart, and gut; we often focus
Ben,
This is an amazing paper. I would suggest that anyone researching AGI and the
associated logics, probability, reasoning, and concepts give it a read.
ArXiv needs an AGI category.
I'm still working slowly through the Metagraph Folding and Theory of Simplicity
papers...
Thanks for creating
Good idea. The p-bits idea is clever but does open up a whole issue around
emulation. I noticed you pulled out the Sheldrake reference.
One of the reasons this paper is so good is that it relates to an actual AGI
system in advanced development, the OpenCog framework. And then all the logics
immortal.discoveries,
Please try to group your messages together; each one of these comes across as a
separate email to every list user. Also, I know that you're very enthusiastic,
but remember that this email list is about AGI in general. The minutiae of
coding technicalities related to a
The paper specifies a quantum-like C/UC duality model:
https://arxiv.org/abs/2106.05191
Interesting quote:
"Unconscious-conscious modeling of the brain’s functioning matches well to
the philosophic paradigm of the ontic-epistemic structuring of scientific
theories.
The ontic level is about reality
On Monday, May 10, 2021, at 1:47 PM, Dorian Aur wrote:
> The first step is to shape/reshape the electromagnetic field and its
> interaction within a biological structure, see the general hybrid model
> https://arxiv.org/ftp/arxiv/papers/1411/1411.5224.pdf
That was a very pleasurable read,
On Monday, May 10, 2021, at 11:13 PM, immortal.discoveries wrote:
> And yeah, that's all a brain can do is predict, only patterns exist in data.
Some patterns exist in the data and some exist in the perceiver like those
patterns where one brain perceives one image and another brain perceives an
On Monday, May 10, 2021, at 10:48 PM, Mike Archbold wrote:
> Plainly a lot happens at the cell level with electric field action.
Ions are moving around, e.g. into cells, subject to electric fields.
What happens at a macro brain level or in the middle stages with EMF? Why
is there a presumption that
On Wednesday, May 05, 2021, at 2:15 PM, James Bowery wrote:
> Notepad vs vi? I thought the holy editor war was EMACS vs vi. Do you mean
> notepad++ or do you mean, literally, than POS from Microsoft?
Notepad++ is a great utility if you're in Windows for the non-IDE coding
experience... Do you
On Tuesday, May 11, 2021, at 7:06 PM, Colin Hales wrote:
> Currently I am battling EM noise from the massive TV towers a few km from
> here.
>
> Kindly stop misrepresenting things.You have no clue what I am doing and are
> not qualified to comment.
>
Advocatus Diaboli
Hey! Same longitude as me (Malta, NY), a few lats up from here: 300 klicks due
north :)
--
On Monday, May 31, 2021, at 2:06 PM, James Bowery wrote:
> This probably isn't the place to get into a discussion of the philosophy of
> cryptocurrency, but I tend to trust Szabo's opinion on the relative merits of
> things like proof-of-work vs proof-of-stake.
>
If markets were only
You know you want a piece of that pie immortal... it's there waiting for
someone like you to grab it.
--
On Thursday, May 20, 2021, at 10:17 AM, Matt Mahoney wrote:
> My prediction is OpenAI will be bought out by one of the trillion dollar
> market cap AI companies (Amazon, Alphabet, Apple, or their Chinese
> equivalents) and remain ClosedAI.
“The best way to predict your future is to create it.”
On Wednesday, May 26, 2021, at 11:23 AM, James Bowery wrote:
> In AIXI terms, the difference between lossless and lossy compression is the
> difference between AIT's and SDT's notion of "utility": The former being
> concerned with what "is" and the latter being concerned with what "ought" to
>
On Wednesday, May 26, 2021, at 9:15 PM, immortal.discoveries wrote:
> Sketch this:
Very creative LOL. I don't need to take my psychedelics today :)
--
On Thursday, May 27, 2021, at 8:32 AM, stefan.reich.maker.of.eye wrote:
> LOL... we all fail at being visual AIs, I tell you that much
Yes, in many ways, but still only a tiny fraction of what the mind's eye can
do. I suppose taking DALL-E and adding some of that creative stuff are tasks... but
OVH is in Montreal too; you could get on your bike and ride over there. Talk
all French 'n stuff :)
https://startup.ovhcloud.com/en/
--
If you use a cloud (I use Azure, and OVH is very good), you can spin up many
cores/GPUs for a short period of time at low cost. So dev on your local
machine with a few cores but allow scaling to many cores. While there is a
virtualization layer in the cloud on the cores allocated, newer CPUs have
On Sunday, May 23, 2021, at 11:19 PM, Alan Grimes wrote:
> Website: https://agilaboratory.com/
Mucho good info and links there! If you're into BICA... explicitly or
implicitly.
--
Well, you could design a new crypto where mining is replaced by compression,
such that new benchmark improvements result in larger reward claims. Tie it
into some BOINC type of system where compression jobs are being worked on and
payments feed into the rewards pool, incentivizing developers to
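The reward rule above can be sketched in a few lines. Everything here is my own assumption (the function name, the proportional payout formula, the numbers), not a worked design: pay each claimant a share of the pool equal to their relative improvement over the standing benchmark.

```python
# Hedged sketch of a "proof of compression" payout rule, loosely analogous
# to the Hutter Prize formula: reward is proportional to the relative
# improvement over the previous best compressed size.
def compression_reward(prev_best: int, new_size: int, pool: float) -> float:
    """Return the payout for a claim that compresses the benchmark corpus
    to new_size bytes, given the previous record and the reward pool."""
    if new_size >= prev_best:
        return 0.0                       # no improvement, no reward
    improvement = (prev_best - new_size) / prev_best
    return pool * improvement

# A claim that shaves 5% off a 1 MB record earns 5% of a 10,000-unit pool.
print(compression_reward(1_000_000, 950_000, 10_000.0))  # -> 500.0
```

A real design would also need the verification side (re-running the decompressor against the corpus before paying out), which is where the BOINC-style distributed job system would come in.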
On Saturday, May 29, 2021, at 11:47 PM, James Bowery wrote:
> Several years ago I suggested to Nick Szabo financing something like the
> Hutter Prize with a cryptocurrency in which the proof of work was increased
> compression. He's familiar with algorithmic information (has written on his
>
I think these two recent papers support the idea that consciousness is a
universal communication protocol. Though it could be thought of more as a
pre-protocol, hmmm... There are arguments for and against conscious AGI, but it
still must be explored. The first paper describes conscious AI from a
On Thursday, June 03, 2021, at 11:17 PM, Matt Mahoney wrote:
> No, the thing that nobody can agree on the meaning of because nobody can
> define it. So we argue about completely different things without realizing it.
The physical you is different from the informationally projected you outside of
On Thursday, June 03, 2021, at 10:32 AM, A.T. Murray wrote:
> Mentifex Theory of Consciousness
Yes, Mentifex, I'm sure Chalmers deeply considered your diagram before
publishing his paper. I'll paste it below; let's see if it maintains
formatting. We know the consciousness isn't labeled since
On Thursday, June 03, 2021, at 7:25 PM, Matt Mahoney wrote:
> We already know how to engineer empathy. We do this all the time. It's called
> user friendliness. The software anticipates what we will want and does the
> right thing. We even invent new symbols to do it, like menus, icons, and
>
On Thursday, June 03, 2021, at 6:58 PM, immortal.discoveries wrote:
> I think yous are using the wrong word, try something else maybe?
> Consciousness means ghost or spirit or something that cannot be made/ a
> machine made of particles. At least it sounds like you mean that meaning
> (ghost
On Thursday, June 03, 2021, at 9:23 PM, immortal.discoveries wrote:
> Eating a consciousness is how you become a bigger consciousness orb.
Well, if you eat brains you become smarter because of the chemicals... there
are nootropics you can purchase as pills or get from eating raw brain. So it's
That's a very thoughtful post, Matt.
In the first paper they're talking about the emergence of consciousness. They
argue consciousness is important to AI (assume AGI too) for at least one very
important thing: empathy.
This consciousness/communication structure is very close to what I've been
On Friday, June 25, 2021, at 5:00 PM, immortal.discoveries wrote:
> Well, maybe that's because some of them look like long hair and cat ears...
> And the goal to have variation on a woman, lol.
On Saturday, June 26, 2021, at 5:02 AM, Matt Mahoney wrote:
> Utter nonsense. Quantum theory says you
Infighting in woketopia. Woke versus woke. Who's woker?
NYT went downhill for years and Google went full bore into political censorship
+ Covid censorship. Twitter, Facebook, the same. It's inevitable that
ramifications occur.
--
On Saturday, June 26, 2021, at 6:41 PM, immortal.discoveries wrote:
> it is not hard to see DALL-E is very general purpose and can get so general
> purpose that it starts storing and using pieces of memories to do very rare
> prediction solving.
Splooging text to images and vice versa is not
On Saturday, June 26, 2021, at 9:31 PM, immortal.discoveries wrote:
> DALL-E turns on all qualia hypermeters, it is breakfest and dinner.
It can visually render a map between image and text, and complete sentences
while mapping. There is a vast void of what can be conceptually rendered into
text
I think that Sundar is not cooperating enough pro-actively with the political
powers that be. There is some high-level fusion with the government that is
occurring, and frankly it's frightening. All the Google censorship, yet Google
now is commonplace and required in schools, used in local
On Sunday, June 27, 2021, at 2:06 PM, James Bowery wrote:
> I'd had the idea of a "truth speaker" for several years before that, but it
> hadn't crystalized in my mind as a lossless compression competition until I
> came up with the idea of what I called "The C-Prize" and Matt Mahoney
>
An AGI can learn to decompress data where the compressor is unknown or only
partially known to it. IMO a conscious AGI can more efficiently learn to
decompress data that was compressed by another similarly conscious entity.
Compression and consciousness affect the communication protocol. IOW, compressed
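That "shared compressor" intuition can be illustrated with an ordinary zlib preset dictionary standing in for the shared model (the consciousness framing is the thread's, not the code's; the prior and message strings are made up): a receiver holding the same prior decodes the message, while one without it fails outright.

```python
import zlib

# Sketch: two agents sharing a prior (a zlib preset dictionary) can decode
# each other's compressed messages; an agent without that prior cannot.
shared_prior = b"conscious agents predict other conscious agents"
message = b"conscious agents predict weather too"

# Sender compresses against the shared prior.
sender = zlib.compressobj(zdict=shared_prior)
packet = sender.compress(message) + sender.flush()

# Receiver with the shared prior decodes fine.
receiver = zlib.decompressobj(zdict=shared_prior)
assert receiver.decompress(packet) == message

# Receiver without the shared prior cannot decode the stream at all.
try:
    zlib.decompressobj().decompress(packet)
except zlib.error as e:
    print("no shared prior:", e)
```

The point is only that compression presumes a shared model between sender and receiver; the better the models agree, the shorter the messages can be.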
On Thursday, April 29, 2021, at 10:49 PM, Jim Bromer wrote:
> I was reading your comment that, "Storage is transmission," and I realized,
> based on an idea I had a number of years ago, that if digital data was in
> constant transmission form then the null signal could be used for a value
>
On Wednesday, April 28, 2021, at 11:55 AM, immortal.discoveries wrote:
> What matters here is brains can solve many problems by predicting solutions
> based on context/ problem given
Single brains specialize. Multibrains generalize. That's why they communicate.
Multiparty intelligence on a