On Thu, Jun 3, 2021, 9:31 PM John Rose wrote:
> On Thursday, June 03, 2021, at 6:58 PM, immortal.discoveries wrote:
>
> I think you are using the wrong word; try something else maybe?
> Consciousness means a ghost or spirit, something that cannot be made / a
> machine made of particles. At least…
On Thursday, June 03, 2021, at 9:23 PM, immortal.discoveries wrote:
> Eating a consciousness is how you become a bigger consciousness orb.
Well, if you eat brains you become smarter because of the chemicals... there are
nootropics you can purchase as pills or get from eating raw brain. So it's
po…
On Thursday, June 03, 2021, at 6:58 PM, immortal.discoveries wrote:
> I think you are using the wrong word; try something else maybe?
> Consciousness means a ghost or spirit, something that cannot be made / a
> machine made of particles. At least it sounds like you mean that meaning
> (ghost mea…
Eating a consciousness is how you become a bigger consciousness orb.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T06c11d0b87552585-M109de315de66bb1abf381d2f
Delivery options: https://agi.topicbox.com/groups/agi
On Thursday, June 03, 2021, at 7:25 PM, Matt Mahoney wrote:
> We already know how to engineer empathy. We do this all the time. It's called
> user friendliness. The software anticipates what we will want and does the
> right thing. We even invent new symbols to do it, like menus, icons, and
> to…
I know the above seems too low-level, but the third paragraph above really is
how AI works; beneath that are the details it runs on. The two paragraphs above
the third are more implementation details, e.g. how backprop works and what
criteria they use, as in the second paragraph (to gr…
First of all, anyone making an AI can, should, and probably does know how
their code works; they should know the purposes and results of their backprop,
vectors, etc. This is so obvious, but it needs to be said.
Next, they all also should and probably do know why the backprop / word vectors…
[ML News] Anthropic raises $124M, ML execs clueless, collusion rings, ELIZA
source discovered & more:
https://www.youtube.com/watch?v=oxsdp--ULRo
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T73cb0deded02df8
Yeah, like 2+2=4, but you must ask it in the objective sense, and it has no
proof, else you could write it out. Both are something you can build; in the
latter case, just fewer details are known. The third case, full objectiveness,
is nonsense; you cannot build it, it is all just consciousnessallism…
It seems to me that the essential characteristic of the sensation of
consciousness is not empathy but a survival instinct. The authors also
suggest that conscious machines should have human rights or at least a
right to life. This is a very dangerous combination that a lot of people
don't realize.
I think you are using the wrong word; try something else maybe? Consciousness
means a ghost or spirit, something that cannot be made / a machine made of
particles. At least it sounds like you mean that meaning (the ghost meaning).
Mr. Rose, that is impressive. And my work somewhat parallels this.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T06c11d0b87552585-M8d7074e6421a4e7173c8501d
Delivery options: https://agi.topicbox.com/groups/agi
That’s a very thoughtful post, Matt.
In the first paper they’re talking about the emergence of consciousness. They
argue consciousness is important to AI (assume AGI too) for at least one very
important thing: empathy.
This consciousness/communication structure is very close to what I’ve been
w…
No matter what machine you could invent, it will never be able to prove it is
anything more than a machine. We are no different. Why, just because I say
something, do I make myself more worthy? I could say I am god, or indestructible,
or the great one, or the only one; it's just words, just sound w…
On Thu, Jun 3, 2021 at 7:28 AM John Rose wrote:
>
> I think these two recent papers support the idea that consciousness is a
> Universal Communication Protocol. Though it could be thought of more as a
> pre-protocol, hmmm… There are arguments for and against conscious AGI, but it
> still must be e…
On Thursday, June 03, 2021, at 10:32 AM, A.T. Murray wrote:
> Mentifex Theory of Consciousness
Yes Mentifex, I'm sure Chalmers deeply considered your diagram before
publishing his paper. I'll paste it below; let's see if it maintains
formatting. We know the consciousness isn't labeled, since it's…
On Thu, Jun 3, 2021 at 4:27 AM John Rose wrote:
> I think these two recent papers support the idea that consciousness is a
> Universal Communication Protocol. Though it could be thought of more as a
> pre-protocol, hmmm… There are arguments for and against conscious AGI, but it
> still must be explo…
I think these two recent papers support the idea that consciousness is a
Universal Communication Protocol. Though it could be thought of more as a
pre-protocol, hmmm… There are arguments for and against conscious AGI, but it
still must be explored. The first paper describes conscious AI from a
co…