Not *just* parallel, I mean.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T07361bd0216a4e97-M351ff5e64886734745671433
Delivery options: https://agi.topicbox.com/groups/agi/subscription
Nature moves sequentially. Lion jumps onto rock, roars, jumps off, starts
breeding, goes to sleep. Wakes up later.
Life is not parallel.
There, Elon Musk is attempting to verbalize what's going on behind the scenes
with more general languages that other people besides him are actually
working on.
Why serialize through a natural language for anything? It's a relic. It works,
and will continue to do so, but really it was the human
On Wed, Jul 1, 2020, 3:23 PM John Rose wrote:
> Just do direct consciousness transfer no? Design around dinosaur
> languages...
>
> https://twitter.com/SaraTheWar/status/1258491467040874496
>
Elon Musk is talking about brain-computer interfacing eliminating the need
for speech and writing (I
There are still a lot of COBOL programmers out there, I guess...
Natural languages will become like old database formats with companies trying
to port off of them. Unfortunately they are currently a major bottleneck on
distributed intelligence.
Don't let me discourage you though. Have a go at
Who made you mod
Reading Kant can give you an idea of souls and concepts. If anything has
affected and will continue to affect human behavior throughout history, it is
the concept of the soul.
I'm not for or against transferring or uploading individual souls one way or
another, but an AGI concept modeling system can be tested by
Depending on what you mean by a soul, it either has an effect on human
behavior or it doesn't. If it does, then you can model this effect in
software and create machines with souls. If it doesn't, then souls are
irrelevant to AGI.
But maybe you want to know if I create a robot that looks and acts
One cannot deny that the concept of soul exists. That is the only soul that I
have ever referred to in any related discussion. One may take the position that
concepts don't exist which would be a rather interesting debate.
As far as some real physical soul that is what Minsky thought I was
On Monday, June 29, 2020, at 9:13 AM, Matt Mahoney wrote:
> Surely anyone who believes that AGI is possible wouldn't also believe
> in souls or heaven or ghosts??? Your brain is a computer, right?
Belief in souls and whatnot is fully compatible with the belief that AGI is
possible, if one avoids
I believe human behavior is estimable. You believe human behavior is computable,
IOW that it has a K-complexity. What's hiding in the difference there?
Consciousness. In humans, then, belief is consciousness. Makes some sense, I
guess, but I think consciousness is nondeterministic; you think it's
On Mon, Jun 29, 2020, 5:15 PM John Rose wrote:
> On Monday, June 29, 2020, at 11:13 AM, Matt Mahoney wrote:
>
> Surely anyone who believes that AGI is possible wouldn't also believe in
> souls or heaven or ghosts??? Your brain is a computer, right?
>
>
> Matt, do you believe the K-complexity of
On Monday, June 29, 2020, at 11:13 AM, Matt Mahoney wrote:
> Surely anyone who believes that AGI is possible wouldn't also believe
> in souls or heaven or ghosts??? Your brain is a computer, right?
Matt, do you believe the K-complexity of the Earth exists? I don't think it
does but perhaps you've
As Keith Henson once told me, "If Randell Mills's SunCell turns out to
work, it will be the system programmers messing with us."
On Mon, Jun 29, 2020 at 10:14 AM Matt Mahoney wrote:
> Surely anyone who believes that AGI is possible wouldn't also believe
> in souls or heaven or ghosts??? Your
Surely anyone who believes that AGI is possible wouldn't also believe
in souls or heaven or ghosts??? Your brain is a computer, right?
On Sun, Jun 28, 2020 at 9:46 AM John Rose wrote:
>
> The only interaction I ever had with Minsky was regarding the existence of
> one's soul. My position went
The only interaction I ever had with Minsky was regarding the existence of
one's soul. My position went along the lines of the soul being like an avatar
referenced after a person's passing. Kind of like ghosts. Do ghosts exist?
People have always talked about them. There are mental artifacts
On Wed, Jun 24, 2020 at 11:13 PM ducis wrote:
> Would you mind elaborating on how the respective communities are
> >thinking about lossless compression as a solely or even primarily
> >automatic process
>
The primary cognitive barrier I've encountered among AGI researchers to the
use of lossless
So let's say that I want to forecast social or economic trends, things like
population, life expectancy, GDP, trade, etc. in various countries. Then
James Bowery is saying (I believe) that the smallest program that
reproduces past data will be the best predictor of future data.
One source of such
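As a toy illustration of that claim (my own sketch, not anything Bowery posted), you can use an off-the-shelf compressor as a crude, computable stand-in for "the smallest program", and prefer whichever forecast adds the fewest compressed bits to the observed history:

```python
import zlib

def description_length(data: bytes) -> int:
    # Compressed size is a computable upper bound on Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def best_continuation(history: bytes, candidates: list[bytes]) -> bytes:
    # MDL-style selection: the continuation that is cheapest to describe
    # jointly with the history approximates the best predictor.
    return min(candidates, key=lambda c: description_length(history + c))

history = b"01" * 12  # an alternating past
print(best_continuation(history, [b"0101", b"1100"]))  # -> b'0101'
```

The pattern-continuing candidate wins because appending it costs almost nothing under the compressor's model, while the pattern-breaking one forces new literals into the stream. With a real compressor this is only a rough proxy for the ideal, uncomputable shortest program.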
Would you mind elaborating on how the respective communities are
>thinking about lossless compression as a solely or even primarily automatic
>process
and
>thinking about models of society and the environment as less than Turing
>complete?
My present motive for bringing up the "physics envy" trope, and Minsky's
redemption, is the chasm over which civilization is now passing. The abyss
may be bridged if only people get over the idea that Algorithmic
Information Theory's model selection criterion is "just another information
criterion".
On Wed, Jun 24, 2020 at 6:03 AM John Rose wrote:
> There may have been physics envy then since the technological convergence
> of AIT and QIT had yet to materialize.
>
The "physics envy" trope is an excuse for being unprincipled while
occupying positions of trust, power, privilege,
There may have been physics envy then since the technological convergence of
AIT and QIT had yet to materialize.
Perhaps another way the gods punish people is by giving them everything after
they expire. Minsky had another envy and was perhaps too cozy with
Epstein? *screeching vinyl*
I don't think he was being sarcastic. The neocortex is a "practical
approximation". An ideal specification can shed light on a field by saying
"what" rather than "how". If you don't even know "what" intelligence is,
you can get lost even trying to articulate "how" to go about approximating
it.
More accurately, "...why there can't be an efficient implementation of
general intelligence".
AIXI is a pretty simple theory of intelligence precisely because AIT makes
up half and the (at least as) simple sequential decision theory makes up
the other half.
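For the record, that split is visible in the standard formulation of AIXI (reproduced here from memory, notation following Hutter): the outer expectimax over future actions, observations, and rewards is the sequential-decision-theory half, and the Solomonoff-style prior \(2^{-\ell(q)}\) over programs \(q\) on a universal machine \(U\) is the AIT half.

```latex
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left[ r_k + \cdots + r_m \right]
      \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here \(\ell(q)\) is the length of program \(q\), so shorter world-models dominate the mixture, and the agent acts to maximize expected reward over the horizon \(k \ldots m\) under that mixture.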
On Tue, Jun 23, 2020 at 11:37 PM Matt
Algorithmic information theory is the simple theory that explains why there
isn't a simple theory of intelligence.
On Tue, Jun 23, 2020, 9:34 PM John Rose wrote:
> Was he being sarcastic when he said:
> "Everybody should learn all about that and spend the rest of their lives
> working on it."
Was he being sarcastic when he said:
"Everybody should learn all about that and spend the rest of their lives
working on it." referring to the infinite amount of work part ...and... even
the estimable part.