Both agents have the same complexity after training but not before.
On Wed, Nov 21, 2018, 1:24 AM ducis wrote:
Forgive me for not understanding the Legg paper completely, but
how would you separate a 1MB "AI agent" executable plus a 1PB file of trained
model (by "sucking data from internet"), from a 1PB executable compiled from
manually built source code?
I don't see how the latter can be classified
On Mon, Nov 19, 2018, 2:49 PM Taylor Stempo Love wrote:
Starting to read this... just started.
> On Sun, Sep 9, 2018, 12:42 PM John Rose wrote:
>> How I'm thinking lately (might be totally wrong,
And your words remind me of polar pulsation in the context of thesis and
antithesis. As a superpattern, the Torus seems truly [content] independent, a
singularity.
From: John Rose
Sent: Friday, 28 September 2018 11:37 AM
To: 'AGI'
Subject: RE: [agi] E=mc^2 Morphism
> -Original Message-
> From: Nanograte Knowledge Technologies via AGI
>
> John. considering eternity, what you described is but a finite event. I dare
> say,
> not only consciousness, but cosmisity.
>
Not until one comes to terms with their true insignificance will they grasp
their
> -Original Message-
> From: Jim Bromer via AGI
>
> John,
> Can you map something like multipartite entanglement to something more
> viable in contemporary computer programming? I mean something simple
> enough that even I (and some of the other guys in this group) could
> understand? Or
John, considering eternity, what you described is but a finite event. I dare
say, not only consciousness, but cosmisity.
Rob
From: Jim Bromer via AGI
Sent: Thursday, 27 September 2018 7:29 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
John,
Can you map something like multipartite entanglement to something more
viable in contemporary computer programming? I mean something simple enough
that even I (and some of the other guys in this group) could understand? Or
is there no possible model that could be composed from contemporary
Gravity and other laws of physics are explained by the anthropic
principle. The simplest explanation by Occam's Razor is that all possible
universes exist and we necessarily observe one where intelligent life is
possible.
On Thu, Sep 27, 2018, 5:32 AM Jim Bromer via AGI
wrote:
Science does not have a good theory about what causes gravity. You can
deny it and say that science has explained gravity. Mass 'causes'
gravity. Would you conclude that gravity does not exist because it is
actually only mass? Or you come up with something like: mass is just
the interruption of
I want to try to have a more positive attitude about other people's
crackpot ideas. It is taking me a few days to understand what people
are saying or even why people are motivated to talk about the
inexplicable experience of consciousness in an AI discussion group.
But I will take some time off
I apologize for making personal attacks. I did not mean my comments to
come out that way. I think there are a number of Native American
tribes who believe that the spirit imbues everything and everywhere.
I do not actually disagree with that. However, that does not mean that
the spirit of a rock
ment of knowing.
> Esoterically, I'd say qualia is that absolute moment when individual
> consciousness-intelligence potential is realized.
>
> Computational models already exist for most of the components and
> functionality I mentioned. As such, I think it has total relevance for the
> step-b
an AGI model.
Thoughts?
Rob
From: Jim Bromer via AGI
Sent: Monday, 24 September 2018 8:02 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
Matt's response - like an adolescent's flip remark - is evidence of
the kind of denial that I mentioned.
Jim Bromer
On Mon, Sep 24, 2018 at 10:49 AM Matt Mahoney via AGI
wrote:
I wrote a simple reinforcement learner which includes the line of code:
printf("Ouch!\n");
So I don't see communication of qualia as a major obstacle to AGI.
Or do you mean something else by qualia?
On Mon, Sep 24, 2018, 5:21 AM John Rose wrote:
John,
There are aspects of the intelligent understanding of the world
(universe of things and ideas) that can be modelled and simulated. I
think this is computable in an AI program except the problem of
complexity would slow the modelling down so much that it would not be
effective enough (at this
"Science explains that your brain runs a program." vs "So does
>> philosophy." Are they both equally correct, therefore philosophy = science?
>>
>> Inter alia, you still did not explain anything much, did you?
>>
>> Rob
>> --
to do the semantic work. Or is it symbolic of another
> problem? I think it's a very brave thing to talk publicly about a subject
> we all agree we seemingly know almost nothing about. Yet, we should at
> least try to do that as well.
>
> Therefore, to explain is to know?
>
From: Matt Mahoney via AGI
Sent: Sunday, 23 September 2018 4:02 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
Science doesn't explain everything. It just tries to. It doesn't explain why
the universe exists. Philosop
we should at least try to do that as well.
>
> Therefore, to explain is to know?
>
> Rob
> --
> *From:* Jim Bromer via AGI
> *Sent:* Saturday, 22 September 2018 6:12 PM
> *To:* AGI
> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
> (Intelligence=math*consciousness^2 ?)
>
The theory that contemporary science can explain everything requires a
fundamental denial of history and a kind of denial about the limits of
contemporary science. That sort of denial of common knowledge is ill suited
for adaptation. It will interfere with your ability to use the scientific
method.
Jim
Qualia is what perceptions feel like, and feelings are computable, and they
condition us to believe there is something magical and mysterious about it?
This is science fiction. So science has already explained Chalmers's Hard
Problem of Consciousness. He just got it wrong? Is that what you are
Let's say that someone says that quantum effects can explain qualia. I
might respond by saying that sort of speculation is not related to
contemporary computer science. Then I get the reply, "What do you
mean?! Computers are used heavily in quantum science." Yes, so
computers are used to make
But you are still missing the definition of qualia. Wikipedia has a
thing on it and I am sure SEP does as well. Because there are reports
of subjective experience we know that we share something of the nature
of experience. Common sense can tell us that computers do not. How do
we know that
of my knowledge base? I'm now setting
the threshold to zero.
Rob
From: Matt Mahoney via AGI
Sent: Saturday, 22 September 2018 2:28 AM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
John answered the question. Qualia = sensory input compressed for
communication. A thermostat has qualia because it compresses its input to
one bit (too hot/too cold) and communicates it to the heater.
On Fri, Sep 21, 2018, 2:00 PM Jim Bromer via AGI
wrote:
> -Original Message-
> From: Matt Mahoney via AGI
>
> What do you think qualia is? How would you know if something was
> experiencing it?
>
You could look at qualia from a multi-systems signaling and a compressionist
standpoint. They're compressed impressed samples of the environment
On Thu, Sep 13, 2018, 12:12 PM John Rose wrote:
> -Original Message-
> From: Matt Mahoney via AGI
>
> We could say that everything is conscious. That has the same meaning as
> nothing is conscious. But all we are doing is avoiding defining something
> that is
> really hard to define. Likewise with free will.
I disagree. Some things
We could say that everything is conscious. That has the same meaning as
nothing is conscious. But all we are doing is avoiding defining something
that is really hard to define. Likewise with free will.
We will know we have properly modeled human minds in AGI if it claims to be
conscious and have
Rob
From: Matt Mahoney via AGI
Sent: Tuesday, 11 September 2018 11:05 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
On Mon, Sep 10, 2018 at 3:45 PM wrote:
> You believe! Showing signs of communication protocol with future AGI :) an
> aspect of CONSCIOUSNESS?
My thermostat believes the house is too hot. It wants to keep the
house cooler, but it feels warm and decides to turn on the air
conditioner.
I
> -Original Message-
> From: Russ Hurlbut via AGI
>
> 1. Where do you lean regarding the measure of intelligence? - more towards
> that of Hutter (the ability to predict the future) or towards
> Wissner-Gross/Freer
> (causal entropy - sort of a proxy for future opportunities; ref
>
> -Original Message-
> From: Matt Mahoney via AGI
>...
Yes, I'm familiar with these algorithmic information theory *specifics*. Very
applicable when implemented in isolated systems...
> No, it (and Legg's generalizations) implies that a lot of software and
> hardware
> is required and
John -
Thanks for a refreshingly new discussion for this forum. Just as you
describe, it is quite interesting to see how seemingly disparate tracks
can be combined and guided onto the same course. Accordingly, your
presentation has brought to mind similar notions that appear to fit
somewhere
On Mon, Sep 10, 2018 at 8:10 AM wrote:
> Why is there no single general compression algorithm? Same reason as general
> intelligence, thus, multi-agent, thus inter agent communication, thus
> protocol, and thus consciousness.
Legg proved that there are no simple, general theories of
Sent: Monday, 10 September 2018 2:44 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
Nanograte,
> In particular, the notion of a universal communication protocol. To me it
> seems to have a definite ring of truth to it.
It does, doesn't it?!
For years I've worked with signaling and protocols lending some time to
imagining a universal protocol. And for years I've thought about
Matt,
Zoom out. Think multi-agent not single agent. Multi-agent internally and
externally. Evaluate this proposition not from first-person narrative and it
begins to make sense.
Why is there no single general compression algorithm? Same reason as general
intelligence, thus, multi-agent, thus
to it. Please carry
on as you are doing now...
From: johnr...@polyplexic.com
Sent: Monday, 10 September 2018 12:56 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings...
(Intelligence=math*consciousness^2 ?)
Matt:
> AGI is the very hard engineering problem of mak
I'll take jargon salad over buzzword soup any day.
On Sun, Sep 9, 2018 at 3:26 PM Matt Mahoney via AGI
wrote:
> Recipe for jargon salad.
>
> Two cups of computer science.
> One cup mathematics.
> One cup electrical engineering.
> One cup neuroscience.
> One half cup information theory.
> Four
AGI is the very hard engineering problem of making machines do all the
things that people can do. Consciousness is not the magic ingredient that
makes the problem easy.
On Sep 9, 2018 10:08 PM, wrote:
Basically, if you look at all of life (Earth only for this example) over the
past 4.5 billion years, including all the consciousness and all that “presumed”
entanglement and say that's the first general intelligence (GI) the algebraic
structural dynamics on the computational edge... is
Recipe for jargon salad.
Two cups of computer science.
One cup mathematics.
One cup electrical engineering.
One cup neuroscience.
One half cup information theory.
Four tablespoons quantum mechanics.
Two teaspoons computational biology.
A dash of philosophy.
Mix all ingredients in a large bowl.
Consciousness computation (GI) is on the negentropic massive multi-partite
entanglement frontier of a spontaneous morphismic awareness complexity -
IOW on the edge of life’s consciousness based on manifestation of
inter/intra-agent entanglement (in DNA perhaps?).
Whoa! I'm roiling dude. I mean,
How I'm thinking lately (might be totally wrong, totally obvious, and/or
totally annoying to some but it’s interesting):
Consciousness Oriented Intelligence (COI)
Consciousness is Universal Communications Protocol (UCP)
Intelligence is consciousness manifestation
AI is a computational