Re: Re: Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-11-21 Thread Matt Mahoney via AGI
Both agents have the same complexity after training but not before.
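To make "complexity" concrete here: Kolmogorov complexity itself is uncomputable, but compressed size gives a crude, computable upper bound, which is the usual proxy in this literature. A minimal sketch of mine (not Legg's or Matt's actual measure; build with -lz): before training, the first agent's description is just the short program; after it has sucked down the data, its description, like the hand-built executable's, must include all those bytes.

  #include <stdio.h>
  #include <stdlib.h>
  #include <zlib.h>

  /* Crude, computable stand-in for Kolmogorov complexity: deflated size. */
  static unsigned long approx_K(const unsigned char *p, unsigned long n) {
      unsigned long out = compressBound(n);
      unsigned char *buf = malloc(out);
      compress(buf, &out, p, n);   /* zlib one-shot; out is updated in place */
      free(buf);
      return out;
  }

  int main(void) {
      enum { N = 1 << 16 };
      static unsigned char generated[N], handmade[N];
      unsigned int i;
      for (i = 0; i < N; i++) {
          generated[i] = (unsigned char)(i * 31 % 251);  /* short rule, big output */
          handmade[i]  = (unsigned char)rand();          /* no short generating rule */
      }
      printf("K(generated) <= %lu bytes\n", approx_K(generated, N));
      printf("K(handmade)  <= %lu bytes\n", approx_K(handmade, N));
      return 0;
  }

The same byte count compresses to almost nothing when a short rule produced it, and hardly at all when it did not; that is the sense in which the 1MB agent plus sucked-down data and the 1PB hand-built binary can differ in complexity before training and agree after.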

On Wed, Nov 21, 2018, 1:24 AM ducis wrote:
>
> Forgive me for not understanding the Legg paper completely, but
> how would you separate a 1MB "AI agent" executable plus a 1PB file of
> trained model (by "sucking data from internet"), from a 1PB executable
> compiled from manually built source code?
> I don't see how the latter can be classified as complex while the former
> is classified as simple.
>
>
> --
> -
>
> On 2018-11-20 01:15:10, "Matt Mahoney via AGI" wrote:
>
>
>
> On Mon, Nov 19, 2018, 11:12 AM ducis wrote:
>>
>> Hi Matt,
> >> Doesn't the "predictor" actually contain trained models as well?
>>
>
> Yes. That is the normal way to write a predictor, like in a data
> compressor. It collects statistics on past input to predict future input by
> looking up the current context or guessing what pattern or program
> generated the data.
>
> It doesn't change the fact that universal predictors don't exist. You can
> always simulate any predictor and output the opposite bit.
>
> -- Matt Mahoney, mattmahone...@gmail.com
>
>
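The quoted "simulate any predictor and output the opposite bit" is an ordinary diagonalization, and fits in a few lines. A sketch of mine, with a stand-in predictor (any deterministic predict() loses the same way):

  #include <stdio.h>

  /* Stand-in for "any predictor": guess the majority bit seen so far. */
  static int predict(const int *past, int n) {
      int ones = 0, i;
      for (i = 0; i < n; i++) ones += past[i];
      return 2 * ones > n;
  }

  int main(void) {
      int seq[32], correct = 0, n;
      for (n = 0; n < 32; n++) {
          int guess = predict(seq, n);  /* simulate the predictor... */
          seq[n] = !guess;              /* ...then emit the opposite bit */
          correct += (guess == seq[n]); /* never true, by construction */
      }
      printf("predictor: %d/32 correct\n", correct);  /* prints 0/32 */
      return 0;
  }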

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M4f73704f2f705b359d4c7de2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: Re: Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-11-20 Thread ducis



Forgive me for not understanding the Legg paper completely, but
how would you separate a 1MB "AI agent" executable plus a 1PB file of trained 
model (by "sucking data from internet"), from a 1PB executable compiled from 
manually built source code?
I don't see how the latter can be classified as complex while the former is 
classified as simple.




--
-


On 2018-11-20 01:15:10, "Matt Mahoney via AGI" wrote:




On Mon, Nov 19, 2018, 11:12 AM ducis wrote:

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Md2b8109f555d8c3f1cc8dcc3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-11-19 Thread Taylor Stempo via AGI
Black ops the first time since we have been able to ohmm-- thought you
were mathoneg#/

On Mon, Nov 19, 2018, 2:49 PM Taylor Stempo wrote:

> Love starting to read this... just started.
>
> Flamxotr
>
> On Sun, Sep 9, 2018, 12:42 PM John Rose wrote:
>> How I'm thinking lately (might be totally wrong, totally obvious, and/or
>> totally annoying to some but it’s interesting):
>> 
>> Consciousness Oriented Intelligence (COI)
>> 
>> Consciousness is Universal Communications Protocol (UCP)
>> 
>> Intelligence is consciousness manifestation
>> 
>> AI is a computational consciousness
>> 
>> GI is consciousness computation
>> 
>> GI requires non-homogeneous multi-agent structure (commonly assumed),
>> with intra and inter agent communication in consciousness.
>> 
>> Consciousness computation (GI) is on the negentropic massive
>> multi-partite entanglement frontier of a spontaneous morphismic awareness
>> complexity - IOW on the edge of life’s consciousness based on manifestation
>> of inter/intra-agent entanglement (in DNA perhaps?).
>> 
>> IOW the communication protocol UCP (consciousness) is simultaneously the
>> computed, the computer, and the cross-categorical interlocutor
>> (cohomological sheaver weaver?).
>> 
>> So for AGI it's necessary to artificially create consciousness in software.
>> 
>> How's that done?  Using mathematical shortcuts from the knowledge gained
>> from the collective human general intelligence and replacing the universal
>> communications protocol of consciousness mathematically and computationally.
>> 
>> And there is a trend in AGI R&D that aims for this, but under other names
>> and descriptions, since the term consciousness has a lot of baggage; but the
>> concept is morphismic (and perhaps Sheldrakedly morphic).
>> 
>> My sense, though, says that we are going to start seeing (already, maybe?)
>> evidence of massive and pervasive biological quantum entanglement, for
>> example in DNA. And the entanglement might go back eons, and the whole of
>> life's collective consciousness could be based on that...
>> 
>> John
>> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M5e3b38f86420ec53feceaa38
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread Nanograte Knowledge Technologies via AGI
And your words remind me of polar pulsation in the context of thesis and 
antithesis. As a superpattern, the Torus seems truly [content] independent, a 
singularity.

From: John Rose 
Sent: Friday, 28 September 2018 11:37 AM
To: 'AGI'
Subject: RE: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

> -Original Message-
> From: Nanograte Knowledge Technologies via AGI 
>
> John, considering eternity, what you described is but a finite event. I dare 
> say,
> not only consciousness, but cosmisity.
>

Not until one comes to terms with their true insignificance will they grasp 
their true significance.

Wait, doesn't insignificance just equal anti-significance?

No, it depends which one you are thinking about at the moment or which one you 
are temporally conscious of... when using qualia qubits.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mdb46156d2a09c86f80cc4eb4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message-
> From: Nanograte Knowledge Technologies via AGI 
> 
> John, considering eternity, what you described is but a finite event. I dare 
> say,
> not only consciousness, but cosmisity.
> 

Not until one comes to terms with their true insignificance will they grasp 
their true significance.

Wait, doesn't insignificance just equal anti-significance?

No, it depends which one you are thinking about at the moment or which one you 
are temporally conscious of... when using qualia qubits.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mf66302d93cc71626da10805d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message-
> From: Jim Bromer via AGI 
> 
> John,
> Can you map something like multipartite entanglement to something more
> viable in contemporary computer programming? I mean something simple
> enough that even I (and some of the other guys in this group) could
> understand? Or is there no possible model that could be composed from
> contemporary computer programming concepts?
> Jim Bromer
> 

Yes, what's the difference between knowing and knowing versus knowing and 
telling? Or, what are the computational distances, information distances, 
algebraic distances, etc.?

Entanglement in biological separation mimicry can be virtualized into 
communicational group modeling. Contemporary computers are unable to do quantum 
entanglement, but they can excel in natural language communication complexity 
and bandwidth efficiency. And with contemporary computers there is physics and 
there are physics. Virtuality helps in overcoming physical and separation 
issues.

John
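Since Jim asked for something expressible in contemporary programming: the closest honest classical stand-in for "entangled" agents is shared hidden state, where two agents prepared together give anti-correlated readings whenever queried. A toy sketch (names and structure are my own invention; this models correlation bookkeeping only, not quantum entanglement, which is the virtualization point above):

  #include <stdio.h>
  #include <stdlib.h>

  /* Two agents prepared from one shared hidden bit. Observing either
     yields opposite answers, singlet-style, by classical bookkeeping. */
  typedef struct { int hidden; } Agent;

  static void prepare_pair(Agent *a, Agent *b) {
      int h = rand() & 1;   /* shared history plays the hidden variable */
      a->hidden = h;
      b->hidden = !h;
  }

  static int observe(const Agent *ag) { return ag->hidden; }

  int main(void) {
      Agent a, b;
      int t;
      for (t = 0; t < 5; t++) {
          prepare_pair(&a, &b);
          printf("a=%d b=%d (always opposite)\n", observe(&a), observe(&b));
      }
      return 0;
  }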







--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mc1e739559676dc5e0a7dea27
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-27 Thread Nanograte Knowledge Technologies via AGI
John, considering eternity, what you described is but a finite event. I dare 
say, not only consciousness, but cosmisity.

Rob

From: Jim Bromer via AGI 
Sent: Thursday, 27 September 2018 7:29 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

John,
Can you map something like multipartite entanglement to something more viable 
in contemporary computer programming? I mean something simple enough that even 
I (and some of the other guys in this group) could understand? Or is there no 
possible model that could be composed from contemporary computer programming 
concepts?
Jim Bromer


On Sun, Sep 9, 2018 at 12:41 PM John Rose <johnr...@polyplexic.com> wrote:
How I'm thinking lately (might be totally wrong, totally obvious, and/or 
totally annoying to some but it’s interesting):

Consciousness Oriented Intelligence (COI)

Consciousness is Universal Communications Protocol (UCP)

Intelligence is consciousness manifestation

AI is a computational consciousness

GI is consciousness computation

GI requires non-homogeneous multi-agent structure (commonly assumed), with 
intra and inter agent communication in consciousness.

Consciousness computation (GI) is on the negentropic massive multi-partite 
entanglement frontier of a spontaneous morphismic awareness complexity - IOW on 
the edge of life’s consciousness based on manifestation of inter/intra-agent 
entanglement (in DNA perhaps?).

IOW the communication protocol UCP (consciousness) is simultaneously the 
computed, the computer, and the cross-categorical interlocutor (cohomological 
sheaver weaver?).

So for AGI it's necessary to artificially create consciousness in software.

How's that done?  Using mathematical shortcuts from the knowledge gained from 
the collective human general intelligence and replacing the universal 
communications protocol of consciousness mathematically and computationally.

And there is a trend in AGI R&D that aims for this, but under other names and 
descriptions, since the term consciousness has a lot of baggage; but the concept 
is morphismic (and perhaps Sheldrakedly morphic).

My sense, though, says that we are going to start seeing (already, maybe?) 
evidence of massive and pervasive biological quantum entanglement, for example 
in DNA. And the entanglement might go back eons, and the whole of life's 
collective consciousness could be based on that...

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mea85824201a7960aa9ec90d4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-27 Thread Jim Bromer via AGI
John,
Can you map something like multipartite entanglement to something more
viable in contemporary computer programming? I mean something simple enough
that even I (and some of the other guys in this group) could understand? Or
is there no possible model that could be composed from contemporary
computer programming concepts?
Jim Bromer


On Sun, Sep 9, 2018 at 12:41 PM John Rose  wrote:

> How I'm thinking lately (might be totally wrong, totally obvious, and/or
> totally annoying to some but it’s interesting):
> 
> Consciousness Oriented Intelligence (COI)
> 
> Consciousness is Universal Communications Protocol (UCP)
> 
> Intelligence is consciousness manifestation
> 
> AI is a computational consciousness
> 
> GI is consciousness computation
> 
> GI requires non-homogeneous multi-agent structure (commonly assumed), with
> intra and inter agent communication in consciousness.
> 
> Consciousness computation (GI) is on the negentropic massive multi-partite
> entanglement frontier of a spontaneous morphismic awareness complexity -
> IOW on the edge of life’s consciousness based on manifestation of
> inter/intra-agent entanglement (in DNA perhaps?).
> 
> IOW the communication protocol UCP (consciousness) is simultaneously the
> computed, the computer, and the cross-categorical interlocutor
> (cohomological sheaver weaver?).
> 
> So for AGI it's necessary to artificially create consciousness in software.
> 
> How's that done?  Using mathematical shortcuts from the knowledge gained
> from the collective human general intelligence and replacing the universal
> communications protocol of consciousness mathematically and computationally.
> 
> And there is a trend in AGI R&D that aims for this, but under other names and
> descriptions, since the term consciousness has a lot of baggage; but the
> concept is morphismic (and perhaps Sheldrakedly morphic).
> 
> My sense, though, says that we are going to start seeing (already, maybe?)
> evidence of massive and pervasive biological quantum entanglement, for
> example in DNA. And the entanglement might go back eons, and the whole of
> life's collective consciousness could be based on that...
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M0fb5a00c8104ff8a1408ad4d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-27 Thread Matt Mahoney via AGI
Gravity and other laws of physics are explained by the anthropic
principle. The simplest explanation by Occam's Razor is that all possible
universes exist and we necessarily observe one where intelligent life is
possible.

On Thu, Sep 27, 2018, 5:32 AM Jim Bromer via AGI 
wrote:

> Science does not have a good theory about what causes gravity. You can
> deny it and say that science has explained gravity. Mass 'causes'
> gravity. Would you conclude that gravity does not exist because it is
> actually only mass? Or you could come up with something like: mass is just
> the interruption of space and it is therefore only the curvature of
> negative space, or something like that. Answers like this will allow
> you to avoid other difficult questions, like whether gravity exhibits
> wave-particle duality, but that is exactly what is wrong with treating
> working theories as if they explained everything.
> Jim Bromer
>
> On Thu, Sep 27, 2018 at 6:14 AM John Rose  wrote:
> > > -Original Message-
> > > From: Jim Bromer via AGI 
> > >
> > > I want to try to have a more positive attitude about other people's
> crackpot
> > > ideas. It is taking me a few days to understand what people are saying
> or even
> > > why people are motivated to talk about the inexplicable experience of
> > > consciousness in an AI discussion group.
> >
> >
> > One man's crackpot idea is another man's unified field theory. Why is
> that? Because some things are puzzles, and one agent cannot understand all
> the pieces; you need multiple agents to sew them together. That process is
> AI with consciousness. Also that's why one person is not a general
> intelligence.
> >
> > John
> >

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mbfc254093e98f0b50b591b9d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-27 Thread Jim Bromer via AGI
Science does not have a good theory about what causes gravity. You can
deny it and say that science has explained gravity. Mass 'causes'
gravity. Would you conclude that gravity does not exist because it is
actually only mass? Or you could come up with something like: mass is just
the interruption of space and it is therefore only the curvature of
negative space, or something like that. Answers like this will allow
you to avoid other difficult questions, like whether gravity exhibits
wave-particle duality, but that is exactly what is wrong with treating
working theories as if they explained everything.
Jim Bromer

On Thu, Sep 27, 2018 at 6:14 AM John Rose  wrote:
> > -Original Message-
> > From: Jim Bromer via AGI 
> >
> > I want to try to have a more positive attitude about other people's crackpot
> > ideas. It is taking me a few days to understand what people are saying or 
> > even
> > why people are motivated to talk about the inexplicable experience of
> > consciousness in an AI discussion group.
> 
> 
> One man's crackpot idea is another man's unified field theory. Why is that? 
> Because some things are puzzles, and one agent cannot understand all the 
> pieces; you need multiple agents to sew them together. That process is AI with 
> consciousness. Also that's why one person is not a general intelligence.
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M6007988c86affea3e807dd56
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-25 Thread Jim Bromer via AGI
I want to try to have a more positive attitude about other people's
crackpot ideas. It is taking me a few days to understand what people
are saying or even why people are motivated to talk about the
inexplicable experience of consciousness in an AI discussion group.
But I will take some time off my hectic schedule of inconsequential
meandering of ineptitude and procrastination to try to write down a
few good ideas that I got from this and the other discussion going on
in this group. By some weird, inexplicable coincidence it suddenly
came to me that my own ideas seem to deserve being put in the lead. I
said that I do not think that the mystery of conscious experience can
be explained by current day material science but I think science may
be better able to explain it some day. Matt said that we have to use
philosophy to discuss it and Nanograte said that it takes courage to
talk about theories about stuff like this. John said that perhaps
there is a kind of emergence of consciousness (that would not occur in
small scale systems).
I really have a lot of work that I should be doing but I just wanted
to say one more thing. I feel that the discussion about this might be
just as well served by imagining some other kind of material
interrelations than by importing conjectures about other
mysteries in physics and trying to smush them into this question.
Jim Bromer
On Tue, Sep 25, 2018 at 7:02 AM John Rose  wrote:
> > -Original Message-
> > From: Jim Bromer via AGI 
> >
> >
> > But I still disagree with what you are saying. An artificial agent will not 
> > be able
> > to experience qualia because it will lack that mysterious aspect of 
> > intelligence
> > that allows us to sense certain things in the way we do. So it would be 
> > able to
> > distinguish blue from red (as long as there was some kind of light to see 
> > the
> > colors of
> > course) but it would not experience haptic sensory input in a way that was
> > fundamentally different from the way that it experiences red and blue. For a
> > computer it is just data. It may be presented in different formats but 
> > there is
> > no qualia (as I understand the concept.) It may be programmed to simulate
> > the communication as if it were dealing with qualia, but it would be pure
> > simulation.
> >
> >
> > So qualia stands in contrast to propositional attitudes on the beliefs about
> > experience but it also, as I understand it, stands in contrast to 'data' 
> > that may
> > be transmitted by sensors (or the product of the computational analysis of
> > that data).
> >
> 
> Jim,
> 
> The more complex the qualia the more difficult to transmit the full thing? So 
> a qualia from the vantage point of another agent needs a label otherwise 
> known as a symbol and/or compression. Especially if the qualia is 
> uncomputable from another agent's perspective. The phrase "my sensation of 
> the color blue" communicated to you is a label that you personally decompress 
> and understand somewhat since you are another human and feel similar but not 
> exact. We can put labels on things uncomputable for transmission. Data as 
> well as instructions can be transmitted.
> 
> So what does a transmitted symbol of a human qualia represent? It represents 
> something that is important for the system as a whole. Why? Because each of 
> us is an individual sensor transmitting. How one feels is an individually 
> processed sensory impression, compressed for transmission into the wider 
> intelligence bandwidth of the agent group.
> 
> Can a computer experience qualia exactly as a human? TBD. Can it experience 
> qualia with more complexity? Probably IMO at some point. Are less than human 
> qualia, qualia? That's nomenclature really... The emphasis related to 
> intelligence I'm saying is not on the individual experience but the system.
> 
> John
> 
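One concrete reading of "we can put labels on things uncomputable for transmission": the private state never crosses the wire, only a short shared symbol does, and the receiver expands that symbol into its own similar-but-not-identical state. A toy sketch; every name in it is invented for illustration:

  #include <stdio.h>

  /* Private, high-dimensional internal state standing in for a quale. */
  typedef struct { float v[256]; } Quale;

  /* The shared vocabulary: the only thing that actually gets sent. */
  enum Label { LABEL_BLUE, LABEL_RED };

  /* Sender: lossy "compression" of private state down to one symbol. */
  static enum Label label_of(const Quale *q) {
      return q->v[0] > 0.5f ? LABEL_BLUE : LABEL_RED;
  }

  /* Receiver: "decompression" into its OWN private state -- similar,
     never exact, as described above. */
  static Quale expand(enum Label l) {
      Quale q = {{0}};
      q.v[0] = (l == LABEL_BLUE) ? 0.9f : 0.1f;
      return q;
  }

  int main(void) {
      Quale mine = {{0.8f}};              /* "my sensation of the color blue" */
      enum Label wire = label_of(&mine);  /* one symbol sent; the rest is lost */
      Quale yours = expand(wire);
      printf("sent %d, receiver rebuilds v[0]=%.1f\n", (int)wire, yours.v[0]);
      return 0;
  }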

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M7f7d029634a5b448b2717e73
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-25 Thread Jim Bromer via AGI
I apologize for making personal attacks. I did not mean my comments to
come out that way. I think there are a number of native American
tribes who believe that the spirit imbues everything and everywhere.
I do not actually disagree with that. However, that does not mean that
the spirit of a rock must be exactly the same as mine.
Jim Bromer

On Tue, Sep 25, 2018 at 4:59 AM John Rose  wrote:
> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > I wrote a simple reinforcement learner which includes the line of code:
> >
> > printf("Ouch!\n");
> >
> > So I don't see communication of qualia as a major obstacle to AGI.
> >
> > Or do you mean something else by qualia?
> >
> 
> 
> That's it! You're right.
> 
> How about:
> printf("Check Engine!\n");
> 
> then you pay the mechanic his $100 USD and
> printf("code P0128\n");
> 
> bad thermostat.
> 
> It's a qualia! Very simple.
> 
> Point being, better communication of inter-agent consciousness = more 
> efficiency, also known as intelligence.
> 
> 
> 
> printf("That last fill of gas you got me Matt had lower octane than I'm rated 
> for. Stop being such a cheap bastard and get me 87 octane.\n");
> ...
> 
> weeks later:
> 
> Matt : "Let me in the car HAL. I need to get to the AGI meeting."
> 
> printf("I'm sorry Matt, I'm afraid I can't do that. I warned you to get me 
> better octane.\n");
> 
> Not listening to your car's qualia = obstacle to AGI.
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M794b14123b58afeb226e5241
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Jim Bromer via AGI
> In so doing, it may activate the energy flow within the 
> torus-like brain, and utilizing the stimulus as dynamic trigger, flow out an 
> informationally-rich pulse to be made observable as thought, action, feeling, 
> hunch, gut feel, or whatever form of predisposed energy envelope.
>
> What I have tried to describe thus far is but a single step of a 
> chain-reaction, the DNA also acting as a dominator/recorder (multiple roles), 
> which when it flows towards the opposite polarity passes "through" a discrete 
> point on a stochastic scale of degrees of consciousnessintelligence. This 
> discrete point may be as brief as a flash of light. We may call this moment 
> optimal consciousness, or knowing.
>
> As such, knowing is an outcome of consciousnessintelligence, and so is the 
> function of explaining. Is explaining, knowing?
>
> Given the meta qualia (the structure as discussed), all aspects of brain 
> functioning are 100% enabled to make this moment happen. One should possibly 
> visualize this structure-in-operation as a flower that is one with itself, 
> interconnected with its environment and purposed to always be on standby for 
> knowing.
>
> Qualia then to me would be that "flash of light", as the moment of knowing. 
> Esoterically, I'd say qualia is that absolute moment when individual, 
> consciousnessintelligence potential is realized.
>
> Computational models already exist for most of the components and 
> functionality I mentioned. As such, I think it has total relevance for the 
> step-by-step development of an AGI model.
>
> Thoughts?
>
> Rob
>
>
>
>
> 
> From: Jim Bromer via AGI 
> Sent: Monday, 24 September 2018 8:02 PM
> To: AGI
> Subject: Re: [agi] E=mc^2 Morphism Musings... 
> (Intelligence=math*consciousness^2 ?)
>
> Matt's response - like an adolescent's flip remark - is evidence of
> the kind of denial that I mentioned.
> Jim Bromer
>
> On Mon, Sep 24, 2018 at 10:49 AM Matt Mahoney via AGI
>  wrote:
> >
> > I wrote a simple reinforcement learner which includes the line of code:
> >
> > printf("Ouch!\n");
> >
> > So I don't see communication of qualia as a major obstacle to AGI.
> >
> > Or do you mean something else by qualia?
> >
> >
> > On Mon, Sep 24, 2018, 5:21 AM John Rose  wrote:
> >> > -Original Message-
> >> > From: Matt Mahoney via AGI 
> >> >
> >> > I was applying John's definition of qualia, not agreeing with it. My 
> >> > definition is
> >> > qualia is what perception feels like. Perception and feelings are both
> >> > computable. But the feelings condition you to believing there is 
> >> > something
> >> > magical and mysterious about it.
> >> >
> >>
> >> And what I'm saying is that the communication of qualia is important for 
> >> general intelligence in a system of agents. And how do agents interpret 
> >> the signals, process and recommunicate them.
> >>
> >> But without fully understanding qualia since they're intimately intrinsic 
> >> to agent experience we can still explore their properties by answering 
> >> questions such as: What is an expression of the information distance 
> >> between qualia of differing agents with same stimuli? How do qualia map to 
> >> modeled environment? How do they change over time in a system of learning 
> >> agents? What is the compressional loss into communication? And how do 
> >> multi-agent models change over time from communicated and decompressed 
> >> qualia.
> >>
> >> And what is the topology of qualia variance within an agent related to the 
> >> complexity classes of environmental strategy?
> >>
> >> And move on to questions such as can there be enhancements to agent 
> >> language to accelerate learning in a simulated system? And enhancements to 
> >> agent structure?
> >>
> >> John
> >>
> >

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Me4b72f6eeb4695a3a9e1a9a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Nanograte Knowledge Technologies via AGI
I'm beginning to think that consciousness is the pathway to intelligence. Bear 
with me. At first, it sounds "illogical". However, if we entertain the 
notion that experience-based learning is practically impossible without 
consciousness, then it becomes logical.

If we need both consciousness towards intelligence, and intelligence towards 
consciousness, it becomes possible to merge peer objects into a single object 
we may call 'consciousnessintelligence'.

A construct of consciousnessintelligence may bring us closer to visualizing the 
emergence of qualia. For purposes of this discussion, I'll attempt a definition 
of qualia. I'll take my cue from the theory of general relativity and how that 
pertains to energy in a holistic sense.

Suppose the human brain functions as a cosmically-linked torus; it should 
follow that it generates electro-magnetic energy and is imbued with structure 
for polarity. Such polarity may flow over a binary fractal.

In a simplistic view, let's assume this toroidal structure, simultaneously in a 
geometrically fractal interaction (in the sense of an active brain) vibrates 
its frequency as entangled particle-wave encapsulated information with its 
environment. For the brain, this interaction may manifest as e-fields in action.

So we have matter and anti-matter conjoining as a hot and cold force-field to 
be interactively polarized in such a way as to generate a singular version of 
that event interaction (the way = a method - being influenced by a particular 
entity's subject position relative to the timespace continuum).  Furthermore, 
this version - as information -  is subsequently embedded within the unique 
consciousnessintelligence of the host, probably synchronously.

Still, to me that is not qualia, yet. However, it might be a constructive 
mechanism for qualia, in terms of meta qualia.

When the consciousnessintelligence reacts to the polarity-based stimulus, it 
automatically responds with an inherent objective to achieve a wait state of 
equilibrium. In so doing, it may activate the energy flow within the torus-like 
brain, and utilizing the stimulus as dynamic trigger, flow out an 
informationally-rich pulse to be made observable as thought, action, feeling, 
hunch, gut feel, or whatever form of predisposed energy envelope.

What I have tried to describe thus far is but a single step of a 
chain-reaction, the DNA also acting as a dominator/recorder (multiple roles), 
which when it flows towards the opposite polarity passes "through" a discrete 
point on a stochastic scale of degrees of consciousnessintelligence. This 
discrete point may be as brief as a flash of light. We may call this moment 
optimal consciousness, or knowing.

As such, knowing is an outcome of consciousnessintelligence, and so is the 
function of explaining. Is explaining, knowing?

Given the meta qualia (the structure as discussed), all aspects of brain 
functioning are 100% enabled to make this moment happen. One should possibly 
visualize this structure-in-operation as a flower that is one with itself, 
interconnected with its environment and purposed to always be on standby for 
knowing.

Qualia then to me would be that "flash of light", as the moment of knowing. 
Esoterically, I'd say qualia is that absolute moment when individual, 
consciousnessintelligence potential is realized.

Computational models already exist for most of the components and functionality 
I mentioned. As such, I think it has total relevance for the step-by-step 
development of an AGI model.

Thoughts?

Rob





From: Jim Bromer via AGI 
Sent: Monday, 24 September 2018 8:02 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

Matt's response - like an adolescent's flip remark - is evidence of
the kind of denial that I mentioned.
Jim Bromer

On Mon, Sep 24, 2018 at 10:49 AM Matt Mahoney via AGI
 wrote:
>
> I wrote a simple reinforcement learner which includes the line of code:
>
> printf("Ouch!\n");
>
> So I don't see communication of qualia as a major obstacle to AGI.
>
> Or do you mean something else by qualia?
>
>
> On Mon, Sep 24, 2018, 5:21 AM John Rose  wrote:
>> > -Original Message-
>> > From: Matt Mahoney via AGI 
>> >
>> > I was applying John's definition of qualia, not agreeing with it. My 
>> > definition is
>> > qualia is what perception feels like. Perception and feelings are both
>> > computable. But the feelings condition you to believing there is something
>> > magical and mysterious about it.
>> >
>>
>> And what I'm saying is that the communication of qualia is important for 
>> general intelligence in a system of agents. And how do agents interpret the 
>> signals, process and recommunicate them.
>>
>> But

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Jim Bromer via AGI
Matt's response - like an adolescent's flip remark - is evidence of
the kind of denial that I mentioned.
Jim Bromer

On Mon, Sep 24, 2018 at 10:49 AM Matt Mahoney via AGI
 wrote:
>
> I wrote a simple reinforcement learner which includes the line of code:
>
> printf("Ouch!\n");
>
> So I don't see communication of qualia as a major obstacle to AGI.
>
> Or do you mean something else by qualia?
>
>
> On Mon, Sep 24, 2018, 5:21 AM John Rose  wrote:
>> > -Original Message-
>> > From: Matt Mahoney via AGI 
>> >
>> > I was applying John's definition of qualia, not agreeing with it. My 
>> > definition is
>> > qualia is what perception feels like. Perception and feelings are both
>> > computable. But the feelings condition you to believing there is something
>> > magical and mysterious about it.
>> >
>> 
>> And what I'm saying is that the communication of qualia is important for 
>> general intelligence in a system of agents. And how do agents interpret the 
>> signals, process and recommunicate them.
>> 
>> But without fully understanding qualia since they're intimately intrinsic to 
>> agent experience we can still explore their properties by answering 
>> questions such as: What is an expression of the information distance between 
>> qualia of differing agents with same stimuli? How do qualia map to modeled 
>> environment? How do they change over time in a system of learning agents? 
>> What is the compressional loss into communication? And how do multi-agent 
>> models change over time from communicated and decompressed qualia.
>> 
>> And what is the topology of qualia variance within an agent related to the 
>> complexity classes of environmental strategy?
>> 
>> And move on to questions such as can there be enhancements to agent language 
>> to accelerate learning in a simulated system? And enhancements to agent 
>> structure?
>> 
>> John
>> 
>
> Artificial General Intelligence List / AGI / see discussions + participants + 
> delivery options Permalink

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Me45b62798fa4720a94d77ee2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Matt Mahoney via AGI
I wrote a simple reinforcement learner which includes the line of code:

printf("Ouch!\n");

So I don't see communication of qualia as a major obstacle to AGI.

Or do you mean something else by qualia?
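The learner itself isn't reproduced here, so purely as a sketch of the kind of program described (environment, constants, and names are my own): a two-action bandit learner whose negative-reward branch is exactly that printf.

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      float value[2] = {0.0f, 0.0f};  /* learned estimate per action */
      int t;
      for (t = 0; t < 100; t++) {
          /* epsilon-greedy: mostly exploit, occasionally explore */
          int a = (rand() % 10 == 0) ? rand() % 2 : (value[1] > value[0]);
          /* hidden environment: action 1 usually hurts, action 0 usually pays */
          int reward = (a == 1) ? ((rand() % 4) ? -1 : 1)
                                : ((rand() % 4) ? 1 : -1);
          if (reward < 0) printf("Ouch!\n");  /* the communicated "quale" */
          value[a] += 0.1f * ((float)reward - value[a]);
      }
      printf("learned values: %.2f %.2f\n", value[0], value[1]);
      return 0;
  }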


On Mon, Sep 24, 2018, 5:21 AM John Rose  wrote:

> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > I was applying John's definition of qualia, not agreeing with it. My
> definition is
> > qualia is what perception feels like. Perception and feelings are both
> > computable. But the feelings condition you to believing there is
> something
> > magical and mysterious about it.
> >
> 
> And what I'm saying is that the communication of qualia is important for
> general intelligence in a system of agents. And how do agents interpret the
> signals, process and recommunicate them.
> 
> But without fully understanding qualia since they're intimately intrinsic
> to agent experience we can still explore their properties by answering
> questions such as: What is an expression of the information distance
> between qualia of differing agents with same stimuli? How do qualia map to
> modeled environment? How do they change over time in a system of learning
> agents? What is the compressional loss into communication? And how do
> multi-agent models change over time from communicated and decompressed
> qualia.
> 
> And what is the topology of qualia variance within an agent related to the
> complexity classes of environmental strategy?
> 
> And move on to questions such as can there be enhancements to agent
> language to accelerate learning in a simulated system? And enhancements to
> agent structure?
> 
> John
> 
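John's first question, the information distance between two agents' qualia under the same stimuli, does have a standard computable approximation: normalized compression distance, NCD(x,y) = (C(xy) - min(C(x),C(y))) / max(C(x),C(y)), where C is compressed size. A zlib sketch of mine, treating each agent's encoding of the stimulus as a byte string (build with -lz; nothing here was proposed in the thread itself):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <zlib.h>

  static unsigned long csize(const unsigned char *p, unsigned long n) {
      unsigned long out = compressBound(n);
      unsigned char *buf = malloc(out);
      compress(buf, &out, p, n);
      free(buf);
      return out;
  }

  /* NCD(x,y) = (C(xy) - min(C(x),C(y))) / max(C(x),C(y)) */
  static double ncd(const unsigned char *x, unsigned long nx,
                    const unsigned char *y, unsigned long ny) {
      unsigned char *xy = malloc(nx + ny);
      unsigned long cx, cy, cxy, lo, hi;
      memcpy(xy, x, nx);
      memcpy(xy + nx, y, ny);
      cx = csize(x, nx); cy = csize(y, ny); cxy = csize(xy, nx + ny);
      free(xy);
      lo = cx < cy ? cx : cy; hi = cx < cy ? cy : cx;
      return (double)(cxy - lo) / (double)hi;
  }

  int main(void) {
      static unsigned char a[4096], b[4096];  /* two agents, same stimuli */
      int i;
      for (i = 0; i < 4096; i++) { a[i] = i % 61; b[i] = (i % 61) ^ 1; }
      printf("NCD = %.3f (near 0: alike; near 1: unrelated)\n",
             ncd(a, sizeof a, b, sizeof b));
      return 0;
  }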

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M2f5ee95344c2383d5154fb35
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Jim Bromer via AGI
John,
There are aspects of the intelligent understanding of the world
(universe of things and ideas) that can be modelled and simulated. I
think this is computable in an AI program except the problem of
complexity would slow the modelling down so much that it would not be
effective enough (at this time) to seem very intelligent. Not only do
we need to be able to abstract and communicate concepts we also need
to be able to integrate, synthesize, and test those abstractions.
Since these tests would not be exact, complete, or even very thorough
(for the most part) that means that our computer program would have to
be able to retain variations on these concepts and some knowledge how
the different variations might be used. If it weren't for the problem
of complexity the programmer could begin testing ways to do this
effectively.

But I still disagree with what you are saying. An artificial agent
will not be able to experience qualia because it will lack that
mysterious aspect of intelligence that allows us to sense certain
things in the way we do. So it would be able to distinguish blue from
red (as long as there was some kind of light to see the colors of
course) but it would not experience haptic sensory input in a way that
was fundamentally different from the way that it experiences red and
blue. For a computer it is just data. It may be presented in different
formats but there is no qualia (as I understand the concept.) It may
be programmed to simulate the communication as if it were dealing with
qualia, but it would be pure simulation.

From Wikipedia: "
In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or
/ˈkweɪliə/; singular form: quale) are defined to be individual
instances of subjective, conscious experience. The term qualia derives
from the Latin neuter plural form (qualia) of the Latin adjective
quālis (Latin pronunciation: [ˈkʷaːlɪs]) meaning "of what sort" or "of
what kind" in a specific instance like "what it is like to taste a
specific apple, this particular apple now".
Examples of qualia include the perceived sensation of pain of a
headache, the taste of wine, as well as the redness of an evening sky.
As qualitative characters of sensation, qualia stand in contrast to
"propositional attitudes", where the focus is on beliefs about
experience rather than what it is directly like to be experiencing.
Philosopher and cognitive scientist Daniel Dennett once suggested that
qualia was "an unfamiliar term for something that could not be more
familiar to each of us: the ways things seem to us"."

So qualia stands in contrast to propositional attitudes on the beliefs
about experience but it also, as I understand it, stands in contrast
to 'data' that may be transmitted by sensors (or the product of the
computational analysis of that data).

Can a computer program which feels no pain, pleasure, desire, joy,
curiosity, or relief be able to learn through conditioning? It can
learn through an artificial conditioning but since higher intelligence
requires the ability to think independently, the simulation of
conditioning will not hold the same qualia of experience that it holds
for us. I can predict that this is not as interesting to me as it
will be for you. But the question that does interest me more is
whether this lack of animal qualia of experience might make true
intelligence impossible for AI. An AI program is going to lack an
massive range of experience that it might talk about but never have
the understanding that comes from experience. I do not think that
means AGI is impossible except from a technical standpoint. It is an
entire range of general intelligence that it might know something
about but never experience. And it is an important part of animal
experience.

Jim Bromer

On Mon, Sep 24, 2018 at 7:20 AM John Rose  wrote:
> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > I was applying John's definition of qualia, not agreeing with it. My 
> > definition is
> > qualia is what perception feels like. Perception and feelings are both
> > computable. But the feelings condition you to believing there is something
> > magical and mysterious about it.
> >
> 
> And what I'm saying is that the communication of qualia is important for 
> general intelligence in a system of agents. And how do agents interpret the 
> signals, process and recommunicate them.
> 
> But without fully understanding qualia since they're intimately intrinsic to 
> agent experience we can still explore their properties by answering questions 
> such as: What is an expression of the information distance between qualia of 
> differing agents with same stimuli? How do qualia map to modeled environment? 
> How do they change over time in a system of learning agents? What is the 
> compressional loss into communication? And how do multi-agent models change 
> over time from communicated and decompressed qualia.
> 
> And what is the topology of qualia variance within an agent related to the 
> 

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Jim Bromer via AGI
Well I see a problem. I cannot find a test to demonstrate the
mysterious part of conscious experience that most of us sense because, from
my point of view, the science to detect it has not been developed, and Matt
would not be able to find a scientific test to prove that it does not exist
because, from his point of view, there is nothing to test for.
Jim Bromer


On Sun, Sep 23, 2018 at 4:09 PM Jim Bromer  wrote:

> I do not mean to come across as being unpleasant or dismissing other
> people's crackpot ideas. It's a big pot and all are welcome to share.
> The constructivists argued that it was not enough to show that there was
> no evidence to support a proposition; you also had to find evidence to
> support the negation of the proposition. So I cannot come up with a way to
> find an experiment to draw evidence supporting the proposition that the
> mysterious part of conscious experience cannot be explained by contemporary
> science (because contemporary science does not have the tools to conduct
> such an experiment). But now, if Matt wants to prove that it is not real
> then he would have to come up with experiments to derive evidence that it
> is not. At the least he has to show that contemporary science can be used
> to begin testing his counter-hypothesis and show that the evidence
> his experiments could produce is profound if not overwhelming.
> Jim Bromer
>
>
> On Sun, Sep 23, 2018 at 3:29 PM Nanograte Knowledge Technologies via AGI <
> agi@agi.topicbox.com> wrote:
>
>> Matt
>>
>> One could easily argue that on the basis that you cannot scientifically
>> prove any of the absolute assertions you just made, that you cannot be
>> correct, but to what end? Let's unpack your perspective.
>>
>> 1) "Science doesn't explain everything." vs Science explains everything
>> it explains.
>> 2) => Philosophy explains why the universe exists vs Can you explain
>> which universe within science you are referring to?
>> 3) "It exists as a necessary condition for the question to exist." vs It
>> only exists because humankind thought of it as an original thought.
>> 4) "Science can explain...[X]" vs That's what we already stated and
>> implied.
>> 5) "Science explains that your brain runs a program." vs So does
>> philosophy. Are they both equally correct, therefore philosophy = science?
>>
>> Inter alia, you still did not explain anything much, did you?
>>
>> Rob
>> --
>> *From:* Matt Mahoney via AGI 
>> *Sent:* Sunday, 23 September 2018 4:02 PM
>> *To:* AGI
>> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
>> (Intelligence=math*consciousness^2 ?)
>>
>> Science doesn't explain everything. It just tries to. It doesn't explain
>> why the universe exists. Philosophy does. It exists as a necessary
>> condition for the question to exist.
>>
>> Chalmers is mystified by consciousness like most of us are. Science can
>> explain why we are mystified. It explains why it seems like a hard problem.
>> It explains why you keep asking the wrong question. Math explains why no
>> program can model itself. Science explains that your brain runs a program.
>>
>> On Sat, Sep 22, 2018, 11:53 AM Nanograte Knowledge Technologies via AGI <
>> agi@agi.topicbox.com> wrote:
>>
>> Perhaps not explain then, but we could access accepted theory on a topic
>> via a sound method of integration in order to construct enough of an
>> understanding to at least be sure what we are talking about. Should we not
>> at least be trying to do that?
>>
>> Maybe it's a case of no one really being curious and passionate enough to
>> take the time to do the semantic work. Or is it symbolic of another
>> problem? I think it's a very brave thing to talk publicly about a subject
>> we all agree we seemingly know almost nothing about. Yet, we should at
>> least try to do that as well.
>>
>> Therefore, to explain is to know?
>>
>> Rob
>> --
>> *From:* Jim Bromer via AGI 
>> *Sent:* Saturday, 22 September 2018 6:12 PM
>> *To:* AGI
>> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
>> (Intelligence=math*consciousness^2 ?)
>>
>> The theory that contemporary science can explain everything requires a
>> fundamental denial of history and a kind of denial about the limits of
contemporary science. That sort of denial of common knowledge is ill suited
>> for adaptation. It will interfere with your ability to use scientific
>> method.
>> Jim Bromer
>>
>>
>> On Sat, Sep 22, 2

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-23 Thread Jim Bromer via AGI
I do not mean to come across as being unpleasant or dismissing other
people's crackpot ideas. It's a big pot and all are welcome to share.
The constructivists argued that it was not enough to show that there was
no evidence to support a proposition; you also had to find evidence to
support the negation of the proposition. So I cannot come up with a way to
find an experiment to draw evidence supporting the proposition that the
mysterious part of conscious experience cannot be explained by contemporary
science (because contemporary science does not have the tools to conduct
such an experiment). But now, if Matt wants to prove that it is not real
then he would have to come up with experiments to derive evidence that it
is not. At the least he has to show that contemporary science can be used
to begin testing his counter-hypothesis and show that the evidence
his experiments could produce is profound if not overwhelming.
Jim Bromer


On Sun, Sep 23, 2018 at 3:29 PM Nanograte Knowledge Technologies via AGI <
agi@agi.topicbox.com> wrote:

> Matt
>
> One could easily argue that on the basis that you cannot scientifically
> prove any of the absolute assertions you just made, that you cannot be
> correct, but to what end? Let's unpack your perspective.
>
> 1) "Science doesn't explain everything." vs Science explains everything it
> explains.
> 2) => Philosophy explains why the universe exists vs Can you explain which
> universe within science you are referring to?
> 3) "It exists as a necessary condition for the question to exist." vs It
> only exists because humankind thought of it as an original thought.
> 4) "Science can explain...[X]" vs That's what we already stated and
> implied.
> 5) "Science explains that your brain runs a program." vs So does
> philosophy. Are they both equally correct, therefore philosophy = science?
>
> Inter alia, you still did not explain anything much, did you?
>
> Rob
> ------
> *From:* Matt Mahoney via AGI 
> *Sent:* Sunday, 23 September 2018 4:02 PM
> *To:* AGI
> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
> (Intelligence=math*consciousness^2 ?)
>
> Science doesn't explain everything. It just tries to. It doesn't explain
> why the universe exists. Philosophy does. It exists as a necessary
> condition for the question to exist.
>
> Chalmers is mystified by consciousness like most of us are. Science can
> explain why we are mystified. It explains why it seems like a hard problem.
> It explains why you keep asking the wrong question. Math explains why no
> program can model itself. Science explains that your brain runs a program.
>
> On Sat, Sep 22, 2018, 11:53 AM Nanograte Knowledge Technologies via AGI <
> agi@agi.topicbox.com> wrote:
>
> Perhaps not explain then, but we could access accepted theory on a topic
> via a sound method of integration in order to construct enough of an
> understanding to at least be sure what we are talking about. Should we not
> at least be trying to do that?
>
> Maybe it's a case of no one really being curious and passionate enough to
> take the time to do the semantic work. Or is it symbolic of another
> problem? I think it's a very brave thing to talk publicly about a subject
> we all agree we seemingly know almost nothing about. Yet, we should at
> least try to do that as well.
>
> Therefore, to explain is to know?
>
> Rob
> --
> *From:* Jim Bromer via AGI 
> *Sent:* Saturday, 22 September 2018 6:12 PM
> *To:* AGI
> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
> (Intelligence=math*consciousness^2 ?)
>
> The theory that contemporary science can explain everything requires a
> fundamental denial of history and a kind of denial about the limits of
> contemporary science. That sort of denial of common knowledge is ill suited
> for adaptation. It will interfere with your ability to use scientific
> method.
> Jim Bromer
>
>
> On Sat, Sep 22, 2018 at 11:42 AM Jim Bromer  wrote:
>
> Qualia is what perceptions feel like and feelings are computable and they
> condition us to believe there is something magical and mysterious about it?
> This is science fiction. So science has already explained Chalmers' Hard
> Problem of Consciousness. He just got it wrong? Is that what you are
> saying?
> Jim Bromer
>
>
> On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI <
> agi@agi.topicbox.com> wrote:
>
> I was applying John's definition of qualia, not agreeing with it. My
> definition is qualia is what perception feels like. Perception and feelings
> are both computable. But the feelings condition you to believing there is
> something magical and mysterious about it.
>
> On Sa

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-23 Thread Nanograte Knowledge Technologies via AGI
Matt

One could easily argue that on the basis that you cannot scientifically prove 
any of the absolute assertions you just made, that you cannot be correct, but 
to what end? Let's unpack your perspective.

1) "Science doesn't explain everything." vs Science explains everything it 
explains.
2) => Philosophy explains why the universe exists vs Can you explain which 
universe within science you are referring to?
3) "It exists as a necessary condition for the question to exist." vs It only 
exists because humankind thought of it as an original thought.
4) "Science can explain...[X]" vs That's what we already stated and implied.
5) "Science explains that your brain runs a program." vs So does philosophy. 
Are they both equally correct, therefore philosophy = science?

Inter alia, you still did not explain anything much, did you?

Rob

From: Matt Mahoney via AGI 
Sent: Sunday, 23 September 2018 4:02 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

Science doesn't explain everything. It just tries to. It doesn't explain why 
the universe exists. Philosophy does. It exists as a necessary condition for 
the question to exist.

Chalmers is mystified by consciousness like most of us are. Science can explain 
why we are mystified. It explains why it seems like a hard problem. It explains 
why you keep asking the wrong question. Math explains why no program can model 
itself. Science explains that your brain runs a program.
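"Math explains why no program can model itself" can be cashed out by the same diagonalization as the prediction argument; this formal gloss is mine, not necessarily the argument Matt has in mind:

  \[
  \text{Suppose } P \text{ is total and } P(Q) = Q() \text{ for every halting program } Q.
  \]
  \[
  \text{Let } D() = 1 - P(D). \text{ Then } P(D) = D() = 1 - P(D),
  \]
  \[
  \text{a contradiction, so no such universal modeler } P \text{ exists.}
  \]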

On Sat, Sep 22, 2018, 11:53 AM Nanograte Knowledge Technologies via AGI <agi@agi.topicbox.com> wrote:
Perhaps not explain then, but we could access accepted theory on a topic via a 
sound method of integration in order to construct enough of an understanding to 
at least be sure what we are talking about. Should we not at least be trying to 
do that?

Maybe it's a case of no one really being curious and passionate enough to take 
the time to do the semantic work. Or is it symbolic of another problem? I think 
it's a very brave thing to talk publicly about a subject we all agree we 
seemingly know almost nothing about. Yet, we should at least try to do that as 
well.

Therefore, to explain is to know?

Rob

From: Jim Bromer via AGI <agi@agi.topicbox.com>
Sent: Saturday, 22 September 2018 6:12 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

The theory that contemporary science can explain everything requires a 
fundamental denial of history and a kind of denial about the limits of 
contemporary science. That sort of denial of common knowledge is ill suited for 
adaptation. It will interfere with your ability to use scientific method.
Jim Bromer


On Sat, Sep 22, 2018 at 11:42 AM Jim Bromer <jimbro...@gmail.com> wrote:
Qualia is what perceptions feel like and feelings are computable and they 
condition us to believe there is something magical and mysterious about it? 
This is science fiction. So science has already explained Chalmers' Hard 
Problem of Consciousness. He just got it wrong? Is that what you are saying?
Jim Bromer


On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI <agi@agi.topicbox.com> wrote:
I was applying John's definition of qualia, not agreeing with it. My definition 
is qualia is what perception feels like. Perception and feelings are both 
computable. But the feelings condition you to believing there is something 
magical and mysterious about it.

On Sat, Sep 22, 2018, 8:44 AM Jim Bromer via AGI <agi@agi.topicbox.com> wrote:
There is a distinction between the qualia of human experience and the 
consciousness of what the mind is presenting. If you deny that you have the 
kind of experience that Chalmers talks about then there is a question of why 
you are denying it. So your remarks are relevant to AGI but not the way you are 
talking about them. If Matt says qualia is not real then he is saying that it 
is imaginary because I am pretty sure that he experiences things in ways 
similar to Chalmers and a lot of other people I have talked to. There are 
people who have claimed that I would not be able to create an artificial 
imagination. That is nonsense. An artificial imagination is easy. The 
complexity of doing that well is not. That does not mean however, that the hard 
problem of consciousness is just complexity. One is doable in computer 
programming, in spite of any skepticism, the other is not.
Jim Bromer


On Sat, Sep 22, 2018 at 10:03 AM John Rose <johnr...@polyplexic.com> wrote:
> -Original Message-
> From: Jim Bromer via AGI <agi@agi.topicbox.com>
>
> So John's attempt to create a definition of compression of something
> complicated so that it can be communicated might be the start of the
> development of something related to conte

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-23 Thread Jim Bromer via AGI
That is dismissive. You are dismissing the very idea that there may be
more to the experience of consciousness than contemporary science can
explain. As I mentioned before, Marvin Minsky (a well known
self-proclaimed atheist) and I agreed that the mystery of conscious
will be explained by science one day. I would restate that as it
probably will be explained by science one day. Education may make you
aware of the question but it does not require conditioning.. I
appreciate the fact that you were honest about this because this
explains it to me. I can never understand how people who understand
what I am trying to get at can deny something that seems so
fundamentally real. And here it is. You are in philosophical denial of
your own experience. It is like psychological denial of a feeling, but
in this case it is more of a philosophical thing. When someone is in
denial about something you can expect a range of behaviors from him.
Most of them are a kind of erratic jag when he gets close to
recognizing the problem and he tries to fight it off.
Incidentally, I appreciate the idea that your own feelings may
condition you. I used to argue with quasi-behaviorists about that. By
using our imagination and memories we can condition ourselves in all
sorts of ways (both good and mal-adaptive). That is an opinion that I
think is very important to AGI. And disturbing.
Jim Bromer

On Sun, Sep 23, 2018 at 10:03 AM Matt Mahoney via AGI
 wrote:
>
> Science doesn't explain everything. It just tries to. It doesn't explain why 
> the universe exists. Philosophy does. It exists as a necessary condition for 
> the question to exist.
>
> Chalmers is mystified by consciousness like most of us are. Science can 
> explain why we are mystified. It explains why it seems like a hard problem. 
> It explains why you keep asking the wrong question. Math explains why no 
> program can model itself. Science explains that your brain runs a program.
>
> On Sat, Sep 22, 2018, 11:53 AM Nanograte Knowledge Technologies via AGI 
>  wrote:
>>
>> Perhaps not explain then, but we could access accepted theory on a topic via 
>> a sound method of integration in order to construct enough of an 
>> understanding to at least be sure what we are talking about. Should we not 
>> at least be trying to do that?
>>
>> Maybe it's a case of no one really being curious and passionate enough to 
>> take the time to do the semantic work. Or is it symbolic of another problem? 
>> I think it's a very brave thing to talk publicly about a subject we all 
>> agree we seemingly know almost nothing about. Yet, we should at least try to 
>> do that as well.
>>
>> Therefore, to explain is to know?
>>
>> Rob
>> ____________________
>> From: Jim Bromer via AGI 
>> Sent: Saturday, 22 September 2018 6:12 PM
>> To: AGI
>> Subject: Re: [agi] E=mc^2 Morphism Musings... 
>> (Intelligence=math*consciousness^2 ?)
>>
>> The theory that contemporary science can explain everything requires a 
>> fundamental denial of history and a kind of denial about the limits of 
>> contemporary science. That sort of denial of common knowledge is ill suited 
>> for adaptation. It will interfere with your ability to use scientific method.
>> Jim Bromer
>>
>>
>> On Sat, Sep 22, 2018 at 11:42 AM Jim Bromer  wrote:
>>
>> Qualia is what perceptions feel like and feelings are computable and they 
>> condition us to believe there is something magical and mysterious about it? 
>> This is science fiction. So science has already explained Chalmers's Hard 
>> Problem of Consciousness. He just got it wrong? Is that what you are saying?
>> Jim Bromer
>>
>>
>> On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI  
>> wrote:
>>
>> I was applying John's definition of qualia, not agreeing with it. My 
>> definition is qualia is what perception feels like. Perception and feelings 
>> are both computable. But the feelings condition you to believing there is 
>> something magical and mysterious about it.
>>
>> On Sat, Sep 22, 2018, 8:44 AM Jim Bromer via AGI  
>> wrote:
>>
>> There is a distinction between the qualia of human experience and the 
>> consciousness of what the mind is presenting. If you deny that you have the 
>> kind of experience that Chalmers talks about then there is a question of why 
>> are you denying it. So your remarks are relevant to AGI but not the way you 
>> are talking about them. If Matt says qualia is not real then he is saying 
>> that it is imaginary because I am pretty sure that he experiences things in 
>> ways similar to Chalmers and a lot of other people I have talked to. There 
>> are people who have claimed that I would not be able to create an 
>> artificial imagination. That is nonsense. An artificial imagination is 
>> easy. The complexity of doing that well is not. That does not mean 
>> however, that the hard problem of consciousness is just complexity. One 
>> is doable in computer programming, in spite of any skepticism, the 
>> other is not.

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-23 Thread Matt Mahoney via AGI
Science doesn't explain everything. It just tries to. It doesn't explain
why the universe exists. Philosophy does. It exists as a necessary
condition for the question to exist.

Chalmers is mystified by consciousness like most of us are. Science can
explain why we are mystified. It explains why it seems like a hard problem.
It explains why you keep asking the wrong question. Math explains why no
program can model itself. Science explains that your brain runs a program.

On Sat, Sep 22, 2018, 11:53 AM Nanograte Knowledge Technologies via AGI <
agi@agi.topicbox.com> wrote:

> Perhaps not explain then, but we could access accepted theory on a topic
> via a sound method of integration in order to construct enough of an
> understanding to at least be sure what we are talking about. Should we not
> at least be trying to do that?
>
> Maybe it's a case of no one really being curious and passionate enough to
> take the time to do the semantic work. Or is it symbolic of another
> problem? I think it's a very brave thing to talk publicly about a subject
> we all agree we seemingly know almost nothing about. Yet, we should at
> least try to do that as well.
>
> Therefore, to explain is to know?
>
> Rob
> --
> *From:* Jim Bromer via AGI 
> *Sent:* Saturday, 22 September 2018 6:12 PM
> *To:* AGI
> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
> (Intelligence=math*consciousness^2 ?)
>
> The theory that contemporary science can explain everything requires a
> fundamental denial of history and a kind of denial about the limits of
> contemporary science. That sort of denial of common knowledge is ill suited
> for adaptation. It will interfere with your ability to use scientific
> method.
> Jim Bromer
>
>
> On Sat, Sep 22, 2018 at 11:42 AM Jim Bromer  wrote:
>
> Qualia is what perceptions feel like and feelings are computable and they
> condition us to believe there is something magical and mysterious about it?
> This is science fiction. So science has already explained Chalmers's Hard
> Problem of Consciousness. He just got it wrong? Is that what you are
> saying?
> Jim Bromer
>
>
> On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI <
> agi@agi.topicbox.com> wrote:
>
> I was applying John's definition of qualia, not agreeing with it. My
> definition is qualia is what perception feels like. Perception and feelings
> are both computable. But the feelings condition you to believing there is
> something magical and mysterious about it.
>
> On Sat, Sep 22, 2018, 8:44 AM Jim Bromer via AGI 
> wrote:
>
> There is a distinction between the qualia of human experience and the
> consciousness of what the mind is presenting. If you deny that you have the
> kind of experience that Chalmers talks about then there is a question of
> why are you denying it. So your remarks are relevant to AGI but not the way
> you are talking about them. If Matt says qualia is not real then he is
> saying that it is imaginary because I am pretty sure that he experiences
> things in ways similar to Chalmers and a lot of other people I have talked
> to. There are people who have claimed that I would not be able to create an
> artificial imagination. That is nonsense. An artificial imagination is
> easy. The complexity of doing that well is not. That does not mean however,
> that the hard problem of consciousness is just complexity. One is doable in
> computer programming, in spite of any skepticism, the other is not.
> Jim Bromer
>
>
> On Sat, Sep 22, 2018 at 10:03 AM John Rose 
> wrote:
>
> > -Original Message-
> > From: Jim Bromer via AGI 
> >
> > So John's attempt to create a definition of compression of something
> > complicated so that it can be communicated might be the start of the
> > development of something related to contemporary AI but the attempt to
> > claim that it defines qualia is so naïve that it is not really relevant
> to the subject
> > of AGI.
> 
> 
> Jim,
> 
> Engineers make a box with lines on it and label it "magic goes here".
> 
> Algorithmic information theory does the same and calls it compression.
> 
> The word "subjective" is essentially the same thing.
> 
> AGI requires sensory input.
> 
> Human beings are compression engines... each being unique.
> 
> The system of human beings is a general intelligence.
> 
> How do people communicate?
> 
> What are the components of AGI and how will they sense and communicate?
> 
> The guys who came up with the term "qualia" left it as a mystery so it's
> getting hijacked. This is extremely common with scientific terminology.
> 
> BTW I'm not trying to define it universally, just as something
> computationally useful in AGI R&D, but it's starting to seem like it's
> central.

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-22 Thread Jim Bromer via AGI
The theory that contemporary science can explain everything requires a
fundamental denial of history and a kind of denial about the limits of
contemporary science. That sort of denial of common knowledge is ill suited
for adaptation. It will interfere with your ability to use scientific
method.
Jim Bromer


On Sat, Sep 22, 2018 at 11:42 AM Jim Bromer  wrote:

> Qualia is what perceptions feel like and feelings are computable and they
> condition us to believe there is something magical and mysterious about it?
> This is science fiction. So science has already explained Chalmers's Hard
> Problem of Consciousness. He just got it wrong? Is that what you are
> saying?
> Jim Bromer
>
>
> On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI <
> agi@agi.topicbox.com> wrote:
>
>> I was applying John's definition of qualia, not agreeing with it. My
>> definition is qualia is what perception feels like. Perception and feelings
>> are both computable. But the feelings condition you to believing there is
>> something magical and mysterious about it.
>>
>> On Sat, Sep 22, 2018, 8:44 AM Jim Bromer via AGI 
>> wrote:
>>
>>> There is a distinction between the qualia of human experience and the
>>> consciousness of what the mind is presenting. If you deny that you have the
>>> kind of experience that Chalmers talks about then there is a question of
>>> why are you denying it. So your remarks are relevant to AGI but not the way
>>> you are talking about them. If Matt says qualia is not real then he is
>>> saying that it is imaginary because I am pretty sure that he experiences
>>> things in ways similar to Chalmers and a lot of other people I have talked
>>> to. There are people who have claimed that I would not be able to create an
>>> artificial imagination. That is nonsense. An artificial imagination is
>>> easy. The complexity of doing that well is not. That does not mean however,
>>> that the hard problem of consciousness is just complexity. One is doable in
>>> computer programming, in spite of any skepticism, the other is not.
>>> Jim Bromer
>>>
>>>
>>> On Sat, Sep 22, 2018 at 10:03 AM John Rose 
>>> wrote:
>>>
 > -Original Message-
 > From: Jim Bromer via AGI 
 >
 > So John's attempt to create a definition of compression of something
 > complicated so that it can be communicated might be the start of the
 > development of something related to contemporary AI but the attempt to
 > claim that it defines qualia is so naïve that it is not really
 relevant to the subject
 > of AGI.
 
 
 Jim,
 
 Engineers make a box with lines on it and label it "magic goes here".
 
 Algorithmic information theory does the same and calls it compression.
 
 The word "subjective" is essentially the same thing.
 
 AGI requires sensory input.
 
 Human beings are compression engines... each being unique.
 
 The system of human beings is a general intelligence.
 
 How do people communicate?
 
 What are the components of AGI and how will they sense and communicate?
 
 The guys who came up with the term "qualia" left it as a mystery so
 it's getting hijacked. This is extremely common with scientific
 terminology.
 
 BTW I'm not trying to define it universally just as something
 computationally useful in AGI R&D but it's starting to seem like it's
 central. Thus IIT? Tononi or Tononi-like? Not sure... have to do some
 reading but prefer coming up with something simple and practical without
 getting too ethereal. TGD consciousness is pretty interesting though and a
 good exercise in some mind-bending reading.
 
 John
 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Md1ddceebbd43c8666114da5e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-22 Thread Jim Bromer via AGI
Qualia is what perceptions feel like and feelings are computable and they
condition us to believe there is something magical and mysterious about it?
This is science fiction. So science has already explained Chalmers's Hard
Problem of Consciousness. He just got it wrong? Is that what you are
saying?
Jim Bromer


On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI 
wrote:

> I was applying John's definition of qualia, not agreeing with it. My
> definition is qualia is what perception feels like. Perception and feelings
> are both computable. But the feelings condition you to believing there is
> something magical and mysterious about it.
>
> On Sat, Sep 22, 2018, 8:44 AM Jim Bromer via AGI 
> wrote:
>
>> There is a distinction between the qualia of human experience and the
>> consciousness of what the mind is presenting. If you deny that you have the
>> kind of experience that Chalmers talks about then there is a question of
>> why are you denying it. So your remarks are relevant to AGI but not the way
>> you are talking about them. If Matt says qualia is not real then he is
>> saying that it is imaginary because I am pretty sure that he experiences
>> things in ways similar to Chalmers and a lot of other people I have talked
>> to. There are people who have claimed that I would not be able to create an
>> artificial imagination. That is nonsense. An artificial imagination is
>> easy. The complexity of doing that well is not. That does not mean however,
>> that the hard problem of consciousness is just complexity. One is doable in
>> computer programming, in spite of any skepticism, the other is not.
>> Jim Bromer
>>
>>
>> On Sat, Sep 22, 2018 at 10:03 AM John Rose 
>> wrote:
>>
>>> > -Original Message-
>>> > From: Jim Bromer via AGI 
>>> >
>>> > So John's attempt to create a definition of compression of something
>>> > complicated so that it can be communicated might be the start of the
>>> > development of something related to contemporary AI but the attempt to
>>> > claim that it defines qualia is so naïve that it is not really
>>> relevant to the subject
>>> > of AGI.
>>> 
>>> 
>>> Jim,
>>> 
>>> Engineers make a box with lines on it and label it "magic goes here".
>>> 
>>> Algorithmic information theory does the same and calls it compression.
>>> 
>>> The word "subjective" is essentially the same thing.
>>> 
>>> AGI requires sensory input.
>>> 
>>> Human beings are compression engines... each being unique.
>>> 
>>> The system of human beings is a general intelligence.
>>> 
>>> How do people communicate?
>>> 
>>> What are the components of AGI and how will they sense and communicate?
>>> 
>>> The guys who came up with the term "qualia" left it as a mystery so
>>> it's getting hijacked. This is extremely common with scientific
>>> terminology.
>>> 
>>> BTW I'm not trying to define it universally just as something
>>> computationally useful in AGI R&D but it's starting to seem like it's
>>> central. Thus IIT? Tononi or Tononi-like? Not sure... have to do some
>>> reading but prefer coming up with something simple and practical without
>>> getting too ethereal. TGD consciousness is pretty interesting though and a
>>> good exercise in some mind-bending reading.
>>> 
>>> John
>>> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M6c7ecb53188d243c1b6fd10e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-22 Thread Jim Bromer via AGI
Let's say that someone says that quantum effects can explain qualia. I
might respond by saying that sort of speculation is not related to
contemporary computer science. Then I get the reply, What do you
mean?!! Computers are used heavily in quantum science Yes, so
computers are used to make quantum calculations (or whatever they are
called) but that does not mean the theory that quantum effects can be
used to explain qualia is something that can be calculated or
simulated with contemporary computers. There is a redefinition of what
I was referring to when I said, "that sort of speculation," and the
field of making calculations of quantum effect. Now suppose someone
comes up with a way to use calculated wave particle duality as a way
to explain some aspects of consciousness. Does that mean that the new
research now explains qualia? Of course not. Even if there was some
sort of breakthrough it still does not mean that sort of speculation
(qe can explain qualia) is now relevant to contemporary computer
science.
Jim Bromer

On Sat, Sep 22, 2018 at 8:38 AM Jim Bromer  wrote:
>
> But you are still missing the definition of qualia. Wikipedia has a
> thing on it and I am sure SEP does as well. Because there are reports
> of subjective experience we know that we share something of the nature
> of experience. Common sense can tell us that computers do not. How do
> we know that computers do not share the nature of conscious experience
> (Chalmers's hard problem of consciousness:
> https://en.wikipedia.org/wiki/Hard_problem_of_consciousness)? It is
> not an ontologically salient question for a group focused on
> technology. So it is relevant to the philosophical issues of
> intelligence, but once you get it you have to move on. It is not a
> fruitful discussion unless you can derive something interesting from
> it. There is no test for qualia because there is no explanation for
> it. A profound mystery cannot be reduced to a contrived technological
> test or else just be dismissed. That kind of thinking is not good
> science and it is not good philosophy.
> So John's attempt to create a definition of compression of something
> complicated so that it can be communicated might be the start of the
> development of something related to contemporary AI but the attempt to
> claim that it defines qualia is so naïve that it is not really
> relevant to the subject of AGI.
> When you have a profound mystery you have to create ways to examine
> it. This is related to AGI. How do you fit it in to other knowledge.
> What are the observations that you have to work with. What are the
> theories that you have that you can use to work with it. Can you
> measure it. Are there indirect ways to measure it. During these
> initial stages you have to expect that many of your initial ideas are
> going to be wrong or poorly constructed. The major motivation then
> should not to be to salvage some initial primitive theories but to
> reshape them completely. To test a hypothesis about a radical theory
> of a profound mystery you have to first create theories of how you
> might conduct your experiment. If your initial theories lead you to
> enact major redefinitions so that you change the subject of the
> theory, then that is a good sign that you are not ready to test the
> theory.
> Jim
> On Sat, Sep 22, 2018 at 8:11 AM John Rose  wrote:
> > > -Original Message-
> > > From: Nanograte Knowledge Technologies via AGI 
> > >
> > > That's according to John's definition thereof. The rest of us do not 
> > > necessarily
> > > agree with such a limited view. At this stage, it cannot be absolutely 
> > > stated
> > > what qualia is. For example, mine is a lot more fuzzy and abstract in 
> > > terms of
> > > autonomous, identifier signalling. And that is but one view of many 
> > > regarding
> > > a feature of biology, which I contend could ultimately be transposed into 
> > > a
> > > synthetically-framed platform as its own, unique version.
> > >
> >
> > "autonomous, identifier signaling"
> >
> > We are on a similar wavelength :) Compression is a big word. I've not 
> > talked about consciousness topology and kernels yet...
> >
> >
> > > One needs to define a term first, before trying to apply
> > > it to the collective consciousness of AGI.
> > >
> > 
> > I disagree. Many AGI researchers have two overwhelming biases:
> > 
> > One person is a general intelligence.
> > One person is a general consciousness.
> > 
> > Both I believe are false.
> > 
> > Seeing the forest when you are a tree requires an outside view.
> > 
> > John
> > 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M8a3b01d2ef92130bea1580d3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-22 Thread Jim Bromer via AGI
But you are still missing the definition of qualia. Wikipedia has a
thing on it and I am sure SEP does as well. Because there are reports
of subjective experience we know that we share something of the nature
of experience. Common sense can tell us that computers do not. How do
we know that computers do not share the nature of conscious experience
(Chalmers's hard problem of consciousness:
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness)? It is
not an ontologically salient question for a group focused on
technology. So it is relevant to the philosophical issues of
intelligence, but once you get it you have to move on. It is not a
fruitful discussion unless you can derive something interesting from
it. There is no test for qualia because there is no explanation for
it. A profound mystery cannot be reduced to a contrived technological
test or else just be dismissed. That kind of thinking is not good
science and it is not good philosophy.
So John's attempt to create a definition of compression of something
complicated so that it can be communicated might be the start of the
development of something related to contemporary AI but the attempt to
claim that it defines qualia is so naïve that it is not really
relevant to the subject of AGI.
When you have a profound mystery you have to create ways to examine
it. This is related to AGI. How do you fit it in to other knowledge.
What are the observations that you have to work with. What are the
theories that you have that you can use to work with it. Can you
measure it. Are there indirect ways to measure it. During these
initial stages you have to expect that many of your initial ideas are
going to be wrong or poorly constructed. The major motivation then
should not to be to salvage some initial primitive theories but to
reshape them completely. To test a hypothesis about a radical theory
of a profound mystery you have to first create theories of how you
might conduct your experiment. If your initial theories lead you to
enact major redefinitions so that you change the subject of the
theory, then that is a good sign that you are not ready to test the
theory.
Jim
On Sat, Sep 22, 2018 at 8:11 AM John Rose  wrote:
> > -Original Message-
> > From: Nanograte Knowledge Technologies via AGI 
> >
> > That's according to John's definition thereof. The rest of us do not 
> > necessarily
> > agree with such a limited view. At this stage, it cannot be absolutely 
> > stated
> > what qualia is. For example, mine is a lot more fuzzy and abstract in terms 
> > of
> > autonomous, identifier signalling. And that is but one view of many 
> > regarding
> > a feature of biology, which I contend could ultimately be transposed into a
> > synthetically-framed platform as its own, unique version.
> >
>
> "autonomous, identifier signaling"
>
> We are on a similar wavelength :) Compression is a big word. I've not talked 
> about consciousness topology and kernels yet...
>
>
> > One needs to define a term first, before trying to apply
> > it to the collective consciousness of AGI.
> >
> 
> I disagree. Many AGI researchers have two overwhelming biases:
> 
> One person is a general intelligence.
> One person is a general consciousness.
> 
> Both I believe are false.
> 
> Seeing the forest when you are a tree requires an outside view.
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Maebd1c44e6464b2086fa7693
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-21 Thread Nanograte Knowledge Technologies via AGI
Matt

That's according to John's definition thereof. The rest of us do not 
necessarily agree with such a limited view. At this stage, it cannot be 
absolutely stated what qualia is. For example, mine is a lot more fuzzy and 
abstract in terms of autonomous, identifier signalling. And that is but one 
view of many regarding a feature of biology, which I contend could ultimately 
be transposed into a synthetically-framed platform as its own, unique version.

As argued by me before; unless a thermostat has a provable version of 
consciousness, it would seem highly unlikely to possess the property of 
qualified experience.

I think we're still chewing at the bit here. This is something your research 
could've resolved rather easily. One needs to define a term first, before 
trying to apply it to the collective consciousness of AGI. As such, except for 
a battle to be right, I think this particular discussion mostly irrelevant to 
furthering the understanding of the morphogenetic property within AGI, which is 
directly implied via the topic heading.
Qualia: I spend a lot of my time trying to provide sensible input into this 
forum, to share and to learn, but the lack of general acknowledgement to 
my person and lack of sensible responses to my input is causing me to 
reconsider my active participation. At which point does it trigger a signal for 
me to stop wasting invaluable, professional time and rather examine alternative 
options towards constructive refinement of my knowledge base? I'm now setting 
the threshold to zero.

Rob



From: Matt Mahoney via AGI 
Sent: Saturday, 22 September 2018 2:28 AM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

John answered the question. Qualia = sensory input compressed for 
communication. A thermostat has qualia because it compresses its input to one 
bit (too hot/too cold) and communicates it to the heater.

On Fri, Sep 21, 2018, 2:00 PM Jim Bromer via AGI <agi@agi.topicbox.com> wrote:
> From: Matt Mahoney via AGI <agi@agi.topicbox.com>
> >
> > What is qualia? How do I know if monkeys,
> > fish, insects, human embryos, robots, or thermostats have qualia and how
> > would they behave differently if they did or did not. What is the test?

That is an ontologically flawed question.

Jim Bromer



On Fri, Sep 21, 2018 at 6:34 AM John Rose <johnr...@polyplexic.com> wrote:
> > -Original Message-
> > From: Matt Mahoney via AGI <agi@agi.topicbox.com>
> >
> > You didn't answer my question. What is qualia? How do I know if monkeys,
> > fish, insects, human embryos, robots, or thermostats have qualia and how
> > would they behave differently if they did or did not. What is the test?
> >
>
> Qualia = Compressed impressed samples symbolized for communication. How do 
> you even know that a fish is a fish? How do you prove that it exists? By 
> using qualia, and something with qualia originally symbolized it for 
> transmission into the human general consciousness.
>
> Do fish have qualia? Ans.: Do they communicate? Does a fish know that another 
> fish is a fish? I think so. They use another type of signaling and alphabet 
> than humans but their multi-agent signaling is coherent in relation to the 
> species group. If they did not have qualia the fish group would show 
> incoherence.
>
> You're more asking if you take a fish and isolate it how do you know if it 
> has qualia.
>
> I would ask you, if you are given a single lossily compressed sample of 
> sensory input how do you know that the original uncompressed sample exists (or 
> existed)? How do you prove it? Maybe there is no original. Maybe it doesn't 
> exist therefore qualia would not exist.
>
> John
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M37c4f5fd843096521f82da97
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-21 Thread Matt Mahoney via AGI
John answered the question. Qualia = sensory input compressed for
communication. A thermostat has qualia because it compresses its input to
one bit (too hot/too cold) and communicates it to the heater.
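A minimal sketch of that one-bit view, assuming Python; the setpoint and
the names here are invented for illustration, not taken from the thread:

    def thermostat_qualia(temperature_c, setpoint_c=20.0):
        # Lossy compression of a continuous sensory input to one bit:
        # 1 = too cold, 0 = too hot.
        return 1 if temperature_c < setpoint_c else 0

    def heater(bit):
        # The receiver acts on the compressed symbol, never the raw signal.
        return "heat on" if bit == 1 else "heat off"

    for t in (14.5, 19.9, 20.0, 27.3):
        print(t, "->", heater(thermostat_qualia(t)))

The raw temperature never crosses the wire; only the one-bit symbol does.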

On Fri, Sep 21, 2018, 2:00 PM Jim Bromer via AGI 
wrote:

> > From: Matt Mahoney via AGI 
> > >
> > > What is qualia? How do I know if monkeys,
> > > fish, insects, human embryos, robots, or thermostats have qualia and
> how
> > > would they behave differently if they did or did not. What is the test?
>
> That is an ontologically flawed question.
>
> Jim Bromer
>
>
>
> On Fri, Sep 21, 2018 at 6:34 AM John Rose  wrote:
> > > -Original Message-
> > > From: Matt Mahoney via AGI 
> > >
> > > You didn't answer my question. What is qualia? How do I know if
> monkeys,
> > > fish, insects, human embryos, robots, or thermostats have qualia and
> how
> > > would they behave differently if they did or did not. What is the test?
> > >
> >
> > Qualia = Compressed impressed samples symbolized for communication. How
> do you even know that a fish is a fish? How do you prove that it exists? By
> using qualia, and something with qualia originally symbolized it for
> transmission into the human general consciousness.
> >
> > Do fish have qualia? Ans.: Do they communicate? Does a fish know that
> another fish is a fish? I think so. They use another type of signaling and
> alphabet than humans but their multi-agent signaling is coherent in
> relation to the species group. If they did not have qualia the fish group
> would show incoherence.
> >
> > You're more asking if you take a fish and isolate it how do you know if
> it has qualia.
> >
> > I would ask you, if you are given a single lossily compressed sample of
> sensory input how do you know that the original uncompressed sample exists (or
> existed)? How do you prove it? Maybe there is no original. Maybe it doesn't
> exist therefore qualia would not exist.
> >
> > John
> >

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M36475a793ec2ac21cae9cadd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-19 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> What do you think qualia is? How would you know if something was
> experiencing it?
> 

You could look at qualia from a multi-systems signaling and a compressionist 
standpoint. They're compressed, impressed samples of the environment and other 
agents, somewhat uniquely compressed by each agent due to genetic diversity and 
experience, so the qualia have similarities and differences across agents. And 
the genetic tree is exhaustively searching. Similarly conscious agents would 
infer similar qualia experience in other agents, but not exactly the same even 
if genetically identical, due to differing knowledge and experience. Also, the 
genetic tree is modelling the environment, but this type of model is an 
approximation, and this contributes to the need for compressed sampling from 
agent variety.
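A hedged sketch of that picture, assuming Python, with every parameter
invented: two agents lossily compress the same signals through slightly
different quantizers (standing in for genetic diversity), and the fraction
of matching symbols stands in for coherence across agents:

    import random

    def make_quantizer(offset, step=1.0):
        # Each agent's lossy "qualia" map from raw signal to symbol.
        return lambda x: int((x + offset) // step)

    agent_a = make_quantizer(offset=0.0)
    agent_b = make_quantizer(offset=0.15)  # similar, not identical, compression

    random.seed(0)
    signals = [random.uniform(0, 10) for _ in range(1000)]
    agreement = sum(agent_a(s) == agent_b(s) for s in signals) / len(signals)
    print("symbol agreement:", agreement)  # high, but below 1.0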

So one could suggest a consciousness topology influenced by agent environmental 
complexity and communication complexity. And the topology must have a coherent 
and symbiotic structure that contributes to agent efficiency... meaning it 
affects the species' intelligence.

An agent not experiencing similar qualia, though, would exhibit some level of 
decoherence relative to similar agents until their consciousness models are 
effectively equal. How do you test if a bot is a bot? You test its reaction 
and whether the reaction is expected. The bot tries to predict what the 
reaction should be but cannot predict all expected reactions. The more perfect 
the model, the more difficult to detect. For example, CAPTCHA: it is not 
working well now since the bots are better, so the industry is moving to 
biometric visual tests. What comes after that? The Turing test becomes a 
qualia test. But it's all related to communication protocol due to 
separateness: since full qualia cannot be transmitted, they are further 
lossily compressed and symbolized for transmission, an imperfect process. But 
agents need to communicate experience, so imperfect communication is another 
reason for consciousness. We reference symbols of qualia in other people's, or 
multi-agent, consciousness... or the general consciousness.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M4095ccfa5bca7ac872f13500
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
On Thu, Sep 13, 2018, 12:12 PM John Rose  wrote:

> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > We could say that everything is conscious. That has the same meaning as
> > nothing is conscious. But all we are doing is avoiding defining
> something that is
> > really hard to define. Likewise with free will.
>
>
> I disagree. Some things are more conscious. A thermostat might be
> negligibly conscious unless there are thresholds.
>

When we say that X is more conscious than Y we really mean that X is more
like a human than Y.

The problem is still there: how to distinguish between a p-zombie and a
> conscious being.
>

The definition of a p-zombie makes this impossible. This should tell you
something.

Qualia is what perception feels like. Your belief in qualia (correcting my
previous email) is motivated by mostly positive reinforcement of your
perceptions.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M21610f6a969341d82c6edf49
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> We could say that everything is conscious. That has the same meaning as
> nothing is conscious. But all we are doing is avoiding defining something 
> that is
> really hard to define. Likewise with free will.


I disagree. Some things are more conscious. A thermostat might be negligibly 
conscious unless there are thresholds.


> We will know we have properly modeled human minds in AGI if it claims to be
> conscious and have free will but is unable to tell you what that means. You 
> can
> train it as follows:
> 
> Positive reinforcement of perception trains belief in quality.
> Positive reinforcement of episodic memory recall trains belief in
> consciousness.
> Positive reinforcement of actions trains belief in free will.


I agree. This will ultimately make a p-zombie which is fine for many situations.

The problem is still there: how to distinguish between a p-zombie and a conscious 
being. 

Solution: Protocolize qualia. A reason for Universal Communication Protocol 
(UCP) is that it scales up.

Then you might say that p-zombies can use machine learning to mimic 
protocolized qualia to deceive. And they can from past communications.

But what they cannot do is generally predict qualia. And you should agree with 
that ala Legg's proof.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mc02d54a4317de005468e466e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
We could say that everything is conscious. That has the same meaning as
nothing is conscious. But all we are doing is avoiding defining something
that is really hard to define. Likewise with free will.

We will know we have properly modeled human minds in AGI if it claims to be
conscious and have free will but is unable to tell you what that means. You
can train it as follows:

Positive reinforcement of perception trains belief in quality.
Positive reinforcement of episodic memory recall trains belief in
consciousness.
Positive reinforcement of actions trains belief in free will.

These are the things that make life better than death, which is good for
reproductive fitness.

On Wed, Sep 12, 2018, 9:21 AM John Rose  wrote:

> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > I don't believe that my thermostat is conscious. Or let me taboo words
> like
> > "believe" and 'conscious". I assign a low probability to the possibility
> that my
> > thermostat has a homunculus or an immortal soul or a little person
> inside it
> > that feels hot or cold. I assign a low probability that human brains
> have these
> > either. When we look inside, all we see are neurons.
>
> The thermostat in a tiny binary way I would say is conscious. And I
> speculate it has a tiny bit of free will.
>
> It's hard to imagine but there are situations where the thermostat would
> choose to save itself verses getting destroyed. How? Causal feedback into
> its own negentropic complexity. It has a slight preference to exist. This
> probably could be calculated...
>
> Note: I'm more thinking about thermostats that control heat and cold,
> furnace and AC. Not sure about AC-only thermostats in warm areas 
>
> >
> > Your argument that I am conscious is to poke me in the eye and ask
> whether I
> > felt pain or just neural signals. My reaction to pain must either be
> real or it
> > must be involuntary and I lack the free will to ignore it. Well guess
> what. Free
> > will is an illusion too. If you don't believe me, then define it.
> Something you
> > can apply as a test to humans, thermostats, dogs, AI, etc. I'll wait...
> >
> 
> Part of free will is choosing to be responsible for your actions. Meaning?
> Our actions are a discourse with the environment and other agents (aka.
> people, animals, etc..) In conscious existence there are choices since we
> believe other agents might be similarly conscious like ourselves even
> though they could be zombies we err on the positive side. For example my
> neighbor might actually feel pain so I don't maim him and take all his
> consciousness enhancing feel-good things including food, money, women,
> drugs 
> 
> I don't know if this answers your questions but perhaps is in the
> direction of...
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M2bfd453d99bc0c114e424986
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-12 Thread Nanograte Knowledge Technologies via AGI
Matt, my intent is not to interfere in your debate, but we are all reading it. 
As such, I'd like to comment:

I don't think we should be spending our time engaging in elementary arguments. 
For example, I think the notion probably invalid that a thermostat (as we know 
it today) may be conscious. I'm proposing we discipline ourselves to move 
forward in a constructivist approach.

For example: why can't this forum simply accept that in a neural system there 
exists tacit and explicit knowledge, where tacit knowledge may or may not be 
available for recall? Bringing us to the point of instinct versus learning, 
where Karl Mannheim (1928 <== for real) contended how learning (the natural 
quest for applied knowledge and social competition) is a function of human 
instinct and part of the innate system of survival.

For the most part, consciousness-in-general is an outcome of a neural 
structure, and by all accounts specifically located within the overall brain 
structure to connect to all centres of human brain functioning. Until it is 
explicitly active via external human functioning (reasoning and logic in 
action), it resembles as an (internal) explicit system of potential 
intelligence.

As a systems person, I understand consciousness to primarily exist in 3 
different states, namely: Un-consciousness (no testable awareness), 
Sub-consciousness (testable, tacit awareness), and Consciousness (testable 
tacit and explicit awareness). It seems logical that a hyper-consciousness 
state should exist as well.

For the sake of this debate, I would like to equate the unconscious and 
sub-consciousness states to including the involuntary human systems, such as 
the auto-immune system, breathing, organ functionality, and overall sensorial 
system. The consciousness state; as being evidencially in action for everyday, 
normal functionality within society.

As for the hyper-consciousness state, which may be induced via the 
consciousness state - as a super-consciousness state - it seemingly raises the 
brain to a level of hierarchical criticality and priority where it may assume 
dominant control over the complete human system.

Having said all that, the debate is apparently being broadened to consider the 
potential for an automatic connectedness to the universe around the human  
brain. Science now accepts that such connectedness exists, as if the universe 
we are referring to is naturally part of us and connected to our DNA (via 
cells) and innate to our brains (and I'm not including all potential universes 
here, but one specifically only).

To further our understanding, we may elect to refer to such a connection (or 
system) as a universal communications protocol (for our collective universe as 
we know it only). I contend it would be of interest to understand how logically 
the consciousness and hyper-consciousness, as a collective virtual system of 
superposition consciousness, may operate.

This is a point of significant interest. With regards systems design, it would 
logically place such a communication subsystem for the homo sapiens (including 
all protocols) at the apex of the overall, systems hierarchy.

Again, this has a certain ring of truth to it. However, could we design a 
scientific test for this hypothesis?

However, we should be able to translate reasoning from the abstract to the more 
literal. Meaning, we should be able to position this understanding within the 
AGI-architectural blueprint. That is the translator function I referred to in a 
prior message.

Do we have enough knowledge to do so on this forum?

Rob

From: Matt Mahoney via AGI 
Sent: Tuesday, 11 September 2018 11:05 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

On Mon, Sep 10, 2018 at 3:45 PM  wrote:
> You believe! Showing signs of communication protocol with future AGI :) an 
> aspect of  CONSCIOUSNESS?

My thermostat believes the house is too hot. It wants to keep the
house cooler, but it feels warm and decides to turn on the air
conditioner.

I don't believe that my thermostat is conscious. Or let me taboo words
like "believe" and 'conscious". I assign a low probability to the
possibility that my thermostat has a homunculus or an immortal soul or
a little person inside it that feels hot or cold. I assign a low
probability that human brains have these either. When we look inside,
all we see are neurons.

Your argument that I am conscious is to poke me in the eye and ask
whether I felt pain or just neural signals. My reaction to pain must
either be real or it must be involuntary and I lack the free will to
ignore it. Well guess what. Free will is an illusion too. If you don't
believe me, then define it. Something you can apply as a test to
humans, thermostats, dogs, AI, etc. I'll wait...

Or maybe you believe that AGI is impossible. Maybe you believe that
the brain processes inputs and produces outputs that no computer ever 
could. I don't know. You tell me.

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-11 Thread Matt Mahoney via AGI
On Mon, Sep 10, 2018 at 3:45 PM  wrote:
> You believe! Showing signs of communication protocol with future AGI :) an 
> aspect of  CONSCIOUSNESS?

My thermostat believes the house is too hot. It wants to keep the
house cooler, but it feels warm and decides to turn on the air
conditioner.

I don't believe that my thermostat is conscious. Or let me taboo words
like "believe" and 'conscious". I assign a low probability to the
possibility that my thermostat has a homunculus or an immortal soul or
a little person inside it that feels hot or cold. I assign a low
probability that human brains have these either. When we look inside,
all we see are neurons.

Your argument that I am conscious is to poke me in the eye and ask
whether I felt pain or just neural signals. My reaction to pain must
either be real or it must be involuntary and I lack the free will to
ignore it. Well guess what. Free will is an illusion too. If you don't
believe me, then define it. Something you can apply as a test to
humans, thermostats, dogs, AI, etc. I'll wait...

Or maybe you believe that AGI is impossible. Maybe you believe that
the brain processes inputs and produces outputs that no computer ever
could. I don't know. You tell me.

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mce1b09dd088ece1937814ec2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread John Rose
> -Original Message-
> From: Russ Hurlbut via AGI 
> 
> 1. Where do you lean regarding the measure of intelligence? - more towards
> that of Hutter (the ability to predict the future) or towards 
> Wissner-Gross/Freer
> (causal entropy - sort of a proxy for future opportunities; ref
> https://www.alexwg.org/publications/PhysRevLett_110-168702.pdf) 

Russ,

I see intelligence, in one way, as efficiency, with increasing intelligence as 
efficiency increases. Measuring would be comparing efficiencies. Predicting 
futures is a form of attaining efficiencies, but I usually lean towards the 
thermodynamical aspects when theorizing, though that is somewhat virtualized in 
software to the information theory analogues. 

> 2. Do you
> agree with Tegmark's position regarding consciousness? Namely,
> "Consciousness might feel so non-physical because it is doubly substrate
> independent:
> * Any chunk of matter can be the substrate for memory as long as it has many
> different stable states;
> * Any matter can be computronium, the substrate for computation, as long as
> it contains certain universal building blocks that can be combined to
> implement any function. NAND gates and neurons are two important examples
> of such universal "computational atoms.".
> 

Definitely agree with the digital physics aspects. IMO all matter is memory and 
computation. Everything is effectively storing and computing. Also I think 
everything can be interpreted as language. And when you think about it, it is. 
Example: take an individual molecule and calculate its alphabet based on 
atomic positions. The molecule is effectively talking with positional subsets 
or words. It can also speak a continuous language versus individual 
probabilistic states based on heat or whatever. And some matter would be more 
intelligent being more computationally flexible.
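A speculative sketch of that molecule example, assuming Python; the toy
geometry and bin width are invented, and quantized pairwise distances stand
in for the "letters":

    from itertools import combinations
    import math

    # Toy water-like geometry, coordinates in angstroms (illustrative only).
    atoms = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]

    def alphabet(positions, bin_width=0.1):
        # Map each pairwise inter-atomic distance to a discrete symbol;
        # the set of symbols is the molecule's "alphabet".
        return {int(math.dist(a, b) / bin_width)
                for a, b in combinations(positions, 2)}

    print(alphabet(atoms))  # two "letters" for this toy geometry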


> If consciousness is the way information feels when being processed in certain
> complex ways, then it's merely the structure of the information processing 
> that
> matters, not the structure of the matter doing the information processing. A
> wave can travel across the lake, even though none of its water molecules do.
> It's not the particles but the pattern that really matters.
> (A Tegmark cliff notes version of can be found here:
> https://quevidaesta2010.blogspot.com/2017/10/life-30-max-tegmark.html)
> 

Now you're making me have to think. It's both, right? The wave going across a 
different lake, say a lake of liquid methane, will have a different waveform. Not 
sure how you can separate the structural complexity of the processing from the 
processed since information is embedded in matter. Language, math, symbols must 
be represented physically (for example on ink or in the brain). In an 
electronic computer though it is very separate, the electrons and holes on 
silicon highways are strongly decoupled from the higher level informational 
representation they are shuttling... hmmm!

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M936f76447ec1d2ade78e9d8f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread johnrose
> -Original Message-
> From: Matt Mahoney via AGI 
>...

Yes, I'm familiar with these algorithmic information theory *specifics*. Very 
applicable when implemented in isolated systems...

> No, it (and Legg's generalizations) implies that a lot of software and 
> hardware
> is required and you can forget about shortcuts like universal learners sucking
> data off the internet. You can also forget about self improving software
> (violates information theory), quantum computing (neural computation is not
> unitary), or consciousness (an illusion that evolved so you would fear death).

Whoa, saying a lot there? Throwing away a lot of "engineering options" with 
those statements. But I think your view of consciousness, even if just illusion 
to an agent, is still communication protocol! It still fits!

> How much software and hardware? You were born with half of what you
> know as an adult, about 10^9 bits each. That's roughly the information

OK, Landauer's study, while a good reference point, is in serious need of new 
data.


> The hard coded (nature) part of your AGI is about 300M lines of code, doable
> for a big company for $30 billion but probably not by you working alone. And
> then you still need a 10 petaflop computer to run it on, or several billion 
> times
> that to automate all human labor globally like you promised your simple
> universal learner would do by next year.
>
> I believe AGI will happen because it's worth $1 quadrillion to automate labor
> and the technology trend is clear. We have better way to write code than
> evolution and we can develop more energy efficient computers by moving
> atoms instead of electrons. It's not magic. It's engineering.
> From: Matt Mahoney
> I believe AGI will happen

You believe! Showing signs of communication protocol with future AGI :) an 
aspect of  CONSCIOUSNESS?

Nowadays that $1 quadrillion might be in cryptocurrency units, and the 10 
petaflop computer a blockchain-like P2P system. And if a megacorp successfully 
builds AGI, the peers (agents) must use a signaling protocol, otherwise they don't 
communicate. So, can the peers be considered conscious? Conscious as in those 
behaviors common across many definitions of consciousness? Not looking at the 
magical part just the engineering part.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mcef74a38e1012d36f1b77fcb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread Russ Hurlbut via AGI

John -

Thanks for a refreshingly new discussion for this forum. Just as you 
describe, it is quite interesting to see how seemingly disparate tracks 
can be combined and guided onto the same course. Accordingly, your 
presentation has brought to mind similar notions that appear to fit 
somewhere in your efforts. So, here are two questions for you:


1. Where do you lean regarding the measure of intelligence? - more 
towards that of Hutter (the ability to predict the future) or towards 
Wissner-Gross/Freer (causal entropy - sort of a proxy for future 
opportunities; ref 
https://www.alexwg.org/publications/PhysRevLett_110-168702.pdf)


2. Do you agree with Tegmark's position regarding consciousness? Namely, 
"Consciousness might feel so non-physical because it is doubly substrate 
independent:
* Any chunk of matter can be the substrate for memory as long as it has 
many different stable states;
* Any matter can be computronium, the substrate for computation, as long 
as it contains certain universal building blocks that can be combined to 
implement any function. NAND gates and neurons are two important 
examples of such universal “computational atoms.”.


If consciousness is the way information feels when being processed in 
certain complex ways, then it's merely the structure of the information 
processing that matters, not the structure of the matter doing the 
information processing. A wave can travel across the lake, even though 
none of its water molecules do. It's not the particles but the pattern 
that really matters.
(A Tegmark cliff notes version of can be found here: 
https://quevidaesta2010.blogspot.com/2017/10/life-30-max-tegmark.html)



On 09/09/2018 09:07 PM, johnr...@polyplexic.com wrote:
Basically, if you look at all of life (Earth only for this example) 
over the past 4.5 billion years, including all the consciousness and 
all that “presumed” entanglement, and say that's the first general 
intelligence (GI), then the algebraic structural dynamics on the 
computational edge... is computing consciousness and is correlated 
directly with general intelligence. They are two versions of the same thing.


So to ask why basic AI is only computational consciousness, not really 
consciousness computation, is left up to the reader as an exercise :)


To clarify, my poor grammatical skills –
AI = computational consciousness = consciousness performing computation
GI = consciousness computation = consciousness being created by computation

The original key idea here though is consciousness as Universal 
Communications Protocol. Took me years to tie those two together. 
That's a very practical idea, the stuff above I'm not sure of just 
toying with...


John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M7881cfcebbafeb5d0ec42239
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread Matt Mahoney via AGI
On Mon, Sep 10, 2018 at 8:10 AM  wrote:
> Why is there no single general compression algorithm? Same reason as general 
> intelligence, thus, multi-agent, thus inter agent communication, thus 
> protocol, and thus consciousness.

Legg proved that there are no simple, general theories of prediction,
and therefore no simple but powerful learners (or compression
algorithms). Suppose you have a simple algorithm that can predict any
computable infinite sequence of symbols after only a finite number of
mistakes. Then I can create a simple sequence that your program can't
learn. My program runs your program and outputs a different symbol at
each step. You can read his paper here:
https://arxiv.org/abs/cs/0606070
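A toy rendering of that diagonal argument, assuming Python and treating a
predictor as a plain callable from history to next bit (a simplification of
the paper's formal setting):

    def adversarial_sequence(predictor, length):
        # At each step, run the would-be universal predictor on the
        # history so far and emit the opposite bit, so it is wrong at
        # every single step.
        history = []
        for _ in range(length):
            guess = predictor(history)
            history.append(1 - guess)
        return history

    # Any concrete predictor is defeated, e.g. "predict the majority bit":
    majority = lambda h: 1 if sum(h) > len(h) / 2 else 0
    print(adversarial_sequence(majority, 10))  # mispredicted on all 10 steps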

This has been the biggest pitfall of AGI projects. You make fast
progress initially on the easy problems, thinking the solution is in
sight, and then get stuck on the hard ones.

> Doesn't Gödel Incompleteness imply "magic" is needed?

No, it (and Legg's generalizations) implies that a lot of software and
hardware is required and you can forget about shortcuts like universal
learners sucking data off the internet. You can also forget about self
improving software (violates information theory), quantum computing
(neural computation is not unitary), or consciousness (an illusion
that evolved so you would fear death).

How much software and hardware? You were born with half of what you
know as an adult, about 10^9 bits each. That's roughly the information
content of your DNA, and coincidentally about the same as your long-term
memory capacity according to Landauer (see
https://www.cs.colorado.edu/~mozer/Teaching/syllabi/7782/readings/Landauer1986.pdf).
All this debate about nurture vs. nature is because, for most traits,
it's both.
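
A back-of-envelope check on the 10^9 bit figure (a sketch only; the 2
bits per base and the compression factor are assumptions, not numbers
from either paper):

# Human genome: ~3.2e9 base pairs at 2 bits/base before compression.
base_pairs = 3.2e9
raw_bits = 2 * base_pairs        # ~6.4e9 bits uncompressed
compressed = raw_bits / 6        # assuming ~6x redundancy -> ~1e9 bits
print(f"{raw_bits:.1e} raw bits, ~{compressed:.0e} compressed")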

The hard-coded (nature) part of your AGI is about 300M lines of code,
doable for a big company for $30 billion but probably not by you
working alone. And then you still need a 10 petaflop computer to run
it on, or several billion times that to automate all human labor
globally like you promised your simple universal learner would do by
next year.
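
The implicit arithmetic behind the $30 billion figure (assuming a
fully-loaded cost of about $100 per line, a common rule of thumb rather
than a number from this thread):

# 300M lines of hand-written code at an assumed $100 per line.
lines = 300e6
cost_per_line = 100                       # dollars, assumed
print(f"${lines * cost_per_line:.0e}")    # $3e+10, i.e. $30 billion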

Or maybe you could automate the software development. It's happened
once, right? All it took was 10^48 DNA base copy operations on 10^37
bases over 3.5 billion years on planet-sized hardware that uses one
billionth as much energy per operation as transistors.
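
A quick consistency check on those numbers (back-of-envelope only):

# 10^48 copy operations spread over 10^37 bases and 3.5e9 years.
copy_ops, bases, years = 1e48, 1e37, 3.5e9
per_base = copy_ops / bases      # ~1e11 replications per base
per_year = per_base / years      # ~30 copies per base per year
print(f"{per_base:.0e} copies/base, ~{per_year:.0f} per year")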

I believe AGI will happen because it's worth $1 quadrillion to
automate labor and the technology trend is clear. We have better ways
to write code than evolution, and we can develop more energy-efficient
computers by moving atoms instead of electrons. It's not magic. It's
engineering.

--
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M5cc8f151a753aed0c7debc96
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread Nanograte Knowledge Technologies via AGI
John

At a quantum level of electromagnetic force activity (thinking of the graviton 
in action) - all 4 forces obviously entangled - there seems to be no plausible 
reason why the human brain and the earth could not resonate collectively as a 
communication beacon with the universe. Refer to Haramein's scientific proof of 
unified field theory. Keeping this critical, so as not to confuse it with a 
theory of everything.

Further, with regard to collective consciousness and quantum communication, 
there's a very interesting global experiment being conducted.

Rob

http://noosphere.princeton.edu/






From: johnr...@polyplexic.com 
Sent: Monday, 10 September 2018 2:44 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

Nanograte,

> In particular, the notion of a universal communication protocol. To me it 
> seems to have a definite ring of truth to it.

It does, doesn't it?!

For years I've worked with signaling and protocols, lending some time to 
imagining a universal protocol. And for years I've thought about and researched 
consciousness. Totally independent of one another. Then, very recently, this 
line in my mind just appeared, joining one to the other, and it was ...weird. 
But it all makes sense! Consciousness is a communication protocol, but is it a 
universal protocol? Possibly, to be explored... I'm sure others have seen the 
same thing, especially in biology/biomimicry.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M049337d115f17e4ff755ef0b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread johnrose
Nanograte,

> In particular, the notion of a universal communication protocol. To me it 
> seems to have a definite ring of truth to it.

It does, doesn't it?!

For years I've worked with signaling and protocols, lending some time to 
imagining a universal protocol. And for years I've thought about and researched 
consciousness. Totally independent of one another. Then, very recently, this 
line in my mind just appeared, joining one to the other, and it was ...weird. 
But it all makes sense! Consciousness is a communication protocol, but is it a 
universal protocol? Possibly, to be explored... I'm sure others have seen the 
same thing, especially in biology/biomimicry.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M5b9aad878a55914b54da8358
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread johnrose
Matt,

Zoom out. Think multi-agent, not single-agent. Multi-agent internally and 
externally. Evaluate this proposition not from a first-person narrative and it 
begins to make sense.

Why is there no single general compression algorithm? Same reason as for 
general intelligence: thus multi-agent, thus inter-agent communication, thus 
protocol, and thus consciousness.
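
The compression half of that question has a standard counting-argument
answer, sketched here as a toy illustration (the choice of n is
arbitrary):

# Pigeonhole: there are 2^n strings of length n but only 2^n - 1
# strictly shorter ones, so no lossless compressor shrinks them all.
n = 8
inputs = 2 ** n                          # 256 possible 8-bit strings
shorter = sum(2 ** k for k in range(n))  # 255 strings shorter than n bits
print(inputs, shorter)                   # 256 > 255: some input must expand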

> But magic doesn't solve engineering problems.
Ehm... being an engineer I, ah, disagree with this... half-jokingly :) 

More seriously though:
Doesn't Gödel Incompleteness imply "magic" is needed?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M7c2ff87f368473867c63de2a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread Nanograte Knowledge Technologies via AGI
For starters, I'd like to see a collection of reputable, academic works 
defining consciousness. Having said that, I'm enjoying the sharing of ideas and 
cooking-class banter. In particular, the notion of a universal communication 
protocol. To me it seems to have a definite ring of truth to it. Please carry 
on as you are doing now...

From: johnr...@polyplexic.com 
Sent: Monday, 10 September 2018 12:56 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

Matt:
> AGI is the very hard engineering problem of making machines do all the things 
> that people can do.

Artificial people might be a path to AGI, but not really AGI...

And I'm not the one originally saying consciousness is the magic ingredient. 
Nature is :)

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M8d426a9c8376a063954e1981
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread Mark Nuzz via AGI
I'll take jargon salad over buzzword soup any day.

On Sun, Sep 9, 2018 at 3:26 PM Matt Mahoney via AGI 
wrote:

> Recipe for jargon salad.
>
> Two cups of computer science.
> One cup mathematics.
> One cup electrical engineering.
> One cup neuroscience.
> One half cup information theory.
> Four tablespoons quantum mechanics.
> Two teaspoons computational biology.
> A dash of philosophy.
>
> Mix all ingredients in a large bowl. Arrange into incoherent but
> grammatically correct sentences. Serve to accolades of your genius.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Md95d2c0c65b54e4e177ce24e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread Matt Mahoney via AGI
AGI is the very hard engineering problem of making machines do all the
things that people can do. Consciousness is not the magic ingredient that
makes the problem easy.

On Sep 9, 2018 10:08 PM,  wrote:

Basically, if you look at all of life (Earth only, for this example) over
the past 4.5 billion years, including all the consciousness and all that
“presumed” entanglement, and say that's the first general intelligence (GI),
then the algebraic structural dynamics on the computational edge... is
computing consciousness and is correlated directly with general
intelligence. They are two versions of the same thing.

So to ask why basic AI is only computational consciousness, not really
consciousness computation, is left up to the reader as an exercise :)

To clarify, my poor grammatical skills –
AI = computational consciousness = consciousness performing computation
GI = consciousness computation = consciousness being created by computation

The original key idea here though is consciousness as Universal
Communications Protocol. It took me years to tie those two together.
That's a very practical idea; the stuff above I'm not sure of, just
toying with...

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Ma684ded4fc05b4f73f4d28bb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread johnrose
Basically, if you look at all of life (Earth only, for this example) over the 
past 4.5 billion years, including all the consciousness and all that “presumed” 
entanglement, and say that's the first general intelligence (GI), then the 
algebraic structural dynamics on the computational edge... is computing 
consciousness and is correlated directly with general intelligence. They are 
two versions of the same thing.

So to ask why basic AI is only computational consciousness, not really 
consciousness computation, is left up to the reader as an exercise :)

To clarify, my poor grammatical skills –
AI = computational consciousness = consciousness performing computation
GI = consciousness computation = consciousness being created by computation

The original key idea here though is consciousness as Universal Communications 
Protocol. It took me years to tie those two together. That's a very practical 
idea; the stuff above I'm not sure of, just toying with...

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mcf324d011886fce24bc9a48c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread Matt Mahoney via AGI
Recipe for jargon salad.

Two cups of computer science.
One cup mathematics.
One cup electrical engineering.
One cup neuroscience.
One half cup information theory.
Four tablespoons quantum mechanics.
Two teaspoons computational biology.
A dash of philosophy.

Mix all ingredients in a large bowl. Arrange into incoherent but
grammatically correct sentences. Serve to accolades of your genius.




On Sun, Sep 9, 2018, 5:11 PM Jim Bromer via AGI 
wrote:

> Consciousness computation (GI) is on the negentropic massive multi-partite
> entanglement frontier of a spontaneous morphismic awareness complexity -
> IOW on the edge of life’s consciousness based on manifestation of
> inter/intra-agent entanglement (in DNA perhaps?).
>
> Whoa!  I'm roiling dude. I mean, like wow. What?
> Jim Bromer
>
>
> On Sun, Sep 9, 2018 at 12:41 PM John Rose  wrote:
>
>> How I'm thinking lately (might be totally wrong, totally obvious, and/or
>> totally annoying to some but it’s interesting):
>> 
>> Consciousness Oriented Intelligence (COI)
>> 
>> Consciousness is Universal Communications Protocol (UCP)
>> 
>> Intelligence is consciousness manifestation
>> 
>> AI is a computational consciousness
>> 
>> GI is consciousness computation
>> 
>> GI requires non-homogeneous multi-agent structure (commonly assumed),
>> with intra- and inter-agent communication in consciousness.
>> 
>> Consciousness computation (GI) is on the negentropic massive
>> multi-partite entanglement frontier of a spontaneous morphismic awareness
>> complexity - IOW on the edge of life’s consciousness based on manifestation
>> of inter/intra-agent entanglement (in DNA perhaps?).
>> 
>> IOW the communication protocol UCP (consciousness) is simultaneously the
>> computed, the computer, and the cross-categorical interlocutor
>> (cohomological sheaver weaver?).
>> 
>> So for AGI, consciousness needs to be created artificially in software.
>> 
>> How's that done? By using mathematical shortcuts from the knowledge gained
>> from the collective human general intelligence, and replacing the universal
>> communications protocol of consciousness mathematically and computationally.
>> 
>> And there is a trend in AGI R&D that aims for this, but under other names
>> and descriptions, since the term consciousness has a lot of baggage; but the
>> concept is morphismic (and perhaps Sheldrakedly morphic).
>> 
>> My sense though says that we are going to start seeing (already maybe?)
>> evidence of massive and pervasive biological quantum entanglement, for
>> example in DNA. And the entanglement might go back eons and the whole of life's
>> collective consciousness could be based on that...
>> 
>> John
>> 
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Ma504ffde9da2de07cac596df
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread Jim Bromer via AGI
Consciousness computation (GI) is on the negentropic massive multi-partite
entanglement frontier of a spontaneous morphismic awareness complexity -
IOW on the edge of life’s consciousness based on manifestation of
inter/intra-agent entanglement (in DNA perhaps?).

Whoa!  I'm roiling dude. I mean, like wow. What?
Jim Bromer


On Sun, Sep 9, 2018 at 12:41 PM John Rose  wrote:

> How I'm thinking lately (might be totally wrong, totally obvious, and/or
> totally annoying to some but it’s interesting):
> 
> Consciousness Oriented Intelligence (COI)
> 
> Consciousness is Universal Communications Protocol (UCP)
> 
> Intelligence is consciousness manifestation
> 
> AI is a computational consciousness
> 
> GI is consciousness computation
> 
> GI requires non-homogeneous multi-agent structure (commonly assumed), with
> intra- and inter-agent communication in consciousness.
> 
> Consciousness computation (GI) is on the negentropic massive multi-partite
> entanglement frontier of a spontaneous morphismic awareness complexity -
> IOW on the edge of life’s consciousness based on manifestation of
> inter/intra-agent entanglement (in DNA perhaps?).
> 
> IOW the communication protocol UCP (consciousness) is simultaneously the
> computed, the computer, and the cross-categorical interlocutor
> (cohomological sheaver weaver?).
> 
> So for AGI, consciousness needs to be created artificially in software.
> 
> How's that done? By using mathematical shortcuts from the knowledge gained
> from the collective human general intelligence, and replacing the universal
> communications protocol of consciousness mathematically and computationally.
> 
> And there is a trend in AGI R&D that aims for this, but under other names and
> descriptions, since the term consciousness has a lot of baggage; but the
> concept is morphismic (and perhaps Sheldrakedly morphic).
> 
> My sense though says that we are going to start seeing (already maybe?)
> evidence of massive and pervasive biological quantum entanglement, for example
> in DNA. And the entanglement might go back eons and the whole of life's
> collective consciousness could be based on that...
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mbc70dfd23679c6ff7b93ded3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread John Rose
How I'm thinking lately (might be totally wrong, totally obvious, and/or 
totally annoying to some but it’s interesting):

Consciousness Oriented Intelligence (COI)

Consciousness is Universal Communications Protocol (UCP)

Intelligence is consciousness manifestation 

AI is a computational consciousness

GI is consciousness computation

GI requires non-homogeneous multi-agent structure (commonly assumed), with 
intra- and inter-agent communication in consciousness.

Consciousness computation (GI) is on the negentropic massive multi-partite 
entanglement frontier of a spontaneous morphismic awareness complexity - IOW on 
the edge of life’s consciousness based on manifestation of inter/intra-agent 
entanglement (in DNA perhaps?).

IOW the communication protocol UCP (consciousness) is simultaneously the 
computed, the computer, and the cross-categorical interlocutor (cohomological 
sheaver weaver?).

So for AGI, consciousness needs to be created artificially in software.

How's that done? By using mathematical shortcuts from the knowledge gained from 
the collective human general intelligence, and replacing the universal 
communications protocol of consciousness mathematically and computationally.

And there is a trend in AGI R&D that aims for this, but under other names and 
descriptions, since the term consciousness has a lot of baggage; but the 
concept is morphismic (and perhaps Sheldrakedly morphic).

My sense though says that we are going to start seeing (already maybe?) 
evidence of massive and pervasive biological quantum entanglement, for example in 
DNA. And the entanglement might go back eons and the whole of life's collective 
consciousness could be based on that...

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M0d67035a0f5f8e8fd877bd6e
Delivery options: https://agi.topicbox.com/groups/agi/subscription