I'm beginning to think that consciousness is the pathway to intelligence. Bear with me. At first it sounds illogical. But if we entertain the notion that experience-based learning is practically impossible without consciousness, it becomes logical.
If consciousness is needed for intelligence, and intelligence for consciousness, it becomes possible to merge the two peers into a single object we may call 'consciousnessintelligence'. Such a construct may bring us closer to visualizing the emergence of qualia. For purposes of this discussion, I'll attempt a definition of qualia, taking my cue from the theory of general relativity and how it treats energy in a holistic sense.

Suppose the human brain functions as a cosmically-linked torus. It should follow that it generates electromagnetic energy and is imbued with a structure that supports polarity, and such polarity may flow over a binary fractal. In a simplistic view, let's assume this toroidal structure, in geometrically fractal interaction with its environment (in the sense of an active brain), vibrates its frequency as entangled particle-wave encapsulated information. For the brain, this interaction may manifest as electric fields in action. So we have matter and anti-matter conjoining as a hot and cold force-field, interactively polarized in such a way as to generate a singular version of that event interaction (the "way", i.e. the method, being influenced by the particular entity's subject position relative to the spacetime continuum). This version, as information, is subsequently embedded within the unique consciousnessintelligence of the host, probably synchronously.

Still, to me that is not yet qualia. It might, however, be a constructive mechanism for qualia, in the sense of meta-qualia. When the consciousnessintelligence reacts to the polarity-based stimulus, it automatically responds with an inherent objective: to reach a resting state of equilibrium.
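As a purely illustrative toy (not a claim about any real neural mechanism), the equilibrium-seeking response described above can be sketched as a perturbed scalar "polarity" state that decays back toward rest. The function name, the scalar state, and the fixed decay rule are all my own assumptions for the sketch:

```python
def relax_to_equilibrium(state, stimulus, rate=0.3, tol=1e-3, max_steps=1000):
    """Toy sketch: a scalar 'polarity' state, perturbed by a stimulus,
    decays back toward equilibrium (zero) at a fixed rate.
    Purely illustrative -- not a model of any actual brain process."""
    state += stimulus          # the polarity-based stimulus perturbs the system
    steps = 0
    while abs(state) > tol and steps < max_steps:
        state -= rate * state  # move a fraction of the way back toward rest
        steps += 1
    return state, steps

# A unit stimulus drives the state away from rest; the system then relaxes.
final_state, steps = relax_to_equilibrium(0.0, 1.0)
```

The point of the sketch is only that "seeking equilibrium" is straightforwardly computable; anything resembling the toroidal or field-based structure described above would of course require a far richer model.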
In so doing, it may activate the energy flow within the torus-like brain and, using the stimulus as a dynamic trigger, emit an information-rich pulse made observable as thought, action, feeling, hunch, gut feel, or whatever form of predisposed energy envelope. What I have tried to describe thus far is but a single step of a chain reaction, with the DNA also acting as a dominator/recorder (multiple roles), which, as it flows towards the opposite polarity, passes "through" a discrete point on a stochastic scale of degrees of consciousnessintelligence. This discrete point may be as brief as a flash of light. We may call this moment optimal consciousness, or knowing. As such, knowing is an outcome of consciousnessintelligence, and so is the function of explaining. Is explaining, knowing?

Given the meta-qualia (the structure discussed above), all aspects of brain functioning are fully enabled to make this moment happen. One might visualize this structure-in-operation as a flower that is one with itself, interconnected with its environment, and purposed to always be on standby for knowing. Qualia, then, would be that "flash of light", the moment of knowing. Esoterically, I'd say qualia is the absolute moment when an individual's consciousnessintelligence potential is realized.

Computational models already exist for most of the components and functionality I mentioned, so I think this has direct relevance for the step-by-step development of an AGI model.

Thoughts?

Rob

________________________________
From: Jim Bromer via AGI <agi@agi.topicbox.com>
Sent: Monday, 24 September 2018 8:02 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

Matt's response - like an adolescent's flip remark - is evidence of the kind of denial that I mentioned.
Jim Bromer

On Mon, Sep 24, 2018 at 10:49 AM Matt Mahoney via AGI <agi@agi.topicbox.com> wrote:
>
> I wrote a simple reinforcement learner which includes the line of code:
>
> printf("Ouch!\n");
>
> So I don't see communication of qualia as a major obstacle to AGI.
>
> Or do you mean something else by qualia?
>
> On Mon, Sep 24, 2018, 5:21 AM John Rose <johnr...@polyplexic.com> wrote:
>> > -----Original Message-----
>> > From: Matt Mahoney via AGI <agi@agi.topicbox.com>
>> >
>> > I was applying John's definition of qualia, not agreeing with it. My
>> > definition is qualia is what perception feels like. Perception and
>> > feelings are both computable. But the feelings condition you to believing
>> > there is something magical and mysterious about it.
>>
>> And what I'm saying is that the communication of qualia is important for
>> general intelligence in a system of agents. And how do agents interpret the
>> signals, process and recommunicate them?
>>
>> But without fully understanding qualia, since they're intimately intrinsic
>> to agent experience, we can still explore their properties by answering
>> questions such as: What is an expression of the information distance between
>> qualia of differing agents with the same stimuli? How do qualia map to the
>> modeled environment? How do they change over time in a system of learning
>> agents? What is the compressional loss into communication? And how do
>> multi-agent models change over time from communicated and decompressed
>> qualia?
>>
>> And what is the topology of qualia variance within an agent related to the
>> complexity classes of environmental strategy?
>>
>> And move on to questions such as: can there be enhancements to agent
>> language to accelerate learning in a simulated system? And enhancements to
>> agent structure?
>>
>> John
>
> Artificial General Intelligence List / AGI / see discussions + participants + delivery options Permalink

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mef04a2f5ec76594edfb2ce70
Delivery options: https://agi.topicbox.com/groups/agi/subscription