Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
Cool, but ... I maintain that none of this is about consciousness.  Knowledge 
representation, abstraction and compression via symbolism, communication of 
structure, common protocols and standards for describing physical phenomena ... 
these are all intelligence tasks. Specifically, the communication-related stuff 
would be part of social and linguistic intelligence. If you want some labels 
that convey the "inter-agent" aspect without confusing everyone, I think those 
would do.

The thing you call "occupying representation" ... a conscious agent can do it, 
but an unconscious agent can too.  The ability to decompress information and 
construct models from symbolic communication does not require or imply that the 
agent has its own qualia or first-person experiences.

And I do agree that, for the practical/utilitarian purpose of Getting Things 
Done, this is useful and is all you need for cooperative agents. Like I said 
when I first posted on this thread, phenomenal consciousness is neither 
necessary nor sufficient for an intelligent system.

I think your comment about "Gloobledeglock" actually illustrates my point. 
Communication breaks down here because you haven't tied Gloobledeglock to a 
causative external event. If you said something like, "I feel Gloobledeglock 
whenever I get rained on," then I could surmise (with no guarantee of 
correctness) that you feel whatever it is I feel when I get rained on. 
Observable events, in the world external to both our minds, are things we can 
hold in common and use as the basis of communication protocols. We can't hold 
qualia in common, or transfer them (even partially).
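The grounding move described here can be sketched in a few lines (class names, symbols, and the "private state" strings are invented for illustration):

```python
# Hypothetical sketch: two agents tie a public symbol to a commonly
# observable event; their private encodings never cross the channel.

class Agent:
    def __init__(self, private_state):
        self.private_state = private_state  # stands in for qualia; never sent
        self.lexicon = {}                   # public symbol -> external event

    def ground(self, symbol, event):
        # Associate a symbol with a publicly observable event.
        self.lexicon[symbol] = event

    def agrees_with(self, other, symbol):
        # Agreement is judged only against the shared external event.
        return self.lexicon.get(symbol) == other.lexicon.get(symbol)

a = Agent(private_state="inner-state-A")
b = Agent(private_state="inner-state-B")   # a different inner state
a.ground("Gloobledeglock", "getting rained on")
b.ground("Gloobledeglock", "getting rained on")

print(a.agrees_with(b, "Gloobledeglock"))   # True: the protocol holds
print(a.private_state == b.private_state)   # False: qualia stay private
```

The point of the sketch is that `agrees_with` never inspects `private_state`: the protocol works, and the qualia question never enters into it.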

> AGI researchers are so occluded by first-person.

Umm since when? I certainly don't think an AGI system has to be an isolated 
singleton that only deals with first-person information. I think the kerfuffle 
in this thread is about you appearing to claim that Universal Communication 
Protocols and the ability to "occupy representation" are something they are 
not. We're not trying to give an exaggerated importance to phenomenal 
consciousness ... quite the opposite, in fact. We're just saying that the 
systems you describe don't have it.

Signing off now. Good luck with your work.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M0b99c375007957bfa978963b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Nanograte Knowledge Technologies
John

I know you know this, so perhaps you're clickbaiting me. :-)

Digital devices do not pass the qualia test. Therefore, the example is invalid. 
However, it has relevance for this debate.

As a thought experiment, perhaps try an example of the interaction between 
yourself, your PC's processor, the resident operating system, and a peripheral 
device. It should be interesting.




From: johnr...@polyplexic.com 
Sent: Wednesday, 28 August 2019 16:06
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

On Wednesday, August 28, 2019, at 9:30 AM, Nanograte Knowledge Technologies 
wrote:
Any generalized system relying on the random, subjective input value of qualia 
would give rise to the systems constraint of ambiguity. Therefore, as a policy, 
all subjectively-derived data would introduce a semantic anomaly into any 
system - by design. This has significance for the assertion that qualia input 
would be useful for symbolic systems.


Simple example, my PC and mouse communicate. They're separated. Assume they 
have simple digital qualia. Is the mouse able to compute the k-complexity of 
the PC? No. It's estimated. Does the PC use the qualia of the mouse that are 
communicated? Click click. The mouse is compressing my finger action into 
simple digital symbols for communication. Can I compute the exact electron 
flow and mechanical action, IOW feel the mouse’s qualia? No, it's estimated but 
estimated very reliably with almost zero errors.
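John's mouse example can be sketched minimally (the threshold and the pressure readings below are invented):

```python
# Illustrative sketch: a "mouse" lossily compresses a continuous finger
# action into a one-bit click symbol; the "PC" receives only the symbol
# and can at best reconstruct a coarse model of the physical event.

def mouse_compress(finger_pressure):
    # Everything below the threshold is discarded; the exact electron
    # flow and mechanical detail never leave the device.
    return "CLICK" if finger_pressure > 0.5 else None

def pc_receive(symbol):
    # The PC's reconstruction is an estimate, not the original event.
    return "button pressed" if symbol == "CLICK" else "no press"

samples = [0.93, 0.87, 0.12]          # hypothetical pressure readings
symbols = [mouse_compress(s) for s in samples]
print([pc_receive(sym) for sym in symbols])
# -> ['button pressed', 'button pressed', 'no press']
```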

Take more complicated distributed systems with more symbol complexity and it's 
still the same principle except that more consciousness is generally required 
among the communicating agents.

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Md08315f9b5aacd84d59429e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> Great, seems like we've reached agreement on something.
> When we communicate with words like "red," we're really communicating about 
> the frequency of light. I would argue that we are not communicating our 
> qualia to each other. If we could communicate qualia, we would not have this 
> issue of being unable to know whether your green is my red. Qualia are 
> personal and incommunicable *by definition,* and it's good to have that 
> specific word and not pollute it with broader meanings.

We can't fully communicate our qualia, only a representation, and we ourselves 
lose the ability to reconstruct the original exactly. That's the inter-agent 
part of it. How do you know any qualia ever existed? Because they are 
communicated. They are fitted into words/symbols, IMO like a pointer in the 
programming sense. This is all utilitarian, not philosophical.
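The pointer analogy could be sketched like this (class names and handles invented for illustration):

```python
# Sketch: a communicated symbol is like a pointer into the sender's private
# memory. The receiver gets the pointer value, but cannot dereference it
# in its own "address space."

class Mind:
    def __init__(self):
        self._memory = {}  # private; analogous to first-person experience
        self._next = 0

    def experience(self, raw):
        # Store the raw experience privately; hand out only a handle.
        handle = f"sym-{self._next}"
        self._next += 1
        self._memory[handle] = raw
        return handle  # this is all that gets communicated

    def dereference(self, handle):
        # Only works for handles into *this* mind's memory.
        return self._memory.get(handle)

alice, bob = Mind(), Mind()
h = alice.experience("what red looks like to Alice")
print(alice.dereference(h))  # the owner recovers the experience
print(bob.dereference(h))    # None: the symbol alone carries no qualia
```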

On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> In the mouse example, I was assuming that I had fully modeled the 
> electro-mechanical phenomena in *this specific* mouse. I still don't think 
> that would give me its qualia.

There is only a best guess within the context of the observer...

On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> I would be happy to refer to a machine with an incommunicable first-person 
> subjective experience stream as "conscious." But you've admitted that you're 
> not trying to talk about incommunicable first-person subjective experiences, 
> you're trying to talk about communication. I'm not concerned with whether the 
> "consciousness" is mechanical or biological, natural or artificial; I'm 
> concerned with whether it's actually "consciousness."

A sample, lossily compressed internally, then symbolized. We basically lose the 
original. You can't transmit the whole quale; it's gone. Yes, the utilitarian 
aspect is that it is all about communication in a system of agents. Not 
everything is first-person. AGI researchers are so occluded by first-person. 
Human general intelligence is not one person but a system of people... a baby 
dies in isolation.

Another piece of this is occupying representation. A phenomenally conscious 
observer may assume the structure that is transmitted in its symbolic form and 
attempt to reconstruct the original lossy representation based on its own 
experience.

Not really aiming for human phenomenal consciousness now, but something more 
panpsychist: objects inherently contain structure that can be extracted into a 
discrete representation and fitted systematically to similar structure from 
other objects.

...

I want to tell you a secret but it's incommunicable. Guess what. It's already 
been communicated.

Can I ask you a question? Thanks, no need to answer.

I felt a unique incommunicable sensation. I call it Gloobledeglock.  Have you 
ever felt Gloobledeglocked?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mfcb6e0f90becb8dba4791d4a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
"You don’t know my qualia on red ... We may never know that your green is my 
red."
Great, seems like we've reached agreement on something.
When we communicate with words like "red," we're really communicating about the 
frequency of light. I would argue that we are not communicating our qualia to 
each other. If we could communicate qualia, we would not have this issue of 
being unable to know whether your green is my red. Qualia are personal and 
incommunicable *by definition,* and it's good to have that specific word and 
not pollute it with broader meanings.

In the mouse example, I was assuming that I had fully modeled the 
electro-mechanical phenomena in *this specific* mouse. I still don't think that 
would give me its qualia.

I would be happy to refer to a machine with an incommunicable first-person 
subjective experience stream as "conscious." But you've admitted that you're 
not trying to talk about incommunicable first-person subjective experiences, 
you're trying to talk about communication. I'm not concerned with whether the 
"consciousness" is mechanical or biological, natural or artificial; I'm 
concerned with whether it's actually "consciousness."
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M123187415d84d17b03b08bf7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> People can only communicate their conscious experiences by analogy. When you 
> say "I'm in pain," you're not actually describing your experience; you're 
> encouraging me to remember how I felt the last time *I* was in pain, and to 
> assume you feel the same way. We have no way of really knowing whether the 
> assumption is correct.
> 

That’s protocol. They sync up. We are using an established language, but it 
changes over time. The word "pain" is a transmitted compression symbol that is 
already understood not to mean exactly the same thing for everyone, but the 
majority of others besides oneself have a similar experience. Some people get 
pleasure from pain due to different wiring or neurochemistry or whatever. There 
might be a societal tendency for them not to breed.


On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> We can both name a certain frequency of light "red" and agree on which 
> objects are "red." But I can't tell you what my visual experience of red is 
> like, and you can't tell me what yours is like. Maybe my red looks like your 
> green -- the visual experience of red doesn't seem to inhere in the 
> frequency's numerical value, in fact color is nothing like number at all, so 
> nothing says my red isn't your green. "Qualia" refers to that indescribable 
> aspect of the experience. If your "qualia" can be communicated with symbols, 
> or described in terms of other things, then we're not talking about the same 
> concept -- and using the same word for it is just confusing.

Think multi-agent. Say my red is your green and your green is my red. We are 
members of a species sampling the environment. If we all saw it the same way, 
would that impact evolution? You don’t know my qualia on red. But you do 
understand me communicating the experience using words and symbols that are 
generally understood, and that is what matters from the multi-agent 
computational standpoint. We are multi-sensors emitting compressed samples via 
symbol transmission, hoping the external world understands, but the initial 
sample is lossily compressed and fitted into a symbol to traverse a distance. 
We may never know that your green is my red.
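A toy sketch of why the inversion is invisible at the protocol level (the frequencies and internal labels are made up):

```python
# Hedged sketch: two agents with *inverted* internal colour encodings still
# emit identical public symbols for the same light frequency, so the
# inversion cannot be detected on the channel.

PUBLIC_NAMES = {650: "red", 530: "green"}  # nm -> shared symbol

def emit(internal_map, frequency_nm):
    # The private encoding is produced and then discarded; only the
    # public symbol traverses the channel (the lossy step).
    _private = internal_map[frequency_nm]  # never transmitted
    return PUBLIC_NAMES[frequency_nm]

my_map   = {650: "inner-A", 530: "inner-B"}
your_map = {650: "inner-B", 530: "inner-A"}  # your green is my red

print(emit(my_map, 650) == emit(your_map, 650))  # True: symbols agree anyway
```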


On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> Going back to your computer-and-mouse example: if I admit your panpsychist 
> perspective and assume that a computer mouse has qualia, those qualia are not 
> identified with the electro-mechanical events inside the mouse.  I could have 
> full knowledge of those (fully compute or model them) without sharing the 
> mouse's experience.

You can model a mouse's electro-mechanics at a functional level, but between 
two mice there are vast actual differences in electron flow and microscopic 
mechanical differences. You are still only estimating what is actually going 
on, or the K-complexity, or the qualia. There could be self-correcting errors 
in one, but the signal clicks to external entities are the same...

Please note that terminology gets usurped by technology when implemented. 
Should we not call intelligence intelligence? Usually it is prefixed with 
"artificial," but IMO that's the wrong move; it is intelligence, or better, 
machine intelligence. Should we not call an artificial eye an eye? What's so 
special about the word consciousness that everyone gets all squirmy about it?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M640c41a41bf4e294765e68a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] I am releasing all of my AGI research soon

2019-08-28 Thread immortal . discoveries
All my work, images, notes, algorithms, a big movie, and my future research. I 
am planning to release it soon to the whole AI community. I am doing this 
because there is no actual monetary gain from owning or patenting AGI or any 
other piece of knowledge, given how fast evolution will move soon, and teamwork 
is what drives faster progress (I have seen this in OpenAI, DeepMind, GANs). 
Therefore, very soon (especially from teamwork together), AGIs will be in 
control of the previous monkey species (us), but much more profoundly than ever 
imaginable. Anyone serious in AGI knows this. There will be zero need for 
currency or fame. Everyone will be in an advanced world SOON. Or in hell. And I 
know how; I've noted down essentially everything they will be able to 
realistically do and will do. So I'm getting started and going to share with 
our hive, and hopefully team up and bring the next phase in evolution so that 
we can live happier soon. I'm open to sharing my research with a private group 
if you desire, but otherwise I am releasing it all to the public and attempting 
to bring AGI to Earth soon.



Months ago I replicated GPT-2 in raw form, with no training. I checked my 400MB 
of data - no significant copy-pastes. Here is a prompt & completion. It 
predicts the next word:

*The software was made on a*
*The software was made on a wide variety of devices, and operating apps and 
applications that users can easily read as an app for android. It is a bit of a 
difference, but i was able to get it. The developers are not going to make it 
through a web applications, and devices i have seen in the running for the 
mobile apps. Applications allows users to access applications development 
tools, and allow applications of the app store. A multimedia entertainment 
entertainment device, and allows platforms enabled access to hardware 
interfaces. Using a bit of html application app developers can enable users to 
access applications to investors, and provide a more thorough and use of 
development. The other a little entertainment media, and user development 
systems integration technology. Applications allows users to automatically 
provide access to modify, optimize capability allows users to easily enable. 
Both users and software systems, solutions allowing owners software solutions 
solutions to integrate widgets customers a day. And if you are accessing 
services product, and mobile applications remotely access to the software 
companies can easily automate application access to hardware devices hardware 
systems creators and technologies. Builders and developers are able to access 
the desktop applications, allowing users access allows users to*
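For readers who want a concrete toy of "predicting the next word": this is not the poster's method (which is unspecified here), just a plain bigram counter over an invented corpus that loosely echoes the sample above.

```python
# Minimal bigram next-word predictor over a toy corpus (invented).
from collections import Counter, defaultdict

corpus = ("the software was made on a wide variety of devices and "
          "the software was made on a mobile platform").split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word`, or None if unseen.
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("was"))  # -> made
```

Real language models replace the raw counts with learned contextual probabilities, but the prediction loop is the same shape: condition on what came before, emit the most likely continuation.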
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb2c56499bd62ee8a-M2336bd78b60be598c954e315
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
People can only communicate their conscious experiences by analogy. When you 
say "I'm in pain," you're not actually describing your experience; you're 
encouraging me to remember how I felt the last time *I* was in pain, and to 
assume you feel the same way. We have no way of really knowing whether the 
assumption is correct.

We can both name a certain frequency of light "red" and agree on which objects 
are "red." But I can't tell you what my visual experience of red is like, and 
you can't tell me what yours is like. Maybe my red looks like your green -- the 
visual experience of red doesn't seem to inhere in the frequency's numerical 
value, in fact color is nothing like number at all, so nothing says my red 
isn't your green. "Qualia" refers to that indescribable aspect of the 
experience. If your "qualia" can be communicated with symbols, or described in 
terms of other things, then we're not talking about the same concept -- and 
using the same word for it is just confusing.

Going back to your computer-and-mouse example: if I admit your panpsychist 
perspective and assume that a computer mouse has qualia, those qualia are not 
identified with the electro-mechanical events inside the mouse.  I could have 
full knowledge of those (fully compute or model them) without sharing the 
mouse's experience.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Med65915d05938166d1cc3e1f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> Are you sure you wouldn't be better served by calling your ideas some other 
> names than "consciousness" and "qualia," then?  We're all getting "hung-up 
> on" the concepts that those terms actually refer to. 

Good question.

That's what's been going on already. But in this age of intelligence it's time 
to take back what is ours and also preserve human consciousness. Also, 
human-machine communications are better served by calling it thus, IMO. And why 
let narrow-minded visionaries control the labeling? That's a control strategy. 
Shoot for the stars. Consciousness is the full package, not little bits and 
pieces to tiptoe around.

This might be premature but at some point it'll be trendy to call it as it is 
IMO.

On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> I do not see how communication protocols have anything to do with 
> consciousness as it is usually understood.

People communicate their conscious experiences no? Machines do that too :) 
Machines use communication protocols.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M80abe3880277b7daf241686e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
Are you sure you wouldn't be better served by calling your ideas some other 
names than "consciousness" and "qualia," then?  We're all getting "hung-up on" 
the concepts that those terms actually refer to.  I do not see how 
communication protocols have anything to do with consciousness as it is usually 
understood.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Me64101f0af4c6fa6d1dc5630
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 3:35 PM, Secretary of Trades wrote:
> https://philpapers.org/archive/CHATMO-32.pdf#page=50

Blah blah blah.

From the AGI perspective we are interested in the multi-agent computational 
advantages in distributed systems that consciousness (or by other names) 
facilitates. Thus I look at the communication aspects like communication 
complexity, protocol, structure, etc., which are an external view, not the 
first-person narrative of phenomenal consciousness that many people are so 
obstinately hung up on. Thus the utilitarian: Qualia = compressed impressed 
samples symbolized for communication. Though I think the first-person narrative 
is addressed by this also, it's not my goal.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mf579060433b8625fb3c512fd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Secretary of Trades



https://philpapers.org/archive/CHATMO-32.pdf#page=50


On 28.08.2019 22:19, Secretary of Trades wrote:

clrscr();


On 28.08.2019 16:03, johnr...@polyplexic.com wrote:

On Wednesday, August 28, 2019, at 8:44 AM, WriterOfMinds wrote:

That is not what qualia are.  Qualia are incommunicable and private.


As Matt would say:

printf("Ouch!\n");

John






--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M33918554ba75d9d645503d8e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Secretary of Trades

clrscr();


On 28.08.2019 16:03, johnr...@polyplexic.com wrote:

On Wednesday, August 28, 2019, at 8:44 AM, WriterOfMinds wrote:

That is not what qualia are.  Qualia are incommunicable and private.


As Matt would say:

printf("Ouch!\n");

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M930616be8b03d837e6218f2d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 9:30 AM, Nanograte Knowledge Technologies 
wrote:
> Any generalized system relying on the random, subjective input value of 
> qualia would give rise to the systems constraint of ambiguity. Therefore, as 
> a policy, all subjectively-derived data would introduce a semantic anomaly 
> into any system - by design. This has significance for the assertion that 
> qualia input would be useful for symbolic systems.

Simple example, my PC and mouse communicate. They're separated. Assume they 
have simple digital qualia. Is the mouse able to compute the k-complexity of 
the PC? No. It's estimated. Does the PC use the qualia of the mouse that are 
communicated? Click click. The mouse is compressing my finger action into 
simple digital symbols for communication. Can I compute the exact electron 
flow and mechanical action, IOW feel the mouse’s qualia? No, it's estimated but 
estimated very reliably with almost zero errors.


Take more complicated distributed systems with more symbol complexity and it's 
still the same principle except that more consciousness is generally required 
among the communicating agents.


John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M6c1a44183a5c4d56e8f57655
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Nanograte Knowledge Technologies
Does a quale have to always pass the "qualitative character of sensation" test?

 Any generalized system relying on the random, subjective input value of qualia 
would give rise to the systems constraint of ambiguity. Therefore, as a policy, 
all subjectively-derived data would introduce a semantic anomaly into any 
system - by design. This has significance for the assertion that qualia input 
would be useful for symbolic systems.

With regard to effective complexity, in the sense of generalized correctness as 
it pertains to generalized intelligence, an AGI design would have to 
empirically resolve the 'ambiguity' problem first. Else, it would result in (or 
take the form of) a consciousness-challenged dumb device, like most computers 
still are today.


From: WriterOfMinds 
Sent: Wednesday, 28 August 2019 14:44
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

That is not what qualia are.  Qualia are incommunicable and private.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mfb4e7c925c5b4a5bbb64e05e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 8:44 AM, WriterOfMinds wrote:
> That is not what qualia are.  Qualia are incommunicable and private.

As Matt would say:

printf("Ouch!\n");

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M707aad30dee51bf33418bef7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
That is not what qualia are.  Qualia are incommunicable and private.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M100a6fbb04132f410d7de3d6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Monday, August 26, 2019, at 5:25 PM, WriterOfMinds wrote:
> "What it feels like to think" or "the sum of all a being's qualia" can be 
> called phenomenal consciousness. I don't think this type of consciousness is 
> either necessary or sufficient for AGI. If you have an explicit goal of 
> creating an Artificial Phenomenal Consciousness ... well, good luck. 
> Phenomenal consciousness is inherently first-person, and measuring or 
> detecting it in anyone but yourself is seemingly impossible. Nothing about an 
> AGI's structure or behavior will tell you what its first-person experiences 
> *feel* like, or if it feels anything at all.


Qualia = compressed impressed samples symbolized for communication. From the 
perspective of other agents, attempting to Occupy Representation of another 
agent's phenomenal consciousness would be akin to computing its K-complexity: 
some being computable, some being estimable.

Why does this help AGI? This universe has inherent 
separateness/distributedness. It's the same reason why there is no single 
general compression algorithm.
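The "estimable" part has a standard concrete form: true Kolmogorov complexity is uncomputable, but any real compressor's output length is a computable upper bound on it. A sketch with `zlib` (the data and sizes below are invented):

```python
# Sketch: upper-bound K-complexity with a real compressor. A regular
# "signal" compresses far below its raw size; near-random data does not,
# which is also why no single compressor wins on all data.
import random
import zlib

def k_estimate(data: bytes) -> int:
    # Compressed length is a computable upper bound on K(data).
    return len(zlib.compress(data, 9))

structured = b"click " * 1000                               # pure repetition
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(6000))   # near-incompressible

print(k_estimate(structured))  # tiny: the regularity is captured
print(k_estimate(noisy))       # roughly the raw 6000 bytes, plus overhead
```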

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M09d11c426cbd235dd276652c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] You can help train desktop image segmentation

2019-08-28 Thread Stefan Reich via AGI
Thanks for asking! Well, I had to split up with someone... it always hurts
(he posts on this list too).

Currently I'm working on recognizing chess boards on computer screens in
order to test my CV ideas. This can also be a marketable product.

And yes, computer vision is a great symbiotic partner for symbolic AI.

Stefan

On Tue, 27 Aug 2019 at 23:01,  wrote:

> To my mind, this is the computer vision,  which is going to spark your
> symbolic relations.
>
> Segmentation, if you colour your segments of some video,  if you make
> everyone simpsons colours, it looks like "the real simpsons"   its
> potentially halarious. :)
>
> How is your ai going Stefan - are u still really confident,  if not - ill
> just tell you im still going strong,  even tho ive got a million monkeys in
> my head stopping my brain from squirting the magic thought juice,  im
> running on a tap drip but i do a little bit everyday
>
> excuse my horrible ugly "thought juice" remark.


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8f7f05f86e62415a-Mae2b082db7ffded8fe1b4996
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Tuesday, August 27, 2019, at 11:26 PM, immortal.discoveries wrote:
> Dropping the 'consciousness' word, that video I linked above is actually hit 
> on. Let me explain. In the middle of the video, he mentioned wave 
> synchronization - the brain has signals propagating around - please see this 
> video below to see what the man has meant

The metronomes have a communication channel with each other and synchronize 
(occupy) into structure. From what I am saying consciousness is UCP + OR.
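The metronome synchronization in that video is conventionally modelled with the Kuramoto coupled-oscillator equations: each phase is pulled toward the others through a shared channel (the common platform). A rough sketch; all parameters are arbitrary illustrative choices:

```python
# Kuramoto model sketch: n oscillators with slightly different natural
# frequencies, coupled all-to-all. The order parameter r rises toward 1
# as they synchronize.
import math
import random

random.seed(1)
n, coupling, dt, steps = 10, 2.0, 0.05, 2000
freqs  = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n)]  # natural rates
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]

def order_parameter(ph):
    # r in [0, 1]: 0 = fully incoherent, 1 = fully synchronized.
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

r_start = order_parameter(phases)
for _ in range(steps):
    # Euler step: each phase advances at its own rate plus a coupling pull.
    new = []
    for i in range(n):
        pull = sum(math.sin(phases[j] - phases[i]) for j in range(n))
        new.append(phases[i] + dt * (freqs[i] + coupling / n * pull))
    phases = new
r_end = order_parameter(phases)

print(round(r_start, 2), round(r_end, 2))  # r climbs toward 1 as they sync
```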

Boom done.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mf8d2a4fab87efadab1e37a3f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Tuesday, August 27, 2019, at 10:01 AM, keghnfeem wrote:
> The visual alphabet 2.0:
> 
> https://www.youtube.com/watch?v=Z6MB-ZgPcNg
> 

I watched most of this excellent presentation but was waiting for the resultant 
symbol mechanics and dynamics. The structure of the extracted structure, and 
how that relates to an accompanying symbol system, is where I'm wondering 
researchers are at, so I can validate some of my thoughts.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mec80d10d7a560cc188a9abac
Delivery options: https://agi.topicbox.com/groups/agi/subscription