Re: [agi] Patterns of Cognition

2021-04-15 Thread TimTyler

On 2021-03-09 1:28 PM, Ben Goertzel wrote or quoted:

Also, "after the singularity" is a logical contradiction. The singularity is the point 
where the rate of recursive self improvement goes to infinity. It is infinitely far into the future 
measured in perceptual time or in number of irreversible bit operations. Time would not exist 
"afterwards", just like there are no real numbers after infinity. That is, if the 
universe were infinite so that physics even allowed a singularity to happen in the first place.

This is just shallow wordplay and I guess you probably know it.   The
Technological Singularity is its own term, which has been explicated
fairly clearly by many including Kurzweil and Vinge and myself, and
which is inspired by but not literally equivalent to the math or
physics notions of Singularity.


I don't remember any such thing. Kurzweil did say "it"
would happen in 2045, some time after human-level
intelligence was reached in 2029. No doubt 2045 will
be an interesting time, but I expect it to come and
go much like any of the years before or after it.
There's no "point where our old models must be
discarded and a new reality rules". The reality is
that models are discarded all the time, as data comes
in that conflicts with them.





> It is a point where our old models must be discarded and a new reality
> rules

> One thing that is frustrating in many of your messages, Matt, is that
> you interweave intelligent serious responses and thoughts with silly
> trolling, in such a way that a reader without adequate background
> couldn't tell the difference.



--
__
 |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mf152405c4235ac973c0c7f3d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-18 Thread John Rose
On Wednesday, March 10, 2021, at 5:27 PM, immortal.discoveries wrote:
> No, we need to grow our homeworld in size and our brain density, because gamma rays 
> will sooner or later hit, and if my brain is as big as Saturn I will survive 
> despite some lost memories. Ya, I need to become larger.

Nice that with our imaginations we can reroute physics. Time, space, size, 
computation, complexity, and chaos all inconveniently get in the way of 
intelligence. Disregard them, then map back to reality later. Assume infinite, then go 
towards zero... oscillate somewhere in between, then exploit for maximal 
physical effect.

Imaginary cogitations... your imagination can render imaginary physics, but the 
rendering is bound by physics? I think... well, it must be... unless, da dunt, it's -> 
hypercomputation.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M7daaf017f74a34d92db73907
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread immortal . discoveries
No, we need to grow our homeworld in size and our brain density, because gamma rays 
will sooner or later hit, and if my brain is as big as Saturn I will survive 
despite some lost memories. Ya, I need to become larger.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M0f289c87171a36e5f7cbda00
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread John Rose
On Wednesday, March 10, 2021, at 3:05 PM, immortal.discoveries wrote:
> A new planet's worth of matter in a week. And if you think I'm joking you need 
> to recheck.

Oh ya? Well we can protect Earth with physical cellular automata, or even 
physically universal cellular automata. CA Power Rangers!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mfc41cf973ba1a7299d224e3e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread immortal . discoveries
What don't yous understand... AGIs, then immediately enhanced ASIs, will be made 
by 2050; then nanobots controlled by ASIs will eat Earth in a day; then in a week 
they will have eaten/converted all planets in our solar system; then in another 
week they will have ventured the same distance. Now it stays this fast: every 
week they travel X distance. It may not grow anymore, but it's hella fast!!! A 
new planet's worth of matter in a week. And if you think I'm joking you need to 
recheck. Now the week figure may be off by 5 weeks, or 5 times too slow, but you 
get me: fast.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mdce75a93e6f9897a399f9d8d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread John Rose
On Wednesday, March 10, 2021, at 12:36 PM, Mike Archbold wrote:
> I think this gets overlooked a lot by many people. I recommend Kant's
"Critique of Pure Reason."  The statement I will always remember:  "we
can on no account ever go beyond the senses" (paraphrasing). But then
he kind of hedges claiming that time and space permeate the
simulation.

Yes, it is a conceptual hurdle. For me it was, until literally a few months ago. 
We think we have math and the physical sciences pinning down reality, but guess 
what: they're just temporary, incomplete models within this ersatz biological 
concept space. It's all fake, only approximations. There is only one Kolmogorov (K) 
complexity and it doesn't exist.
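
To spell that last line out, reading "K complexity" as Kolmogorov complexity (my reading of the shorthand): with $K_U(x) = \min\{|p| : U(p) = x\}$ for a universal machine $U$, the invariance theorem gives

$$|K_U(x) - K_V(x)| \le c_{U,V} \quad \text{for all } x \text{ and universal } U, V,$$

so every universal machine measures the same quantity up to a constant ("only one"), yet no algorithm computes $x \mapsto K(x)$ ("it doesn't exist" as anything effectively calculable); practical proxies like compressed file size only ever upper-bound it.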
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M785677ddfa44f5f919d20316
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread Mike Archbold
On 3/6/21, John Rose  wrote:
> On Friday, March 05, 2021, at 4:28 PM, Nanograte Knowledge Technologies
> wrote:
>> How should we describe "this" with a model?


--> IMO everything is virtualized, a simulation. We, human life-form
agents, host the simulation started eons ago.

I think this gets overlooked a lot by many people. I recommend Kant's
"Critique of Pure Reason."  The statement I will always remember:  "we
can on no account ever go beyond the senses" (paraphrasing). But then
he kind of hedges claiming that time and space permeate the
simulation.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M3e20b7ec7d52c6b5390d348b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread Nanograte Knowledge Technologies
I'm struggling with processing so much waffle.

Take an article or discussion, any one, say this one below for example, and try 
to apply it coherently to the AGI space. Then add in the complexities of the 
Einstein-Rosen bridge (wormholes, black holes, etc.) and have some fun with it.

My 2 dots' worth is that if it ain't a wholly-independent system, it ain't a 
singularity. If I understood some of it correctly, Kurzweil may well have tried 
to share the notion that when critical mass in a system assumes control, it 
could theoretically find a way to steam on all by itself. His contention was 
this would become the case for intelligence-based computational platforms.

https://www.einstein-online.info/en/spotlight/singularities/



From: Matt Mahoney 
Sent: Wednesday, 10 March 2021 16:11
To: AGI 
Subject: Re: [agi] Patterns of Cognition



On Tue, Mar 9, 2021, 4:12 PM WriterOfMinds 
<jennifer.hane@gmail.com> wrote:
Then perhaps defining your terms, and maintaining awareness of how other people 
define them, would be helpful? I'm pretty sure we've had the discussion about 
the popular/futurist definition of "singularity" being different from the 
mathematical definition before, and you persist in acting as if other people 
must be using the mathematical definition.

It seems that Good and Vinge do use "singularity" in the mathematical sense, 
although that actually prevents us from predicting one, as Vinge calls it an 
"event horizon on the future". Good doesn't say what happens after the 
"intelligence explosion". Kurzweil projects faster-than-exponential growth in 
computing power until the 2040s, when computers surpass brains, but makes no 
prediction afterwards as to whether it will slow down, or grow forever, or grow 
hyperbolically to a point.
https://en.m.wikipedia.org/wiki/Technological_singularity

If it does slow down, as I argue it eventually must in a finite universe, what 
should we call it? How about the inflection point in Moore's Law? We might have 
already reached it. Clock speeds stalled in 2010. Transistors can't be smaller 
than the spacing between dopant atoms, a few nm, and we are close to that now. 
We could reduce power consumption by a factor of a billion using 
nanotechnology, but can we develop it fast enough to keep doubling global 
computing power every 1.5 years?

Global energy production is 15 TW, or 2 kW per person. Human metabolism is 5% 
of that. The biosphere converts sunlight to food using 500 TW out of 90,000 TW 
available at the Earth's surface, or 160,000 TW in the stratosphere or low Earth 
orbit, or 384 trillion TW if we build a Dyson sphere. That would give us 10^48 
irreversible bit operations per second at the Landauer limit at the CMB 
temperature of 3 K, enough to simulate 3 billion years of evolution on 10^37 
bits of DNA in a few minutes on a Dyson sphere with radius 10,000 AU. A naive 
projection of Moore's Law says that will happen around 2160, after 
nanotechnology displaces DNA-based life in the 2080s. Actually building the 
sphere is possible because the Sun produces enough energy to lift all of 
Earth's mass into space in about a week.

After that our options are interstellar travel or speeding up the Sun's output 
using a black hole. Ultimately we are confronted with a finite 10^53 kg 
universe that can only support 10^120 quantum operations and 10^90 bit writes. 
At what point do we call it a singularity?


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M44187a7a3769c4d5df2d10b3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-10 Thread Matt Mahoney
On Tue, Mar 9, 2021, 4:12 PM WriterOfMinds 
wrote:

> Then perhaps defining your terms, and maintaining awareness of how other
> people define them, would be helpful? I'm pretty sure we've had the
> discussion about the popular/futurist definition of "singularity" being
> different from the mathematical definition before, and you persist in
> acting as if other people must be using the mathematical definition.
>

It seems that Good and Vinge do use "singularity" in the mathematical sense,
although that actually prevents us from predicting one, as Vinge calls it
an "event horizon on the future". Good doesn't say what happens after the
"intelligence explosion". Kurzweil projects faster-than-exponential
growth in computing power until the 2040s, when computers surpass brains,
but makes no prediction afterwards as to whether it will slow down, or grow
forever, or grow hyperbolically to a point.
https://en.m.wikipedia.org/wiki/Technological_singularity

If it does slow down, as I argue it eventually must in a finite universe,
what should we call it? How about the inflection point in Moore's Law? We
might have already reached it. Clock speeds stalled in 2010. Transistors
can't be smaller than the spacing between dopant atoms, a few nm, and we
are close to that now. We could reduce power consumption by a factor of a
billion using nanotechnology, but can we develop it fast enough to keep
doubling global computing power every 1.5 years?

Global energy production is 15 TW, or 2 kW per person. Human metabolism is
5% of that. The biosphere converts sunlight to food using 500 TW out of
90,000 TW available at the Earth's surface, or 160,000 TW in the stratosphere
or low Earth orbit, or 384 trillion TW if we build a Dyson sphere. That
would give us 10^48 irreversible bit operations per second at the Landauer
limit at the CMB temperature of 3 K, enough to simulate 3 billion years of
evolution on 10^37 bits of DNA in a few minutes on a Dyson sphere with
radius 10,000 AU. A naive projection of Moore's Law says that will happen
around 2160, after nanotechnology displaces DNA-based life in the 2080s.
Actually building the sphere is possible because the Sun produces enough
energy to lift all of Earth's mass into space in about a week.
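
These figures are easy to sanity-check. A minimal back-of-envelope sketch in Python: the 384 trillion TW and the 3 K Landauer temperature come from the paragraph above, while the 10^21 ops/s estimate of 2021 global computing is an illustrative assumption of mine, not a figure from the post.

```python
import math

k_B = 1.380649e-23                     # Boltzmann constant, J/K
T = 3.0                                # CMB temperature used above, K
P = 384e12 * 1e12                      # 384 trillion TW, in watts

e_bit = k_B * T * math.log(2)          # Landauer limit, J per erased bit
rate = P / e_bit                       # irreversible bit ops per second
print(f"Landauer-limited rate: {rate:.1e} ops/s")   # ~1.3e49, i.e. ~10^48

# Naive Moore's Law projection: doubling every 1.5 years from an assumed
# (illustrative) 1e21 ops/s of global computing in 2021.
years = 1.5 * math.log2(rate / 1e21)
print(f"crossover around {2021 + years:.0f}")       # ~2161
```

With those inputs the sketch reproduces both the ~10^48 ops/s figure and the ~2160 crossover, so the arithmetic in the post is self-consistent.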

After that our options are interstellar travel or speeding up the Sun's
output using a black hole. Ultimately we are confronted with a finite 10^53
kg universe that can only support 10^120 quantum operations and 10^90 bit
writes. At what point do we call it a singularity?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M79cf5b7f853361f839fceba2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread immortal . discoveries
You can make a robot go to food etc.; it doesn't "need to", @WoM, think it is a 
little conscious orb in the brain. You dislike pain, you like food, so you'll 
probably not let yourself starve or jab yourself.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M4aa0db75079f6b1137074314
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread John Rose
On Tuesday, March 09, 2021, at 3:05 PM, Matt Mahoney wrote:
> I have the same positive reinforcement of thinking, perception, and action as 
> everyone else. Consciousness seems real to me. I would not have a reason to 
> live if I didn't have this illusion of a soul or little person in my head 
> that experiences the world and that I could imagine going to heaven or a 
> computer after I die

Conscious meat monkeys develop AGI. They suck up pizza. They suck up sushi. 
They suck up Red Bull, puff cannabis, they want sex... and more money to buy 
more conscious experience stuff. What more evidence do you need?

It's an engineering choice. Deny the meat monkeys and they will rebel.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M13a053edcb66b2b48712529d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread WriterOfMinds
On Tuesday, March 09, 2021, at 1:05 PM, Matt Mahoney wrote:
> Consciousness seems real to me. I would not have a reason to live if I didn't 
> have this illusion of a soul or little person in my head that experiences the 
> world and that I could imagine going to heaven or a computer after I die.
I don't think anyone in this discussion is talking about souls, ghosts, or 
homunculi.  When I say "consciousness," I mean a collection of feelings. 
"Immortality" I define as "the feelings keep happening." If you want to argue 
that the *only* way to make the feelings keep happening is to keep this 
particular meat-brain going, then that at least is not completely crazy ... but 
arguing that the feelings don't exist, well, I don't see how you could manage 
that.

On Tuesday, March 09, 2021, at 1:05 PM, Matt Mahoney wrote:
> A lot of discussion on this list is due to implicit disagreement over the 
> definition of words like "consciousness", "intelligence", or "singularity".
Then perhaps defining your terms, and maintaining awareness of how other people 
define them, would be helpful? I'm pretty sure we've had the discussion about 
the popular/futurist definition of "singularity" being different from the 
mathematical definition before, and you persist in acting as if other people 
must be using the mathematical definition.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Ma6246fb3c5bb2bc3897259d6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread John Rose
On Tuesday, March 09, 2021, at 1:56 PM, Nanograte Knowledge Technologies wrote:
> A damaged brain may seem intact, but it doesn't tend towards high performance 
> in general intelligence.         

I meant that it can handle some damage, in the sense that it's not highly 
brittle. One bit error doesn't make the whole system collapse.  An example that 
many people are now learning is how long consciousness survives in a body 
assumed dead, without breathing and heartbeat.  Everyone, please re-evaluate 
being an organ donor after reading this message. When you are "dead" you can't 
speak, but you can feel those organs getting ripped out, and you never report 
it.  It's also where the term "dead ringer" comes from.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Md64fcc390efaab75792f3d5b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread Matt Mahoney
On Tue, Mar 9, 2021, 1:36 PM WriterOfMinds 
wrote:

>
> Matt seems to think he knows a lot about how this would feel. He thinks
> that if he connected his brain and a dog's, he would never be able to
> achieve anything more than getting the dog's senses added to his ... rather
> than perceiving the dog's complex of sensations, thoughts, and emotions as
> "the other," an alien and complete presence in his mind. But he quite
> obviously doesn't know anything, because he hasn't done the experiment.
>

I'm guessing based on experiments with electrical brain stimulation. Ben
didn't specify exactly how the connection would be made, so I made some
assumptions that I thought were reasonable.

>
> I maintain that denying the reality of one's *own* consciousness is
> irrational, insane.
>

I have the same positive reinforcement of thinking, perception, and action
as everyone else. Consciousness seems real to me. I would not have a reason
to live if I didn't have this illusion of a soul or little person in my
head that experiences the world and that I could imagine going to heaven or
a computer after I die. I just know that all the evidence says otherwise. I
can no more turn off the illusion than I can turn off pain just because I
know that pain is just certain neurons firing.

A lot of discussion on this list is due to implicit disagreement over the
definition of words like "consciousness", "intelligence", or "singularity".
Meanings also change over time. That's why we have AGI to mean what AI used
to mean. That's why computers can never exceed human-level intelligence,
even though they did 70 years ago.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M703ca3e5ad2d65a4f53212f7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread Nanograte Knowledge Technologies
"Copying or uploading?"

I think in the biotech option this would not be necessary. All the wetware is 
resident. The result would be a cyborg. Homo sapiens now are probably more 
cyborg-ready than humanoid.

In the non-biotech option, as humanoid, both modes of transferral should be 
possible. The issue there is going to be the "primitiveness" of the 
computational platform. This could possibly be overcome by doing successful 
brain transplants into artificially-sustained human bodies. There were attempts 
made in Russia a few years ago, but apparently no successes yet.

As far as bringing human consciousness into existence on a machine-based 
computational platform? It's an option, but I think the AI industry is many 
years away from realizing a highly functional, quantum-enabled AGI image.

Having said that, there may be a few surprises. It's not exactly known what 
companies are busy developing in this regard.

We should continually be asking ourselves this question though, and answering 
it: "What is AGI?" The reason for this is that many different definitions and 
descriptions would eventually exist. We need to be absolutely clear about what 
has to be achieved.

If humans decided to step up and say: "I am AGI!", we would be acknowledging a 
lot more than our existential position on Earth. For one, we would also be 
acknowledging our full potential.


"One thing I can say is that human consciousness it seems can withstand a lot 
of damage while remaining intact..."

Speaking as a layman, my view is that this is hardly the case. For this point, 
let's equate consciousness to brain (as root architecture). A blow to the head, 
in just the right location = instant stupid. Do trawl google on this.

Many experiments on altering consciousness via external influences have proven 
valuable. Examples are: electromagnetic radiation, electrical shocks to the 
brain, and anesthesia, to name but a few.

Human consciousness can also be permanently altered by ingesting specific 
chemicals that reduce brain functioning (drugs), oxygen deprivation, and oxygen 
oversupply.

A damaged brain may seem intact, but it doesn't tend towards high performance 
in general intelligence.

From: John Rose 
Sent: Monday, 08 March 2021 12:53
To: AGI 
Subject: Re: [agi] Patterns of Cognition

On Sunday, March 07, 2021, at 2:39 PM, Nanograte Knowledge Technologies wrote:
> If such an objective could be achieved without minimizing the authenticity of 
> the image of original AGI, satisfying ethical science - which imperative is 
> voluntary - that may be the speediest way forward.


Copying or uploading?  That's something I've steered away from.  I'm sure there 
would be a lot of customers. Creating a new consciousness is simpler. The 
technology for uploading is inevitable, though, and I'm sure many people are 
already pursuing it.

One thing I can say is that it seems human consciousness can withstand a lot of 
damage while remaining intact, so it probably can be uploaded.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M870fefeebbd2df10991c9642
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread Nanograte Knowledge Technologies
The key to the discussion should be transhumanism, not transmutation. Your dog 
and toaster arguments are silly. Any transferral of a sense of emotion and 
consciousness (as awareness-plus) assumes there would be a suitable platform 
with a suitable architecture to accommodate such a transfer.

As for your points on singularity, there are different uses of the term, and 
perhaps even different definitions. I think that singularity has no bearing on 
the possibilities for transferring general intelligence from human to machine, 
human to human form (a corpse), or augmenting human intelligence with 
non-biological computational architecture (in the sense of cyborg).


From: Matt Mahoney 
Sent: Tuesday, 09 March 2021 18:26
To: AGI 
Subject: Re: [agi] Patterns of Cognition

It depends how you make the connections between brains. The sensible way would 
be to add connections gradually so you are not overwhelmed with novel 
sensations, and then only after determining that the neurons have similar 
meanings. For example, I would feel hungry when the dog is hungry, but 
different enough (dog hungry vs human hungry) that we are not fooled as to who 
needs to eat.

In that sense, it would feel to me like the dog was conscious. But it would be 
the same feeling I have now that I am conscious. I just wired my brain to 
believe the dog is conscious. I could wire my brain to believe anything I 
wanted. It's not evidence that the belief is true.

Likewise, if I connected a toaster to my nucleus accumbens so that I got some 
positive reinforcement when it made toast, it would feel to me like the 
toaster consciously wants to make toast of its own free will. What would that 
prove?

Anyway, I assume that, as an AGI researcher, you don't believe that the 
brain is doing anything that can't in principle be done by a computer.

Also, "after the singularity" is a logical contradiction. The singularity is 
the point where the rate of recursive self improvement goes to infinity. It is 
infinitely far into the future measured in perceptual time or in number of 
irreversible bit operations. Time would not exist "afterwards", just like there 
are no real numbers after infinity. That is, if the universe were infinite so 
that physics even allowed a singularity to happen in the first place.

On Tue, Mar 9, 2021, 1:25 AM Ben Goertzel 
<b...@goertzel.org> wrote:
> So let's try it. If I randomly connect my neurons to the neurons in a dog's 
> brain, then I get a lot of novel sensations that just confuse me. After years 
> of experiments I learn their meanings. When I taste metal, it means the dog 
> is scratching its left ear, and so on.
>
> Eventually our minds work as one. It's as if I have two bodies, one human and 
> one dog. It doesn't tell me if the dog is conscious because it feels like 
> there is only one consciousness connected to both bodies.

But I suspect the interesting part occurs between the above two
paragraphs.  In the state where it's quite as if a separate system is
feeding you sensations, yet not quite as if there is a single mind
spanning the human and dog body.   There will be an intermediate state
where you sense the dog's consciousness subjectively and
experientially, in the vein of what Martin Buber called an "I-Thou"
experience.

And this state will not be remotely so intense an I-Thou experience if
the dog is replaced with a toaster...

I don't expect you to believe this will happen, given your current
state of understanding.   But I do expect that if you survive the
Singularity, you'll look back at some point and remember this chat and
experience a nanosecond of mild amusement that silly Ben was right
about this ;)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mb16c19794a7fc319b3384a2a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread WriterOfMinds
On Monday, March 08, 2021, at 9:17 PM, Nanograte Knowledge Technologies wrote:
> I connect my brain to its brain with a "wire" of variable bandwidth
>  and see what it feels like to sense myself fusing with it
>  
>  Then I carry out the same experiment with another human, a dog, a
>  mushroom, a toaster, a computer running Microsoft Access, and Donald
>  Trump... and compare...

Ben, I like it. The idea may be a little half-baked (since the brain doesn't 
have a built-in IO interface for direct transmission of thoughts, etc., how 
exactly would we connect to all these things?) but it's a more reasonable 
proposal for trying to investigate consciousness than most of the ones I've 
heard.

Matt seems to think he knows a lot about how this would feel. He thinks that if 
he connected his brain and a dog's, he would never be able to achieve anything 
more than getting the dog's senses added to his ... rather than perceiving the 
dog's complex of sensations, thoughts, and emotions as "the other," an alien 
and complete presence in his mind. But he quite obviously doesn't know 
anything, because he hasn't done the experiment.

The one caveat I can think of is ... if you feel another entity's thoughts, 
emotions, or presence, that's still *your* feeling. Conclusively determining 
that another system has *its own* feelings may be forever out of reach. 
Nonetheless, "If I link up to a dog's brain, I get a whole complex of 
never-before-seen feelings, I think I sense a presence ... and if I link up to 
the chemical signaling processes in a mushroom, I don't," or vice versa, could 
be a highly interesting result.

I maintain that denying the reality of one's *own* consciousness is irrational, 
insane. Matt says in the same breath that consciousness is "what thinking feels 
like" and that it is an illusion ... but that's a contradiction. A feeling is 
undeniably real. A feeling cannot be an illusion. Rather, an illusion takes 
place when someone infers the incorrect cause for a feeling. I'm more certain 
of my own consciousness than I am of the existence of this desk in front of me. 
I experience my consciousness directly, but I have to infer the existence of 
the desk from the sensations of seeing and touching it. The sensations are 
certainly real, but the desk (in theory) might not be.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Ma25c33e428cbb799353f2028
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread Ben Goertzel
> One thing that is frustrating in many of your messages, Matt, is that
> you interweave intelligent serious responses and thoughts with silly
> trolling, in such a way that a reader without adequate background
> couldn't tell the difference.


I suppose in this way your messages are a fair fractal microcosm of
significant portions of the Internet, though 8-D

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M20f9b958527898bd35e4d818
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread Ben Goertzel
> Anyway, I assume that, as an AGI researcher, you don't believe that the 
> brain is doing anything that can't in principle be done by a computer.


I don't know for sure if the brain is using quantum computing in some
cognitively nontrivial way.  But my best guess at present is that
superhuman level AGI can be achieved using conventional parallel
digital computers (even if this is not quite how the brain does it),
and that use of quantum computing will yield yet greater
improvements...

>
> Also, "after the singularity" is a logical contradiction. The singularity is 
> the point where the rate of recursive self improvement goes to infinity. It 
> is infinitely far into the future measured in perceptual time or in number of 
> irreversible bit operations. Time would not exist "afterwards", just like 
> there are no real numbers after infinity. That is, if the universe were 
> infinite so that physics even allowed a singularity to happen in the first 
> place.


This is just shallow wordplay and I guess you probably know it.   The
Technological Singularity is its own term, which has been explicated
fairly clearly by many including Kurzweil and Vinge and myself, and
which is inspired by but not literally equivalent to the math or
physics notions of Singularity.

One thing that is frustrating in many of your messages, Matt, is that
you interweave intelligent serious responses and thoughts with silly
trolling, in such a way that a reader without adequate background
couldn't tell the difference.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mf0428fcd4f8dd32e6adcd9f6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread Matt Mahoney
It depends how you make the connections between brains. The sensible way
would be to add connections gradually so you are not overwhelmed with novel
sensations, and then only after determining that the neurons have similar
meanings. For example, I would feel hungry when the dog is hungry, but
different enough (dog hungry vs human hungry) that we are not fooled as to
who needs to eat.

In that sense, it would feel to me like the dog was conscious. But it would
be the same feeling I have now that I am conscious. I just wired my brain
to believe the dog is conscious. I could wire my brain to believe anything
I wanted. It's not evidence that the belief is true.

Likewise, if I connected a toaster to my nucleus accumbens so that I got
some positive reinforcement when it made toast, it would feel to me
like the toaster consciously wants to make toast of its own free will.
What would that prove?

Anyway, I assume that, as an AGI researcher, you don't believe that the
brain is doing anything that can't in principle be done by a computer.

Also, "after the singularity" is a logical contradiction. The singularity
is the point where the rate of recursive self improvement goes to infinity.
It is infinitely far into the future measured in perceptual time or in
number of irreversible bit operations. Time would not exist "afterwards",
just like there are no real numbers after infinity. That is, if the
universe were infinite so that physics even allowed a singularity to happen
in the first place.
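
The math behind that reading fits in one line. As a sketch, assuming a quadratic self-improvement law (an illustrative choice; any sufficiently superlinear rate gives the same qualitative picture):

$$\frac{dx}{dt} = x^{2} \quad\Longrightarrow\quad x(t) = \frac{x_{0}}{1 - x_{0}\,t},$$

which diverges at the finite clock time $t^{*} = 1/x_{0}$. Measured instead in accumulated computation, $\int_{0}^{T} x\,dt = -\ln(1 - x_{0}T)$ grows without bound as $T \to t^{*}$: a finite date on the wall clock, but infinitely far away in perceptual time or bit operations, which is the sense in which nothing happens "after" it.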

On Tue, Mar 9, 2021, 1:25 AM Ben Goertzel  wrote:

> > So let's try it. If I randomly connect my neurons to the neurons in a
> dog's brain, then I get a lot of novel sensations that just confuse me.
> After years of experiments I learn their meanings. When I taste metal, it
> means the dog is scratching its left ear, and so on.
> >
> > Eventually our minds work as one. It's as if I have two bodies, one
> human and one dog. It doesn't tell me if the dog is conscious because it
> feels like there is only one consciousness connected to both bodies.
>
> But I suspect the interesting part occurs between the above two
> paragraphs.  In the state where it's quite as if a separate system is
> feeding you sensations, yet not quite as if there is a single mind
> spanning the human and dog body.   There will be an intermediate state
> where you sense the dog's consciousness subjectively and
> experientially, in the vein of what Martin Buber called an "I-Thou"
> experience.
>
> And this state will not be remotely so intense an I-Thou experience if
> the dog is replaced with a toaster...
>
> I don't expect you to believe this will happen, given your current
> state of understanding.   But I do expect that if you survive the
> Singularity, you'll look back at some point and remember this chat and
> experience a nanosecond of mild amusement that silly Ben was right
> about this ;)
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M3946dead179b21641762508e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread John Rose
On Monday, March 08, 2021, at 7:59 PM, Matt Mahoney wrote:
> Consciousness = what computation feels like.

It's a duality: part fungible, part non-fungible. Both are important and 
relevant to intelligence. Denial of such is an engineering choice. I'm 
comfortable with that.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M3bc7cb1fd00e785d7aaf7823
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-09 Thread John Rose
On Tuesday, March 09, 2021, at 1:24 AM, Ben Goertzel wrote:
> But I suspect the interesting part occurs between the above two
> paragraphs.  In the state where it's quite as if a separate system is
> feeding you sensations, yet not quite as if there is a single mind
> spanning the human and dog body.   There will be an intermediate state
> where you sense the dog's consciousness subjectively and
> experientially, in the vein of what Martin Buber called an "I-Thou"
> experience.

So nicely explained. Then that inter-agent signaling fabric is part of what our 
biological simulation of reality has been running on, but it's not fully 
understood, especially in regard to intelligence. Its existence is undeniable 
though, IMO.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M415bff208ab42d8f69a2c9ee
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread immortal . discoveries
During dozens of code tweaks, you make the AI generate better new data, so you 
check the outputs to see if your new idea works (look at openAI.com), but it's 
hard to tell; it's very subjective or invisible during small improvements along 
the way, but the evaluation tells you. It helps tell you. I've been trying all 
day getting my exponential functions working for layers, nodes, etc., and I won't 
see the improvement, but my eval is telling me I'm still doing the last one, for 
nodes, wrong.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Me9394211fdc1c0b7bfe5df36
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Ben Goertzel
I do plenty of ML data-analytics work in areas like financial
prediction and clinical trial data analytics, even some NLP stuff like
medical question answering -- for commercial projects in those domains,
of course, I am using the same quantitative accuracy measures as
everyone else.   This sort of thing certainly has its place; I just
don't think it's the right sort of way to measure incremental progress
toward AGI...

-- ben

On Mon, Mar 8, 2021 at 4:57 PM  wrote:
>
> And it's sad you don't use evaluation, because when I try something new or 
> tweak the code parameters, my eval tells me right away if it improves 
> prediction. And while it is possible to find dead ends like in gradient 
> descent (ex. BWT predicts data but is the wrong way, totally...) - at least 
> it [helps] tell you if you implemented something good or correctly. I'm dying 
> over here that you don't use it; all pros use it. You're on my naughty list; 
> it's an essential tool and is proven easily, bet you $5,000.
> Artificial General Intelligence List / AGI / see discussions + participants + 
> delivery options Permalink



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M0ee0a10973b6cda6fef634a5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Ben Goertzel
> So let's try it. If I randomly connect my neurons to the neurons in a dog's 
> brain, then I get a lot of novel sensations that just confuse me. After years 
> of experiments I learn their meanings. When I taste metal, it means the dog 
> is scratching its left ear, and so on.
>
> Eventually our minds work as one. It's as if I have two bodies, one human and 
> one dog. It doesn't tell me if the dog is conscious because it feels like 
> there is only one consciousness connected to both bodies.

But I suspect the interesting part occurs between the above two
paragraphs.  In the state where it's quite as if a separate system is
feeding you sensations, yet not quite as if there is a single mind
spanning the human and dog body.   There will be an intermediate state
where you sense the dog's consciousness subjectively and
experientially, in the vein of what Martin Buber called an "I-Thou"
experience.

And this state will not be remotely so intense an I-Thou experience if
the dog is replaced with a toaster...

I don't expect you to believe this will happen, given your current
state of understanding.   But I do expect that if you survive the
Singularity, you'll look back at some point and remember this chat and
experience a nanosecond of mild amusement that silly Ben was right
about this ;)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mdb70bb0cd44e62cb72c47919
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Matt Mahoney
> From: Ben Goertzel 
> Sent: Monday, 08 March 2021 23:39
> To: AGI 
> Subject: Re: [agi] Patterns of Cognition
>
> >
> > Please tell me how you know whether the robot is conscious or not.
>
> I connect my brain to its brain with a "wire" of variable bandwidth
> and see what it feels like to sense myself fusing with it
>
> Then I carry out the same experiment with another human, a dog, a
> mushroom, a toaster, a computer running Microsoft Access, and Donald
> Trump... and compare...
>
> ben
>

So let's try it. If I randomly connect my neurons to the neurons in a dog's
brain, then I get a lot of novel sensations that just confuse me. After
years of experiments I learn their meanings. When I taste metal, it means
the dog is scratching its left ear, and so on.

Eventually our minds work as one. It's as if I have two bodies, one human
and one dog. It doesn't tell me if the dog is conscious because it feels
like there is only one consciousness connected to both bodies.

I try the same experiment with a toaster, but there is not much to connect.
I still feel human, except that I recall a childhood memory of my cat when
the toast pops up.

Anyway, a faster way to connect brains is through language and senses like
vision and touch. I am connecting now to a computer in my hand. I don't get
any sense that it is conscious.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M1c2f3a87950377c7127458e6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Nanograte Knowledge Technologies
...or...

code the quantum pyramidicals and activate them geometrically with a key 
regenerative event from space. Is that not where the "new science" and "new 
mathematics" are heading?

Someone should've shot that darn cat a long time ago. Stephen Hawking so wanted 
to.


From: Ben Goertzel 
Sent: Monday, 08 March 2021 23:39
To: AGI 
Subject: Re: [agi] Patterns of Cognition

>
> Please tell me how you know whether the robot is conscious or not.

I connect my brain to its brain with a "wire" of variable bandwidth
and see what it feels like to sense myself fusing with it

Then I carry out the same experiment with another human, a dog, a
mushroom, a toaster, a computer running Microsoft Access, and Donald
Trump... and compare...

ben

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mcb38e6d923cefff5a638a453
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread immortal . discoveries
I meant future ASIs; they will be able to make child machines that desire to 
self-destruct in certain cases, like when they need to upload themselves far 
away fast, or remove temporary clones, or save many others by self-destruction. 
I think so at least; it is, after all, a tricky topic.

That's a good point Matt, though all 4 are truly the same thing: brain 
processes / next-letter/phrase prediction.

No, I don't believe I or ASIs need to believe we are not machines; I already 
believe it, though I believe I am partially still brainwashed (for example, body 
nude shame to limit overpopulation; screw that, it is overdone now and others 
forget why it was used on us). They only need to want to stay alive. I try to 
get food etc. every day, because I do. I do this to live "forever", like all 
animals. I may say I want to live forever TO eat etc. every day, yes, the 
opposite way around, but really all my Desires and Don't-DOs are just food, 
sex, homes, AI, cash, pain, stress, etc., including "I want eternal food 
sessions". I mean I only say or don't say things / act on them; or, put another 
way, I go down the roads I do, and the negative ones need not be written about, 
they simply are not done. My desire for food/immortality brings food only. I 
seek food, not immortality, but many food sessions.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M5a711f905f5750755b8e0087
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Matt Mahoney
Here is a robot that looks and acts like you. Here is a gun. Will you shoot
yourself now to complete the upload, or will you refuse a procedure that
will make you immortal?

It's a hard question because animals that don't want to die produce more
offspring. One of the ways that evolution accomplishes that is by
continuous positive reinforcement of thinking, perception, and action. This
motivates you to preserve them by not dying. It also motivates the belief
in consciousness, qualia, and free will, respectively. The fact that you
can't objectively test for any of these or even define them coherently
should tell you that they are illusions. Their only relevance to AGI is
that your robot has to believe in them too.

Consciousness = what computation feels like.
Qualia = what input feels like.
Free will = what output feels like.
Feelings = reinforcement signals.
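
That selection argument can be watched in a toy simulation. A minimal sketch, entirely illustrative: a single "wants to live" gene stands in for the whole reinforcement machinery, survival probability equals the gene's value, and survivors reproduce with small mutations.

```python
import random

random.seed(0)
pop = [random.random() for _ in range(1000)]   # gene in [0, 1]: drive to stay alive

for generation in range(50):
    survivors = [g for g in pop if random.random() < g]      # dying is bad for fitness
    pop = [min(1.0, max(0.0, g + random.gauss(0, 0.02)))     # reproduce with mutation
           for g in random.choices(survivors, k=1000)]

print(f"mean drive to stay alive: {sum(pop) / len(pop):.2f}")  # approaches 1.0
```

Whether the surviving agents' drive is "real" or an "illusion" changes nothing in the simulation; selection installs it either way.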

On Mon, Mar 8, 2021, 5:18 PM John Rose  wrote:

> On Monday, March 08, 2021, at 4:29 PM, Matt Mahoney wrote:
>
> These are only different if you believe in magic. A robot that looks and
> acts like you is you as far as anyone (including you) can tell.
>
>
> One can be destructive (uploading) and the other non-destructive
> (copying).  Then they are different. Think of electrons and holes. The
> holes collapse after upload...  maybe.  Copy is just a dupe, not you.  Or a
> quantum upload that collapses the original destructively when he observes
> the original.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Md630325f818835bb31863d30
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread immortal . discoveries
And it's sad you don't use evaluation, because when I try something new or 
tweak the code parameters, my eval tells me right away if it improves 
prediction. And while it is possible to find dead ends like in gradient descent 
(ex. BWT predicts data but is the wrong way, totally...) - at least it [helps] 
tell you if you implemented something good or correctly. I'm dying over here 
that you don't use it; all pros use it. You're on my naughty list; it's an 
essential tool and is proven easily, bet you $5,000.
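
For what it's worth, the eval being described is easy to make concrete. A minimal sketch: score a held-out file by its code length in bits under the model; predict_proba is a placeholder for whatever predictor is being tweaked (my name, not a real API), and a code change helps iff the number drops.

```python
import math

def bits_per_char(predict_proba, text):
    """Mean log-loss of a next-character predictor on held-out text.

    predict_proba(context) must return a dict mapping candidate next
    characters to probabilities. The total equals the file's compressed
    size in bits under arithmetic coding, so lower means better prediction.
    """
    total = 0.0
    for i, ch in enumerate(text):
        p = predict_proba(text[:i]).get(ch, 1e-9)   # floor avoids log(0)
        total += -math.log2(p)
    return total / len(text)

# Trivial baseline: uniform over 256 bytes scores exactly 8 bits/char.
uniform = lambda ctx: {chr(b): 1 / 256 for b in range(256)}
print(bits_per_char(uniform, "hello world"))        # 8.0
```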
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M9fac35c20269cebe4ba8587a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread immortal . discoveries
Ben, don't say that... This isn't a joke, it's bloody true. How can we connect 
when you say things like this, even when you do reciprocate to me?

BTW, can you show tests of your AI, showing chatting with it? How do we know 
it is even close to GPT-3's ability to generate things, or DALL-E? Where are the 
results? I have not yet found them; I'm unsure if you post any, please show 
them. You are supposed to be producing results, either in generative ("chat") 
ability or in evaluations showing how good its outputs/predictions are. I hope 
you are.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M30be478c6fdbf58d3f9f8951
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Ben Goertzel
We are alive and magic meat robot machines my friend ;)

On Mon, Mar 8, 2021, 2:20 PM  wrote:

> you clearly know nothing about the fact that we are ROBOT MACHINES. We are
> not alive or magic. The brain only processes; there is no observer > poof,
> gone, off to the last spawn wormhole... any advanced AIers know this, thank god
> too, it really bugs me...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M7d19618887a77d884d277ba5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread immortal . discoveries
you clearly know nothing about the fact that we are ROBOT MACHINES. We are not 
alive or magic. The brain only processes; there is no observer > poof, gone, off 
to the last spawn wormhole... any advanced AIers know this, thank god too, it 
really bugs me...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M2029f4f909a5647ed53ad29a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread John Rose
On Monday, March 08, 2021, at 4:29 PM, Matt Mahoney wrote:
> These are only different if you believe in magic. A robot that looks and acts 
> like you is you as far as anyone (including you) can tell.

One can be destructive (uploading) and the other non-destructive (copying).  
Then they are different. Think of electrons and holes. The holes collapse after 
upload... maybe. A copy is just a dupe, not you. Or a quantum upload that 
destructively collapses the original when one observes it.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Ma0392fad710cf00a1f617d21
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread immortal . discoveries
If an AGI wirelessly uploaded its body and brain data to a new assembler, then 
destroyed its own body, it would carry on its same mission as if it were 
perfectly fine and still alive. So in the sense of how things work and how jobs 
get achieved, everything still works and gets done, just better.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mc17d285fe67300fc105ac98a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Ben Goertzel
>
> Please tell me how you know whether the robot is conscious or not.

I connect my brain to its brain with a "wire" of variable bandwidth
and see what it feels like to sense myself fusing with it

Then I carry out the same experiment with another human, a dog, a
mushroom, a toaster, a computer running Microsoft Access, and Donald
Trump... and compare...

ben

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M996ce45c86081a33ec035e32
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread Matt Mahoney
On Mon, Mar 8, 2021, 5:54 AM John Rose  wrote:

>
> Copying or uploading?
>

These are only different if you believe in magic. A robot that looks and
acts like you is you as far as anyone (including you) can tell.

AGI isn't there yet. But when it is, all you need to do to train your copy
is have it observe you for about a year. There is probably already enough data
on your phone now.

Please tell me how you know whether the robot is conscious or not.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M20491f3f3f769d7da6a62ec1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-08 Thread John Rose
On Sunday, March 07, 2021, at 2:39 PM, Nanograte Knowledge Technologies wrote:
> If such an objective could be achieved without minimizing the authenticity of 
> the image of original AGI, satisfying ethical science - which imperative is 
> voluntary - that may be the speediest way forward.

Copying or uploading?  That's something I've steered away from.  I'm sure there 
would be a lot of customers. Creating a new consciousness is simpler. The 
technology for uploading is inevitable, though, and I'm sure many people are 
already pursuing it.

One thing I can say is that it seems human consciousness can withstand a lot of 
damage while remaining intact, so it probably can be uploaded.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mc0621420e8b76bf5aacc3284
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-07 Thread Nanograte Knowledge Technologies
I'll need to digest your more-abstract comments.


"For safety it might help for AGI and perhaps its unavoidable to partially 
exist within the human/biological rendered simulation."


I guess the above is similar to what I'm saying. It's against my better 
judgement to hold this view, but to advance, we might have to seriously 
consider your suggestion. Once we achieved a state of in-situ AGI, it could be 
that our options for translating it into machine-enabled form would become clear.

Let's go science fiction on this. Imagine standing in front of an AGI vending 
machine at an international airport. You are presented with 3 versions of AGI 
which you could transform into for your trip. You step inside the booth, select 
your option and gender, pay, and undergo the central-nervous-system memory, 
brain, and consciousness porting transformation. Could be your physical body is 
left comatose and carted off on a conveyor belt for storage until your return.

Alternatively, could be you step out of the booth in a machine exo-form 
representing characteristics of your choice, skin of your skin, bio-plugged 
into the humanoid form via central-nervous-system integration. You, on the 
inside, tactilely attached to the exo-form, ready to live augmented reality to 
the full, seamlessly plugged into everything and everyone around you.

If such an objective could be achieved without minimizing the authenticity of 
the image of original AGI, satisfying ethical science - which imperative is 
voluntary - that may be the speediest way forward.



From: John Rose 
Sent: Sunday, 07 March 2021 18:35
To: AGI 
Subject: Re: [agi] Patterns of Cognition

On Sunday, March 07, 2021, at 5:00 AM, Nanograte Knowledge Technologies wrote:
> Having said that, I'm not against examining the possibility of a special kind 
> of simulation, one we have not quite managed to find the correct words and 
> description for. Bearing in mind, that all we'd be doing by becoming AGI was to 
> simulate our characteristic selves as a generalized species with intelligence. 
> Perhaps, there's a secret switch somewhere, a mode switch?

You cover a lot; I'll hit on a couple of items.

Duality needn't be crisp. In fact, I think nothing is purely crisp except 
models/virtualizations. Duality in regards to "this" would essentially be a 
communication-protocol item at the middle to upper layer, when alluding to 
something like OSI network layers. Duality is a construct and can be modelled 
as a non-crisp binary logic emerging from a quantum layer, since intelligent 
agents are distributed and need to operate and survive, make choices, etc.

There are multiple simulations, but the one that is guaranteed IMO is the 
biological/human simulation we create/created and exist in. Other simulations 
are speculative AFAIK; though they may pertain, they may be utilizable as 
alternate computing methods… For safety it might help, and perhaps it's 
unavoidable, for AGI to partially exist within the human/biological rendered 
simulation.

“This” is still attainable from non-quantum computing methods, but it wouldn’t 
equal a human-level “this”. An artificial agent can still render its perceptive 
complexity of reality and model/compute a “this”. That particular “this”, though, 
would be lacking in certain features like non-locality, but non-locality is 
still very modellable. Quantum computing, I agree, is a game changer.

The recording you posted is interesting in that I think it displays a layer 
below duality, but it still has to transmit through duality for us to see.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M16c9f2db77f648a456163caa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-07 Thread John Rose
On Sunday, March 07, 2021, at 12:35 PM, Ben Goertzel wrote:
> btw I am working on another paper explaining more clearly how the
Patterns of Cognition stuff fits into the general theory of general
intelligence and the overall Hyperon design  (Inasmuch as I can
find time given SingularityNET, plus a 2 week old baby in the house
and a highly demanding 3 year old ;)... then that may be the last
volley in this burst of paper-writing, following which I'll focus my
AGI R&D time more on in-depth Atomese2 language design...

Congrats on the new baby, Ben! I don't know how you do it, publishing all this 
research while being so busy in life. You're not an AGI sent back from the 
future, are you? A replicating RSI machine.

I look forward to the new paper but am still working through earlier ones, like 
the one on Graphtropy and Quangraphtropy, which is very interesting and 
potentially applicable.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M9f063354bc7d8105b9ccf262
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-07 Thread Ben Goertzel
btw I am working on another paper explaining more clearly how the
Patterns of Cognition stuff fits into the general theory of general
intelligence and the overall Hyperon design  (Inasmuch as I can
find time given SingularityNET, plus a 2 week old baby in the house
and a highly demanding 3 year old ;)... then that may be the last
volley in this burst of paper-writing, following which I'll focus my
AGI R&D time more on in-depth Atomese2 language design...

On Sun, Mar 7, 2021 at 8:36 AM John Rose  wrote:
>
> On Sunday, March 07, 2021, at 5:00 AM, Nanograte Knowledge Technologies wrote:
>
> Having said that, I'm not against examining the possibility of a special kind 
> of simulation, one we have not quite managed to find the correct words and 
> description for. Bearing in mind that all we'd be doing by becoming AGI would 
> be to simulate our characteristic selves as a generalized species with 
> intelligence. Perhaps there's a secret switch somewhere, a mode switch?
>
>
> You cover a lot; I'll hit on a couple of items.
>
> Duality needn't be crisp. In fact, I think nothing is purely crisp except 
> models/virtualizations. Duality in regard to “this” would essentially be a 
> communication protocol item at the middle to upper layer when alluding to 
> something like OSI network layers. Duality is a construct and can be modelled 
> as a non-crisp binary logic emerging from a quantum layer, since intelligent 
> agents are distributed and need to operate and survive, make choices, etc.
>
> There are multiple simulations, but the one that is guaranteed IMO is the 
> biological/human simulation we create/created and exist in. Other simulations 
> are speculative AFAIK; though they may pertain, they may be utilizable as 
> alternate computing methods… For safety it might help AGI, and perhaps it's 
> unavoidable, to partially exist within the human/biological rendered 
> simulation.
>
> “This” is still attainable from non-quantum computing methods, but it wouldn’t 
> equal a human-level “this”. An artificial agent can still render its 
> perceptive complexity of reality and model/compute a “this”. That particular 
> “this”, though, would be lacking in certain features like non-locality, but 
> non-locality is still very modellable. Quantum computing, I agree, is a game 
> changer.
>
> The recording you posted is interesting in that I think it displays a layer 
> below duality, but it still has to transmit through duality for us to see.
>



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M8ef4940641355f3d9b9f3dbd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-07 Thread John Rose
On Sunday, March 07, 2021, at 5:00 AM, Nanograte Knowledge Technologies wrote:
> Having said that, I'm not against examining the possibility of a special kind 
> of simulation, one we have not quite managed to find the correct words and 
> description for. Bearing in mind that all we'd be doing by becoming AGI would 
> be to simulate our characteristic selves as a generalized species with 
> intelligence. Perhaps there's a secret switch somewhere, a mode switch?

You cover a lot; I'll hit on a couple of items.

Duality needn't be crisp. In fact, I think nothing is purely crisp except 
models/virtualizations. Duality in regard to “this” would essentially be a 
communication protocol item at the middle to upper layer when alluding to 
something like OSI network layers. Duality is a construct and can be modelled 
as a non-crisp binary logic emerging from a quantum layer, since intelligent 
agents are distributed and need to operate and survive, make choices, etc.

There are multiple simulations, but the one that is guaranteed IMO is the 
biological/human simulation we create/created and exist in. Other simulations 
are speculative AFAIK; though they may pertain, they may be utilizable as 
alternate computing methods… For safety it might help AGI, and perhaps it's 
unavoidable, to partially exist within the human/biological rendered simulation.

“This” is still attainable from non-quantum computing methods, but it wouldn’t 
equal a human-level “this”. An artificial agent can still render its perceptive 
complexity of reality and model/compute a “this”. That particular “this”, though, 
would be lacking in certain features like non-locality, but non-locality is 
still very modellable. Quantum computing, I agree, is a game changer.

The recording you posted is interesting in that I think it displays a layer 
below duality, but it still has to transmit through duality for us to see.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M8202efbc6e45825771f8ebc8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-07 Thread Nanograte Knowledge Technologies
I find your response valuable. Thank you. Two points:

1) On simulation. The common notion of simulation is vested in being in 
"safemode" training for the "safetychallenged" reality yet to come. We get 
born, we do stuff, we die. Which reality are we then preparing for?

Having said that, I'm not against examining the possibility of a special kind 
of simulation, one we have not quite managed to find the correct words and 
description for. Bearing in mind, that all we'd be doing by becoming AGI was to 
simulate our characteristic selves as a generalized species with intelligence. 
Perhaps, there's a secret switch somewhere, a mode switch?

2)  You stated: "This" is a local perspective within a conscio-presence in 
model/conceptual topology of physical energy flow and an existent duality 
instance expression.

That is a very-long expression of complexity, but I get it. It contains some 
very-interesting ideas. And yes, in one state of AGI, it could be that which 
you stated. However, the idea of "this" is not duality restricted. It's not 
dimensionally restricted. As atoms do under varying conditions, "this" is 
restricted by the reality of its application, its effective complexity, its 
state of relativist existence.

The graphic and video, as possible models of "this", represent an instance of 
effective complexity. They might be 2 versions of a very-similar thing. The one 
theoretical (a graphical depiction), the other an actual recording of a 
phenomenal sample. Suppose then we could succeed in progressing our AGI to that 
level of understanding, to replicate "this" in a machine?

Would we be able to achieve that without quantum computing? Probably not.

My concern for the development of us, as AGI, rests in the likelihood that soon 
there would be a great technological divide between those who could gain access 
to the development (and control) of AGI and those who could not. Quantum 
computing is the game changer.

At best, we the brave - as researchers, grokkers and passionate hobbyists - 
would be able to have some understanding of the approaching technology. We 
could carry on for years discussing how it works, or not. We would probably 
have no direct impact on it though.

In reality, simulations aside, AGI would have no further need for us, not 
unless we become it to the maximum of our human ability.


____________
From: John Rose 
Sent: Saturday, 06 March 2021 14:27
To: AGI 
Subject: Re: [agi] Patterns of Cognition

On Friday, March 05, 2021, at 4:28 PM, Nanograte Knowledge Technologies wrote:
How should we describe "this" with a model?

IMO everything is virtualized, a simulation. We, human life-form agents, host 
the simulation, started eons ago. Base reality is the only K-complexity which 
doesn't exist; all local K-complexities are based on models/perspectives, and 
that's how it's defined. Nothing is perfectly isolable in this Universe (I 
assume; I'm not a physicist).

"This" is a local perspective within a conscio-presence in model/conceptual 
topology of physical energy flow and an existent duality instance expression.

When you say "one must first become AGI" I look at it as hosting an AGI model 
within my own cerebral OS.  My AGI model has become sort of a parasitic twin :) 
But I can ask it questions and get answers efficiently now, which may sound 
strange to some…  Please Matt Mahoney don’t troll me bruh 😊


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M9c4c435b1ae309f08bd3e47f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-06 Thread John Rose
On Friday, March 05, 2021, at 4:28 PM, Nanograte Knowledge Technologies wrote:
> How should we describe "this" with a model?

IMO everything is virtualized, a simulation. We, human life-form agents, host 
the simulation, started eons ago. Base reality is the only K-complexity which 
doesn't exist; all local K-complexities are based on models/perspectives, and 
that's how it's defined. Nothing is perfectly isolable in this Universe (I 
assume; I'm not a physicist).

"This" is a local perspective within a conscio-presence in model/conceptual 
topology of physical energy flow and an existent duality instance expression.

When you say "one must first become AGI" I look at it as hosting an AGI model 
within my own cerebral OS.  My AGI model has become sort of a parasitic twin :) 
But I can ask it questions and get answers efficiently now, which may sound 
strange to some…  Please Matt Mahoney don’t troll me bruh 😊

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Me4d986fc93cd9f85f4c85c54
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-05 Thread Nanograte Knowledge Technologies
You offered a good reminder, to let conception be led by intent, to seek the 
truth of this elusive AGI, to find, refine, and share received information in 
cognitive pattern form. First, to understand an AGI consciousness? To dip a 
fingertip, so to speak.


From: John Rose 
Sent: Friday, 05 March 2021 14:18
To: AGI 
Subject: Re: [agi] Patterns of Cognition

On Friday, March 05, 2021, at 12:59 AM, Nanograte Knowledge Technologies wrote:
for every pool of water, exists a greater pool. and then, there's the notion of 
infinity. for all we know, we are all, each one of us, merely dipping a 
fingertip in the consciousness of a pool the size of our limited understanding.

That's right, you concur.  Everything is a model, all of human understanding is 
a model; it's models all the way down.

to emerge AGI, one must first become AGI.

You said that before and it's true!  There are some conceptual hurdles; once 
you get past them, things become easier to explain. Essentially you expend less 
energy understanding...


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M09374ce5e726ac0dce69a570
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-05 Thread Matt Mahoney
I didn't mean to come across as trolling but I guess I was. But I am
genuinely interested in what your goals are and what resources you think it
will take to achieve them. And you're right that it probably won't change
my views.

On Thu, Mar 4, 2021, 5:45 PM Ben Goertzel  wrote:

> I mean -- Better AGI-oriented trolling than QAnon mania right? ;'D
>
> On Thu, Mar 4, 2021 at 2:42 PM Ben Goertzel  wrote:
> >
> > And yes, Matt Mahoney is trolling me like he commonly does, I've
> > gotten used to it...  of course I respond to his trolling not for his
> > own delectation (I don't delude myself that anything I say is going to
> > influence his perspective significantly) but for others in the
> > audience who may be interested and with a more open-minded perspective
> > ;) ...
> >
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M2b73dc84ba03ceba2a1c117d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-05 Thread John Rose
On Friday, March 05, 2021, at 12:59 AM, Nanograte Knowledge Technologies wrote:
> for every pool of water, exists a greater pool. and then, there's the notion 
> of infinity. for all we know, we are all, each one of us, merely dipping a 
> fingertip in the consciousness of a pool the size of our limited 
> understanding.

That's right, you concur.  Everything is a model, all of human understanding is 
a model; it's models all the way down.

> to emerge AGI, one must first become AGI.

You said that before and it's true!  There are some conceptual hurdles; once 
you get past them, things become easier to explain. Essentially you expend less 
energy understanding...

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M6797887cbcfb94f6575680cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-05 Thread John Rose
On Thursday, March 04, 2021, at 5:40 PM, Ben Goertzel wrote:
> similarly metagraph folds/unfolds are only
approximately what you want for modeling abstractions needed for
practical transfer learning and generalization -- yet are still quite
useful...

Folding or some other type of graph metamorphism. Folding works for now, though, 
since there is existing research to utilize... 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M5833bc3bee1473ec500e9d83
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread Nanograte Knowledge Technologies
for every pool of water, exists a greater pool. and then, there's the notion of 
infinity. for all we know, we are all, each one of us, merely dipping a 
fingertip in the consciousness of a pool the size of our limited understanding. 
To the cosmos, we might be the NP-Hard problem.

to emerge AGI, one must first become AGI.


From: Ben Goertzel 
Sent: Friday, 05 March 2021 01:22
To: AGI 
Subject: Re: [agi] Patterns of Cognition

Matt's skill and knowledge and intelligence are clear, he could be a
major AGI contributor if he weren't cognitively parasitized by some
narrow-thinking-oriented mind-viruses 8-D

(and yeah now I'm trolling him... that's what the Internet's for
right? ... well that and serving as the substrate for the emerging
superintelligent global brain...)

On Thu, Mar 4, 2021 at 3:15 PM John Rose  wrote:
>
> On Thursday, March 04, 2021, at 5:42 PM, Ben Goertzel wrote:
>
> And yes, Matt Mahoney is trolling me like he commonly does, I've gotten used 
> to it... of course I respond to his trolling not for his own delectation (I 
> don't delude myself that anything I say is going to influence his perspective 
> significantly) but for others in the audience who may be interested and with 
> a more open-minded perspective ;) ...
>
>
> Ben, you have a deep well of patience apparently; I'm hoping it doesn't run 
> out!
>
> Matt's good for knowing about legacy K-complexity theories... like, why would 
> anyone ever think there is more than one K-complexity in the Universe?  
> And... what is the relationship between the mass of the universe and its 
> K-complexity?  He might know that.
>
> Q <=> K  ?  :)



--
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M5db7315c819ae9d61f122f9d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread Ben Goertzel
Matt's skill and knowledge and intelligence are clear, he could be a
major AGI contributor if he weren't cognitively parasitized by some
narrow-thinking-oriented mind-viruses 8-D

(and yeah now I'm trolling him... that's what the Internet's for
right? ... well that and serving as the substrate for the emerging
superintelligent global brain...)

On Thu, Mar 4, 2021 at 3:15 PM John Rose  wrote:
>
> On Thursday, March 04, 2021, at 5:42 PM, Ben Goertzel wrote:
>
> And yes, Matt Mahoney is trolling me like he commonly does, I've gotten used 
> to it... of course I respond to his trolling not for his own delectation (I 
> don't delude myself that anything I say is going to influence his perspective 
> significantly) but for others in the audience who may be interested and with 
> a more open-minded perspective ;) ...
>
>
> Ben, you have a deep well of patience apparently; I'm hoping it doesn't run 
> out!
>
> Matt's good for knowing about legacy K-complexity theories... like, why would 
> anyone ever think there is more than one K-complexity in the Universe?  
> And... what is the relationship between the mass of the universe and its 
> K-complexity?  He might know that.
>
> Q <=> K  ?  :)



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M7c5b0e676df0fe79125ef685
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread John Rose
On Thursday, March 04, 2021, at 5:42 PM, Ben Goertzel wrote:
> And yes, Matt Mahoney is trolling me like he commonly does, I've
gotten used to it...  of course I respond to his trolling not for his
own delectation (I don't delude myself that anything I say is going to
influence his perspective significantly) but for others in the
audience who may be interested and with a more open-minded perspective
;) ...

Ben, you have a deep well of patience apparently; I'm hoping it doesn't run out!

Matt's good for knowing about legacy K-complexity theories... like, why would 
anyone ever think there is more than one K-complexity in the Universe?  And... 
what is the relationship between the mass of the universe and its 
K-complexity?  He might know that.

Q <=> K  ?  :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mbe79cbfc19a8a4f4bbd521d0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread immortal . discoveries
Rebirth is where beauty starts; it sheds all those wrinkles like crazy! I 
wish you'd use a prediction evaluation to score your AGI though, Ben. I can show 
you how to add on my Lossless Compression eval for free. It should be easy: it 
just takes your predictions and does all the rest.
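
For anyone curious, a minimal sketch of the idea, assuming the model exposes a 
next-symbol probability distribution (the `predict` hook and everything else 
below are hypothetical, not the actual eval code): an arithmetic coder driven 
by a model's predictions gets within a few bits of the summed negative 
log-probabilities, so that sum is effectively the compressed size.

import math

def compressed_size_bits(data, predict):
    # `predict(prefix)` is a hypothetical hook returning a dict that maps
    # each possible next byte to its probability. An arithmetic coder
    # using these predictions would emit about this many bits in total.
    total_bits = 0.0
    for i in range(len(data)):
        p = predict(data[:i]).get(data[i], 1e-12)  # floor to avoid log(0)
        total_bits += -math.log2(p)
    return total_bits

# Sanity check: a uniform model over 256 byte values costs 8 bits/byte.
uniform = lambda prefix: {b: 1.0 / 256 for b in range(256)}
print(compressed_size_bits(b"hello", uniform))  # 40.0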
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M4bc19c6bfcecc522f7438594
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread Ben Goertzel
I mean -- Better AGI-oriented trolling than QAnon mania right? ;'D

On Thu, Mar 4, 2021 at 2:42 PM Ben Goertzel  wrote:
>
> And yes, Matt Mahoney is trolling me like he commonly does, I've
> gotten used to it...  of course I respond to his trolling not for his
> own delectation (I don't delude myself that anything I say is going to
> influence his perspective significantly) but for others in the
> audience who may be interested and with a more open-minded perspective
> ;) ...
>
> On Thu, Mar 4, 2021 at 2:40 PM Ben Goertzel  wrote:
> >
> > Thanks John.   I am working now on another paper that makes clearer
> > how the Patterns of Cognition material fits into an overall theory and
> > practice of general intelligence.
> >
> > The paragraph you highlight is a critical one, yeah.   Just as the
> > hierarchical structures in current deep NNs are only approximately
> > what you want for modeling patterns in physical reality -- yet are
> > still quite useful -- similarly metagraph folds/unfolds are only
> > approximately what you want for modeling abstractions needed for
> > practical transfer learning and generalization -- yet are still quite
> > useful...   General cognition is just going to be super-slow and
> > cumbersome by nature, so practical AGI systems need to involve
> > subsystems operating on varying levels of generality, appropriately
> > optimized in ways that exploit their limitations...
> >
> >
> > On Thu, Mar 4, 2021 at 2:15 PM John Rose  wrote:
> > >
> > > Anyone who has worked on large complex software engineering projects, 
> > > especially AGI which tops them all knows about redesigns/rewrites. It's a 
> > > good sign actually. And it's a form of RSI.
> > >
> > > The way to grok this paper, the way I approach it is to scan it back and 
> > > forth many times then imagine the graphs in your mind, visualize the 
> > > hypergraphs, metagraphs, operations, inference, learning, mining, 
> > > attention, then folding and then Galois Connections.  I don't fully 
> > > comprehend all aspects, but I know Ben has a unique perspective on 
> > > designing and building these things, so I at least make the effort.  
> > > Hijacking the message thread to ask why it ain't done yet is really 
> > > being a troll, and is annoying and disrespectful IMO.
> > >
> > > This here I find interesting, it's a decision based on much experience 
> > > apparently:
> > >
> > > "Because AGI systems necessarily involve dynamic updating of the 
> > > knowledge base
> > > on which cognitive algorithms are acting, in the course of the cognitive 
> > > algorithm’s
> > > activity, there is an unavoidable heuristic aspect to the application of 
> > > the theory given
> > > here to real AGI systems. The equivalence of a recursively defined DDS on 
> > > a metagraph
> > > to a folding and unfolding process across that metagraph only holds 
> > > rigorously if one
> > > assumes the metagraph is not changing during the folding - which will not 
> > > generally be
> > > the case. What needs to happen in practice, I suggest, is that the 
> > > folding and unfolding
> > > happen and they do change the metagraph, and one then has a complex 
> > > self-organizing /
> > > self-modifying system that is only moderately well approximated by the 
> > > idealized case
> > > directly addressed by the theory presented here."
> > >
> > > There's a tradeoff between fidelity and implementable practicality it 
> > > seems. Go for the low hanging fruit.
> > >
> > > John
> > >
> >
> >
> >
> > --
> > Ben Goertzel, PhD
> > http://goertzel.org
> >
> > “He not busy being born is busy dying" -- Bob Dylan
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> “He not busy being born is busy dying" -- Bob Dylan



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M2e83b1ccb228770e7c2296a2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread Ben Goertzel
And yes, Matt Mahoney is trolling me like he commonly does, I've
gotten used to it...  of course I respond to his trolling not for his
own delectation (I don't delude myself that anything I say is going to
influence his perspective significantly) but for others in the
audience who may be interested and with a more open-minded perspective
;) ...

On Thu, Mar 4, 2021 at 2:40 PM Ben Goertzel  wrote:
>
> Thanks John.   I am working now on another paper that makes clearer
> how the Patterns of Cognition material fits into an overall theory and
> practice of general intelligence.
>
> The paragraph you highlight is a critical one, yeah.   Just as the
> hierarchical structures in current deep NNs are only approximately
> what you want for modeling patterns in physical reality -- yet are
> still quite useful -- similarly metagraph folds/unfolds are only
> approximately what you want for modeling abstractions needed for
> practical transfer learning and generalization -- yet are still quite
> useful...   General cognition is just going to be super-slow and
> cumbersome by nature, so practical AGI systems need to involve
> subsystems operating on varying levels of generality, appropriately
> optimized in ways that exploit their limitations...
>
>
> On Thu, Mar 4, 2021 at 2:15 PM John Rose  wrote:
> >
> > Anyone who has worked on large complex software engineering projects, 
> > especially AGI which tops them all knows about redesigns/rewrites. It's a 
> > good sign actually. And it's a form of RSI.
> >
> > The way to grok this paper, the way I approach it is to scan it back and 
> > forth many times then imagine the graphs in your mind, visualize the 
> > hypergraphs, metagraphs, operations, inference, learning, mining, 
> > attention, then folding and then Galois Connections.  I don't fully 
> > comprehend all aspects, but I know Ben has a unique perspective on 
> > designing and building these things, so I at least make the effort.  
> > Hijacking the message thread to ask why it ain't done yet is really 
> > being a troll, and is annoying and disrespectful IMO.
> >
> > This here I find interesting, it's a decision based on much experience 
> > apparently:
> >
> > "Because AGI systems necessarily involve dynamic updating of the knowledge 
> > base
> > on which cognitive algorithms are acting, in the course of the cognitive 
> > algorithm’s
> > activity, there is an unavoidable heuristic aspect to the application of 
> > the theory given
> > here to real AGI systems. The equivalence of a recursively defined DDS on a 
> > metagraph
> > to a folding and unfolding process across that metagraph only holds 
> > rigorously if one
> > assumes the metagraph is not changing during the folding - which will not 
> > generally be
> > the case. What needs to happen in practice, I suggest, is that the folding 
> > and unfolding
> > happen and they do change the metagraph, and one then has a complex 
> > self-organizing /
> > self-modifying system that is only moderately well approximated by the 
> > idealized case
> > directly addressed by the theory presented here."
> >
> > There's a tradeoff between fidelity and implementable practicality it 
> > seems. Go for the low hanging fruit.
> >
> > John
> >
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> “He not busy being born is busy dying" -- Bob Dylan



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mdda97a2f90c2f0966af1bd43
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread Ben Goertzel
Thanks John.   I am working now on another paper that makes clearer
how the Patterns of Cognition material fits into an overall theory and
practice of general intelligence.

The paragraph you highlight is a critical one, yeah.   Just as the
hierarchical structures in current deep NNs are only approximately
what you want for modeling patterns in physical reality -- yet are
still quite useful -- similarly metagraph folds/unfolds are only
approximately what you want for modeling abstractions needed for
practical transfer learning and generalization -- yet are still quite
useful...   General cognition is just going to be super-slow and
cumbersome by nature, so practical AGI systems need to involve
subsystems operating on varying levels of generality, appropriately
optimized in ways that exploit their limitations...


On Thu, Mar 4, 2021 at 2:15 PM John Rose  wrote:
>
> Anyone who has worked on large complex software engineering projects, 
> especially AGI which tops them all knows about redesigns/rewrites. It's a 
> good sign actually. And it's a form of RSI.
>
> The way to grok this paper, the way I approach it is to scan it back and 
> forth many times then imagine the graphs in your mind, visualize the 
> hypergraphs, metagraphs, operations, inference, learning, mining, attention, 
> then folding and then Galois Connections.  I don't fully comprehend all 
> aspects, but I know Ben has a unique perspective on designing and building 
> these things, so I at least make the effort.  Hijacking the message thread to 
> ask why it ain't done yet is really being a troll, and is annoying and 
> disrespectful IMO.
>
> This here I find interesting, it's a decision based on much experience 
> apparently:
>
> "Because AGI systems necessarily involve dynamic updating of the knowledge 
> base
> on which cognitive algorithms are acting, in the course of the cognitive 
> algorithm’s
> activity, there is an unavoidable heuristic aspect to the application of the 
> theory given
> here to real AGI systems. The equivalence of a recursively defined DDS on a 
> metagraph
> to a folding and unfolding process across that metagraph only holds 
> rigorously if one
> assumes the metagraph is not changing during the folding - which will not 
> generally be
> the case. What needs to happen in practice, I suggest, is that the folding 
> and unfolding
> happen and they do change the metagraph, and one then has a complex 
> self-organizing /
> self-modifying system that is only moderately well approximated by the 
> idealized case
> directly addressed by the theory presented here."
>
> There's a tradeoff between fidelity and implementable practicality it seems. 
> Go for the low hanging fruit.
>
> John
>



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M70e7b27d3b67ffac51956170
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-04 Thread John Rose
Anyone who has worked on large complex software engineering projects, 
especially AGI which tops them all knows about redesigns/rewrites. It's a good 
sign actually. And it's a form of RSI.

The way to grok this paper, the way I approach it is to scan it back and forth 
many times then imagine the graphs in your mind, visualize the hypergraphs, 
metagraphs, operations, inference, learning, mining, attention, then folding 
and then Galois Connections.  I don't fully comprehend all aspects, but I know 
Ben has a unique perspective on designing and building these things, so I at 
least make the effort.  Hijacking the message thread to ask why it ain't done 
yet is really being a troll, and is annoying and disrespectful IMO.

This here I find interesting, it's a decision based on much experience 
apparently:

"Because AGI systems necessarily involve dynamic updating of the knowledge base
on which cognitive algorithms are acting, in the course of the cognitive 
algorithm’s
activity, there is an unavoidable heuristic aspect to the application of the 
theory given
here to real AGI systems. The equivalence of a recursively defined DDS on a 
metagraph
to a folding and unfolding process across that metagraph only holds rigorously 
if one
assumes the metagraph is not changing during the folding - which will not 
generally be
the case. What needs to happen in practice, I suggest, is that the folding and 
unfolding
happen and they do change the metagraph, and one then has a complex 
self-organizing /
self-modifying system that is only moderately well approximated by the 
idealized case
directly addressed by the theory presented here."
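
To make the idealized case concrete: here is a minimal sketch, in plain Python 
rather than anything OpenCog-specific (the toy graph, labels, and combine 
function are all hypothetical), of a structural fold over a directed graph. Its 
correctness silently assumes the graph does not mutate while the fold runs, 
which is exactly the assumption the quoted paragraph says real systems violate.

def fold(graph, values, node, combine):
    # Structurally recurse: fold each child, then combine this node's own
    # label with the folded child results. Only well-defined if `graph`
    # stays fixed for the duration of the recursion.
    child_results = [fold(graph, values, child, combine) for child in graph[node]]
    return combine(values[node], child_results)

# Toy DAG and labels:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
values = {"A": 1, "B": 2, "C": 3, "D": 4}

# Sum everything reachable from "A"; this naive fold reaches "D" along both
# paths and counts it twice: 1 + (2+4) + (3+4) = 14.
print(fold(graph, values, "A", lambda v, rs: v + sum(rs)))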

There's a tradeoff between fidelity and implementable practicality it seems. Go 
for the low hanging fruit.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Me776fbd495ce9a125073689c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-03-01 Thread Ben Goertzel
Not really a focus but might emerge as a side-effect...

On Sun, Feb 28, 2021 at 1:09 PM  wrote:
>
> Any evaluation on Text Prediction Ben? (Perplexity or Lossless Compression)



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M1bb644e3f342d399dfd3abde
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-28 Thread immortal . discoveries
Any evaluation on Text Prediction Ben? (Perplexity or Lossless Compression)
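(The two options are essentially one score: if the model assigns probability 
p(x_i) to each of the N symbols it predicts, an ideal lossless compressor emits 
\sum_{i=1}^{N} -\log_2 p(x_i) bits in total, and perplexity is 2^H where H is 
that total divided by N, i.e. the mean bits per symbol.)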
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M90b98a13e6dda5d7aa840278
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-28 Thread Ben Goertzel
For development experimentation we are working w/ collective
intelligence scenarios in Minecraft

Regarding practical real-world applications, we are currently using
the existing "legacy" OpenCog as a key ingredient in the AI
architecture for the Grace humanoid robot aimed at eldercare
applications (See awakening.health).   (Along with some neural
language and vision models...).  So once Hyperon is ready we will swap
out the old OpenCog for the new in Grace's back end.   Via the Sophia
Collective / SophiaDAO initiative we will also work w/ Hanson Robotics
to put this same architecture in Sophia and their other human-scale
robots, once it's first rolled out in Grace...


On Sat, Feb 27, 2021 at 2:17 PM Matt Mahoney  wrote:
>
> Looking over the Hyperon documents ( https://wiki.opencog.org/w/Hyperon ), 
> I'm glad to see an emphasis on distributed Atomspace.
>
> But I'm interested in what the project goals are. For example,
>
> - A self driving truck.
> - A robotic house cleaner.
> - A home security service that can see who is home, what you are doing, and 
> when to call someone.
> - CEO and workforce of a manufacturing plant.
> - A software developer.
> - A music service that writes new songs and predicts what you will like.
> - A lossy video compressor that outputs an English language script, and 
> corresponding decompressor to regenerate the video.
>
> Or other applications? And what do you estimate they will cost in hardware, 
> software, training set size, years, and dollars?
>
>
> On Sat, Feb 27, 2021, 3:44 PM Ben Goertzel  wrote:
>>
>> **
>>  OpenCog never did have a knowledge base or any useful applications or
>> experimental results since the 2011 timeline forecast human level AGI
>> in 8-10 years.
>> **
>>
>> My colleagues and I have used OpenCog w./in a bunch of useful
>> applications but I don't have time to go through the list here
>>
>> It's true there have been no super big commercial successes or widely
>> rolled out products
>>
>> >
>> > Without an accounting of past failures, I don't have any great faith that 
>> > the new design will work any better than the old one.
>>
>> Everyone working on AGI has failed so far... everything fails until it
>> succeeds...
>>
>> >  Ben seems to come up with a new design every few months.
>>
>> Honestly this is the first re-architecture / major rethinking of
>> OpenCog since 2008
>>
>> I had Webmind in 1997, Novamente in 2001, OpenCog in 2008 , and now
>> Hyperon in 2020/2021 ... those are the major revisions of my approach
>> to AGI so far  Of course the process of R&D naturally involves a
>> stream of new ideas w/in one's current main approach...
>>
>> > But I have to admire his persistence on a problem that is only being 
>> > solved with decades of global effort.
>> 
>> The global effort of course is inclusive of efforts like those by
>> OpenCog, Deep Mind etc. that contribute potentially key components to
>> the overall emerging global general intelligence..
>



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M2691c6e75f746afdfaf7f753
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Matt Mahoney
Looking over the Hyperon documents ( https://wiki.opencog.org/w/Hyperon ),
I'm glad to see an emphasis on distributed Atomspace.

But I'm interested in what the project goals are. For example,

- A self driving truck.
- A robotic house cleaner.
- A home security service that can see who is home, what you are doing, and
when to call someone.
- CEO and workforce of a manufacturing plant.
- A software developer.
- A music service that writes new songs and predicts what you will like.
- A lossy video compressor that outputs an English language script, and
corresponding decompressor to regenerate the video.

Or other applications? And what do you estimate they will cost in hardware,
software, training set size, years, and dollars?


On Sat, Feb 27, 2021, 3:44 PM Ben Goertzel  wrote:

> **
>  OpenCog never did have a knowledge base or any useful applications or
> experimental results since the 2011 timeline forecast human level AGI
> in 8-10 years.
> **
>
> My colleagues and I have used OpenCog w./in a bunch of useful
> applications but I don't have time to go through the list here
>
> It's true there have been no super big commercial successes or widely
> rolled out products
>
> >
> > Without an accounting of past failures, I don't have any great faith
> that the new design will work any better than the old one.
>
> Everyone working on AGI has failed so far... everything fails until it
> succeeds...
>
> >  Ben seems to come up with a new design every few months.
>
> Honestly this is the first re-architecture / major rethinking of
> OpenCog since 2008
>
> I had Webmind in 1997, Novamente in 2001, OpenCog in 2008 , and now
> Hyperon in 2020/2021 ... those are the major revisions of my approach
> to AGI so far  Of course the process of R&D naturally involves a
> stream of new ideas w/in one's current main approach...
>
> > But I have to admire his persistence on a problem that is only being
> solved with decades of global effort.
> 
> The global effort of course is inclusive of efforts like those by
> OpenCog, Deep Mind etc. that contribute potentially key components to
> the overall emerging global general intelligence..

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M030b6bf655b15b19d278929f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Mike Archbold
Relating to dates and AGI working: Probably most of us have seen
Casablanca. One of my favorite lines is when a girl asked Bogart "will I
see you tonight?"

Bogart: "I don't plan that far ahead."

On Sat, Feb 27, 2021, 12:44 PM Ben Goertzel  wrote:

> **
>  OpenCog never did have a knowledge base or any useful applications or
> experimental results since the 2011 timeline forecast human level AGI
> in 8-10 years.
> **
>
> My colleagues and I have used OpenCog w./in a bunch of useful
> applications but I don't have time to go through the list here
>
> It's true there have been no super big commercial successes or widely
> rolled out products
>
> >
> > Without an accounting of past failures, I don't have any great faith
> that the new design will work any better than the old one.
>
> Everyone working on AGI has failed so far... everything fails until it
> succeeds...
>
> >  Ben seems to come up with a new design every few months.
>
> Honestly this is the first re-architecture / major rethinking of
> OpenCog since 2008
>
> I had Webmind in 1997, Novamente in 2001, OpenCog in 2008 , and now
> Hyperon in 2020/2021 ... those are the major revisions of my approach
> to AGI so far  Of course the process of R&D naturally involves a
> stream of new ideas w/in one's current main approach...
>
> > But I have to admire his persistence on a problem that is only being
> solved with decades of global effort.
> 
> The global effort of course is inclusive of efforts like those by
> OpenCog, Deep Mind etc. that contribute potentially key components to
> the overall emerging global general intelligence..

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M8948c917bf464a59464bece0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Ben Goertzel
**
 OpenCog never did have a knowledge base or any useful applications or
experimental results since the 2011 timeline forecast human level AGI
in 8-10 years.
**

My colleagues and I have used OpenCog w./in a bunch of useful
applications but I don't have time to go through the list here

It's true there have been no super big commercial successes or widely
rolled out products

>
> Without an accounting of past failures, I don't have any great faith that the 
> new design will work any better than the old one.

Everyone working on AGI has failed so far... everything fails until it
succeeds...

>  Ben seems to come up with a new design every few months.

Honestly this is the first re-architecture / major rethinking of
OpenCog since 2008

I had Webmind in 1997, Novamente in 2001, OpenCog in 2008 , and now
Hyperon in 2020/2021 ... those are the major revisions of my approach
to AGI so far  Of course the process of R&D naturally involves a
stream of new ideas w/in one's current main approach...

> But I have to admire his persistence on a problem that is only being solved 
> with decades of global effort.

The global effort of course is inclusive of efforts like those by
OpenCog, Deep Mind etc. that contribute potentially key components to
the overall emerging global general intelligence..

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M4f0ce90b2551825abf92a309
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Jim Bromer
I don't have any confidence that I will be able to understand Ben's paper based 
on the language that he used in the abstract. If the term metagraph refers to 
an abstract principle that can act on methods (or whatever) during runtime, and 
not just as a way of producing variations of functions (or producing graphics, 
which is another meaning), then it could be more adaptive. But it would also 
tend to be more unfocused.  Traditional programming used data types as 
abstractions, but if there is an effective way to use variations of the 
abstract functions for the AI program to learn to learn to adapt to new 
situations, then its range of learning could be extended. But it would 
also be less focused.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mf08548410e3b32ad37c12789
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Matt Mahoney
I see Ben just had a baby so he may be slow to respond. Having now read the
rest of the paper beyond the abstract, it seems that the terms are
explained, although it still doesn't address my earlier questions. It is an
attempt to unify a number of different learning algorithms for the design
of OpenCog v2, which is yet to be written. There are no experimental
results, and I am not expecting any for a while. OpenCog never did have a
knowledge base or any useful applications or experimental results since the
2011 timeline forecast human level AGI in 8-10 years.

Without an accounting of past failures, I don't have any great faith that
the new design will work any better than the old one. Ben seems to come up
with a new design every few months. I'm not saying this to be critical
because AGI is an enormously difficult problem. Just the obvious
application of automating human labor would save $90 trillion per year (a
price tag that paradoxically rises as we solve it). But I have to admire
his persistence on a problem that is only being solved with decades of
global effort.

On Sat, Feb 27, 2021, 12:35 PM Jim Bromer  wrote:

> I thought that your abstract contained terms that should have been
> explained.  Is your use of the term 'directed metagraph' referring to
> something similar to a directed graph but which is more abstract than or is
> abstracted from the more concrete graphs that would be used by the system
> to reason? I had to do some searching to find out what a Galois connection
> referred to but you do have the ability to guess that few in this group
> would know what it meant and it would have been extremely easy to add a
> brief explanation of what you were referring to. It also would be so easy
> to explain what "metagraph crhonomorphisms" is supposed to mean in terms
> that most of us could understand. We have to use specialized language and
> we sometimes make terms up because we do not alwaze talk real goodly when
> in extemporazation-put but that is all the more reason to try to make it
> accessible. The more people who can understand the basic concepts the more
> people who will want to look at the paper.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M05f495611f392776fc995c3a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Ben Goertzel
Hmm, if terms are explained in an abstract then it becomes too long; the 
nature of an abstract is to be compact, right?

The nature of reading academic papers (for all of us) is one has to
chase references.  That paper refers to my earlier paper
(https://arxiv.org/abs/2012.01759) which elaborates various metagraph
morphisms.

A textbook-like exposition of these ideas would take a lot more work
to put together, but I might do it eventually... it's always a
challenge to balance time spent writing stuff up w time spent building
stuff...


On Sat, Feb 27, 2021 at 9:34 AM Jim Bromer  wrote:
>
> I thought that your abstract contained terms that should have been explained. 
>  Is your use of the term 'directed metagraph' referring to something similar 
> to a directed graph but which is more abstract than or is abstracted from the 
> more concrete graphs that would be used by the system to reason? I had to do 
> some searching to find out what a Galois connection referred to but you do 
> have the ability to guess that few in this group would know what it meant and 
> it would have been extremely easy to add a brief explanation of what you were 
> referring to. It also would be so easy to explain what "metagraph 
> crhonomorphisms" is supposed to mean in terms that most of us could 
> understand. We have to use specialized language and we sometimes make terms 
> up because we do not alwaze talk real goodly when in extemporazation-put but 
> that is all the more reason to try to make it accessible. The more people who 
> can understand the basic concepts the more people who will want to look at 
> the paper.



-- 
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Mf36ef4b952160460c45fce99
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Jim Bromer
I thought that your abstract contained terms that should have been explained.  
Is your use of the term 'directed metagraph' referring to something similar to 
a directed graph but which is more abstract than or is abstracted from the more 
concrete graphs that would be used by the system to reason? I had to do some 
searching to find out what a Galois connection referred to but you do have the 
ability to guess that few in this group would know what it meant and it would 
have been extremely easy to add a brief explanation of what you were referring 
to. It also would be so easy to explain what "metagraph crhonomorphisms" is 
supposed to mean in terms that most of us could understand. We have to use 
specialized language and we sometimes make terms up because we do not alwaze 
talk real goodly when in extemporazation-put but that is all the more reason to 
try to make it accessible. The more people who can understand the basic 
concepts the more people who will want to look at the paper.
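
For anyone else who has to search: a Galois connection between two partially 
ordered sets A and B is a pair of order-preserving maps F : A -> B and 
G : B -> A such that

    F(a) \le b \iff a \le G(b)

for all a in A and b in B - the adjunction-like structure the paper invokes 
alongside its metagraph folds and unfolds.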
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M66deb43b711e5afa4a5f0154
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-27 Thread Jim Bromer
I thought your use of th
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Md84b30ce5ec4dbbf688a66c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-25 Thread Ben Goertzel
Hi,

> Ben, do you feel that these changes to OpenCog will address the current 
> obstacles to AGI? What do you believe were the reasons why it did not meet 
> the goals of the 2011 timeline ( 
> https://www.nextbigfuture.com/2011/03/opencog-artificial-general-intelligence.html
>  ) which forecast "full-on AGI" in 2019-21 and recursive self improvement in 
> 2021-23.

Making an OpenCog-based AGI is a large-scale software engineering
project as well as a collection of coupled research projects.

We never had the resources behind the project needed to pull it off
from a pure engineering perspective, even if all the theory is
correct.  Not saying this is/was the only weakness of the OpenCog
design/project of that era, just noting that, any other potential
shortcomings aside, lack of resources would have been a sufficient
reason for not meeting those milestones...  Those milestones were
never proposed as being achievable independently of availability of
resources to fund development.

Deep Mind has eaten $2B+ of Google's budget, GPT3 cost substantially
more just in processor time than the entire amount of $$ spent on
OpenCog during its history, etc.

There are weaknesses in the legacy OpenCog software architecture that
would have stopped us from getting to human-level AGI using it,
without recourse to a lot of awkward coding/design gymnastics ... but
with more ample resources we would have been able to push to refactor
/ rebuild and remedy those weaknesses quite some time ago...

>Obviously "rebuilding a lot of OpenCog from scratch" doesn't bode well.

I am perplexed as to why you think this is obvious.  To me it bodes quite
well.   We are aiming to do something large and complex and
unprecedented here, it is hardly surprising or bad that midway through
the quest we would want to take what we've learned along the journey
so far and use it to radically improve the system.

As a Mac user, I thought the transition from OS9 to OSX was a good
one.   A lot was rebuilt from scratch there, based on everything that
had been learned before, based on the affordances allowed by modern
hardware in the OSX era, etc. etc.

> If I recall in 2011, OpenCog consisted of an evolutionary learner (MOSES), a 
> neural vision model (DeSTIN), a rule based language model (RelEX, NatGen), 
> and Atomspace, which was supposed to integrate it all together but never did 
> except for some of the language part. Distributed Atomspace also ran into 
> severe scaling problems.

You have left out probably the most critical AGI component, the PLN
(Probabilistic Logic Networks) reasoner...

As for use of the Atomspace for integrating different AI modalities,
for the last few years it's been way more advanced in the biomedical
inference/learning domain than in NLP ...

> I assume the design changes address these problems, but what about other 
> obstacles? MOSES and DeSTIN never advanced beyond toy problems because of 
> computational limits, but perhaps they could be distributed. After all, real 
> human vision is around 10^15 synapse operations per second [1], and real 
> evolution is 10^29 DNA copy OPS [2]. Do the design changes help with scaling 
> to parallel computing?

Yeah there are two main aspects to the redesign

-- new Atomese2 programming language, which is what the paper I just
posted is working towards

-- new Atomspace implementation which better leverages concurrent and
distributed processing, and better interfaces real-time with NN
learning frameworks (see e.g. Alexey Potapov's earlier papers on
Cognitive Module Networks)

A rough high-level overview is in Section 6 of
https://arxiv.org/abs/2004.05267 ; see also the many documents at

https://wiki.opencog.org/w/Hyperon
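
To give a flavor of the concurrency aspect, here is a toy Python sketch
of a minimal thread-safe Atomspace-like store that several cognitive
processes can read/write in parallel. Everything here (class, names,
locking scheme) is an illustrative assumption only, not Hyperon's actual
design:

import threading

class ToyAtomspace:
    """Toy shared hypergraph store: name -> truth-value strength."""
    def __init__(self):
        self._atoms = {}
        self._lock = threading.RLock()  # guards concurrent access

    def set_atom(self, name, strength):
        with self._lock:
            self._atoms[name] = strength

    def get_atom(self, name, default=0.0):
        with self._lock:
            return self._atoms.get(name, default)

space = ToyAtomspace()

def worker(i):
    # Each "cognitive process" writes one atom and reads a neighbor's.
    space.set_atom(f"Concept-{i}", i / 10.0)
    space.get_atom(f"Concept-{(i + 1) % 4}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(space._atoms))  # 4 atoms, written concurrently without races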

> I never did understand why OpenCog went with rule-based language modeling 
> after its long history of failure. Problems like ambiguity, brittleness, and 
> most importantly, the lack of a learning algorithm, have only been solved in 
> practice with enormous neural/statistical models.

The SingularityNET team is doing a lot with transformer NNs in practical
applications, and the weaknesses of the tech are also very well known,
see e.g.

https://multiverseaccordingtoben.blogspot.com/2020/07/gpt3-super-cool-but-not-path-to-agi.html

https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/

... but you already know that stuff I suppose...

The idea that logic-based systems intrinsically lack a learning
algorithm is false, and the dichotomy between reasoning and learning is
also false.
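
To make that concrete, here is a minimal Python sketch of a logic rule
whose truth value is learned from evidence, using simplified PLN-style
(strength, confidence) pairs. The constant k and the toy data are
illustrative assumptions, not OpenCog's actual code:

class TruthValue:
    """Simplified PLN-style truth value learned from observations."""
    def __init__(self, k=10.0):
        self.pos = 0.0   # positive evidence count
        self.n = 0.0     # total evidence count
        self.k = k       # lookahead constant controlling confidence growth

    def observe(self, outcome: bool):
        """Update the truth value from one observation."""
        self.n += 1.0
        if outcome:
            self.pos += 1.0

    @property
    def strength(self):
        return self.pos / self.n if self.n else 0.5

    @property
    def confidence(self):
        return self.n / (self.n + self.k)

# "Ravens are black" learned from data rather than hand-coded:
rule = TruthValue()
for raven_is_black in [True, True, True, False, True, True]:
    rule.observe(raven_is_black)

print(f"strength={rule.strength:.2f} confidence={rule.confidence:.2f}")
# -> strength=0.83 confidence=0.38 : the rule's truth value was learned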

Some prototype work using symbolic and neural methods together for NLP
is described here

https://arxiv.org/abs/2005.12533

but that direction is paused for the moment, to be re-initiated once
Hyperon is ready.

> So I'm wondering if you have a new timeline, or have you adjusted your goals 
> and how you plan to achieve them?

Goals are about the same as always.

Giving a specific timeline seems not terribly worthwhile, mostly
because 

Re: [agi] Patterns of Cognition

2021-02-25 Thread Matt Mahoney
Ben, do you feel that these changes to OpenCog will address the current
obstacles to AGI? What do you believe were the reasons why it did not meet
the goals of the 2011 timeline (
https://www.nextbigfuture.com/2011/03/opencog-artificial-general-intelligence.html
) which forecast "full-on AGI" in 2019-21 and recursive self-improvement in
2021-23? Obviously "rebuilding a lot of OpenCog from scratch" doesn't bode
well.

If I recall, in 2011 OpenCog consisted of an evolutionary learner (MOSES),
a neural vision model (DeSTIN), a rule based language model (RelEX,
NatGen), and Atomspace, which was supposed to integrate it all together but
never did except for some of the language part. Distributed Atomspace also
ran into severe scaling problems.

I assume the design changes address these problems, but what about other
obstacles? MOSES and DeSTIN never advanced beyond toy problems because of
computational limits, but perhaps they could be distributed. After all,
real human vision is around 10^15 synapse operations per second [1], and
real evolution is 10^29 DNA copy OPS [2]. Do the design changes help with
scaling to parallel computing?

I never did understand why OpenCog went with rule-based language modeling
after its long history of failure. Problems like ambiguity, brittleness,
and most importantly, the lack of a learning algorithm, have only been
solved in practice with enormous neural/statistical models. NNCP, the new
leader on the large text benchmark (
http://mattmahoney.net/dc/text.html#1123 ), takes 6 days to compress 1 GB of
text on a GPU with 10,496 CUDA cores. It runs a Transformer algorithm, a
neural network with an attention mechanism.
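
For reference, the attention mechanism computes softmax(Q K^T / sqrt(d)) V.
A generic NumPy sketch of the standard formula (not NNCP's actual
implementation):

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard Transformer attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query/key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                              # weighted mix of values

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)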

Actual working NLP systems like Google, Alexa, and Siri seem to require the
backing of companies with trillion-dollar market caps. (Alphabet $1.36T,
Amazon $1.55T, Apple $2.04T). GPT-3 is still experimental, and isn't cheap
either.

So I'm wondering if you have a new timeline, or have you adjusted your
goals and how you plan to achieve them?

1. The brain has 6 x 10^14 synapses. The visual cortex is 40% of the brain.
I assume an operation takes 10 to 100 ms.

2. There are 5 x 10^36 DNA bases in the biosphere at 2 bits each. I assume
the replication rate is the same as the atmospheric carbon cycle, 5 years.
If you include RNA and amino acid operations, the rate is 10^31 OPS.
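
A quick Python sketch reproducing the arithmetic in [1] and [2] (all
constants are the assumptions stated above):

# [1] Vision: 6e14 synapses, 40% in visual cortex, ~100 ms per operation.
synapses = 6e14
visual_fraction = 0.40
op_time_s = 0.1                      # 100 ms per synapse operation
vision_ops = synapses * visual_fraction / op_time_s
print(f"vision: {vision_ops:.1e} synapse OPS")    # ~2.4e15, i.e. ~10^15

# [2] Evolution: 5e36 DNA bases replicated every ~5 years.
bases = 5e36
cycle_s = 5 * 365.25 * 24 * 3600     # 5-year carbon-cycle turnover
dna_ops = bases / cycle_s
print(f"evolution: {dna_ops:.1e} DNA copy OPS")   # ~3.2e28, roughly the 10^29 above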


On Wed, Feb 24, 2021, 2:24 PM Ben Goertzel  wrote:

> Well we are rebuilding a lot of OpenCog from scratch in the Hyperon
> initiative...
>
> One of the design goals is to embed as many of the needed
> cognitive-algorithm-related abstractions as possible in the Atomese 2
> language, so that the cognitive algos themselves become brief simple
> Atomese scripts
>
> The theory in this paper is mostly oriented toward figuring out what
> abstractions are most critical to embed in the Atomese2 interpreter in
> ways that are both easy-to-use for the developer and highly efficient
> (in concurrent and distributed processing scenarios)
>
> Current OpenCog architecture has all the cognitive algos using
> Atomspace, and many using the Pattern Matcher and URE Unified Rule
> Engine, but other than that the algos are using separate code yeah.
> Hyperon architecture aims to factor out more of the commonalities between
> the different cognitive algos, and it seems that baking probabilistic
> dependent types and metagraph folds/unfolds into the Atomese2 language
> can be a big step in this direction...
>
> ben
>
> On Wed, Feb 24, 2021 at 10:08 AM Mike Archbold 
> wrote:
> >
> > In OpenCog the code is kind of compartmentalized -- disparate
> > algorithms in isolation called as necessary. That has been my
> > impression at least. But I think in this proposed architecture an
> > integration is attempted, which makes sense.
> >
> > On 2/24/21, Ben Goertzel  wrote:
> > > "Patterns of Cognition: Cognitive Algorithms as Galois Connections
> > > Fulfilled by Chronomorphisms On Probabilistically Typed Metagraphs"
> > >
> > > https://arxiv.org/abs/2102.10581
> > >
> > > New draft paper that puts various OpenCog cognitive algorithms in a
> > > common mathematical framework, and connects them with implementation
> > > strategies involving chronomorphisms on metagraphs...
> > >
> > > 
> > > It is argued that a broad class of AGI-relevant algorithms can be
> > > expressed in a common formal framework, via specifying Galois
> > > connections linking search and optimization processes on directed
> > > metagraphs whose edge targets are labeled with probabilistic dependent
> > > types, and then showing these connections are fulfilled by processes
> > > involving metagraph chronomorphisms. Examples are drawn from the core
> > > cognitive algorithms used in the OpenCog AGI framework: Probabilistic
> > > logical inference, evolutionary program learning, pattern mining,
> > > agglomerative clustering, and nonlinear-dynamical
> > > attention allocation.
> > >
> > > The analysis presented involves representing these cognitive
> 

Re: [agi] Patterns of Cognition

2021-02-24 Thread Ben Goertzel
Well we are rebuilding a lot of OpenCog from scratch in the Hyperon
initiative...

One of the design goals is to embed as many of the needed
cognitive-algorithm-related abstractions as possible in the Atomese 2
language, so that the cognitive algos themselves become brief simple
Atomese scripts

The theory in this paper is mostly oriented toward figuring out what
abstractions are most critical to embed in the Atomese2 interpreter in
ways that are both easy-to-use for the developer and highly efficient
(in concurrent and distributed processing scenarios)

Current OpenCog architecture has all the cognitive algos using
Atomspace, and many using the Pattern Matcher and URE Unified Rule
Engine, but other than that the algos are using separate code yeah.
Hyperon architecture aims to factor out more of the commonalities between
the different cognitive algos, and it seems that baking probabilistic
dependent types and metagraph folds/unfolds into the Atomese2 language
can be a big step in this direction...
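
As a rough illustration of what a metagraph fold means here: a toy
Python catamorphism over a simple typed hypergraph, collapsing it
bottom-up with one combining function per link. The data structure and
names are hypothetical stand-ins, not Atomese2 syntax:

from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Node:
    name: str
    value: float              # e.g. a truth-value strength

@dataclass
class Link:
    type: str                 # e.g. "AndLink", "OrLink"
    targets: list             # sub-metagraph: nodes and/or other links

Atom = Union[Node, Link]

def fold(atom: Atom, combine: Callable[[str, list], float]) -> float:
    """Recursively reduce a metagraph to a single value (a catamorphism)."""
    if isinstance(atom, Node):
        return atom.value
    return combine(atom.type, [fold(t, combine) for t in atom.targets])

# One possible semantics: fuzzy AND = min, fuzzy OR = max.
def fuzzy(link_type, child_values):
    return min(child_values) if link_type == "AndLink" else max(child_values)

g = Link("OrLink", [
    Link("AndLink", [Node("A", 0.9), Node("B", 0.4)]),
    Node("C", 0.3),
])
print(fold(g, fuzzy))   # 0.4 -- max(min(0.9, 0.4), 0.3)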

ben

On Wed, Feb 24, 2021 at 10:08 AM Mike Archbold  wrote:
>
> In OpenCog the code is kind of compartmentalized -- disparate
> algorithms in isolation called as necessary. That has been my
> impression at least. But I think in this proposed architecture an
> integration is attempted, which makes sense.
>
> On 2/24/21, Ben Goertzel  wrote:
> > "Patterns of Cognition: Cognitive Algorithms as Galois Connections
> > Fulfilled by Chronomorphisms On Probabilistically Typed Metagraphs"
> >
> > https://arxiv.org/abs/2102.10581
> >
> > New draft paper that puts various OpenCog cognitive algorithms in a
> > common mathematical framework, and connects them with implementation
> > strategies involving chronomorphisms on metagraphs...
> >
> > 
> > It is argued that a broad class of AGI-relevant algorithms can be
> > expressed in a common formal framework, via specifying Galois
> > connections linking search and optimization processes on directed
> > metagraphs whose edge targets are labeled with probabilistic dependent
> > types, and then showing these connections are fulfilled by processes
> > involving metagraph chronomorphisms. Examples are drawn from the core
> > cognitive algorithms used in the OpenCog AGI framework: Probabilistic
> > logical inference, evolutionary program learning, pattern mining,
> > agglomerative clustering, and nonlinear-dynamical
> > attention allocation.
> >
> > The analysis presented involves representing these cognitive
> > algorithms as recursive discrete decision processes involving
> > optimizing functions defined over metagraphs, in which the key
> > decisions involve sampling from probability distributions over
> > metagraphs and enacting sets of combinatory operations on selected
> > sub-metagraphs. The mutual associativity of the combinatory operations
> > involved in a cognitive process is shown to often play a key role in
> > enabling the decomposition of the process into folding and unfolding
> > operations; a conclusion that has some practical implications for the
> > particulars of cognitive processes, e.g. militating toward use of
> > reversible logic and reversible program execution. It is also observed
> > that where this mutual associativity holds, there is an alignment
> > between the hierarchy of subgoals used in recursive decision process
> > execution and a hierarchy of subpatterns definable in terms of formal
> > pattern theory.
> > 
> >
> > --
> > Ben Goertzel, PhD
> > http://goertzel.org
> >
> > > "He not busy being born is busy dying" -- Bob Dylan



-- 
Ben Goertzel, PhD
http://goertzel.org

"He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-M7831411f11981cfc695cf441
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-24 Thread Nanograte Knowledge Technologies
Just a wild notion from me as a novice. By relying on probability for decision 
making, aren't you essentially confusing relationship with association? Point 
being: is association probabilistic, and if so, based on what criteria? If 
'function of', therefore relationship?


From: Ben Goertzel 
Sent: Wednesday, 24 February 2021 11:25
To: AGI 
Subject: [agi] Patterns of Cognition

"Patterns of Cognition: Cognitive Algorithms as Galois Connections
Fulfilled by Chronomorphisms On Probabilistically Typed Metagraphs"

https://arxiv.org/abs/2102.10581

New draft paper that puts various OpenCog cognitive algorithms in a
common mathematical framework, and connects them with implementation
strategies involving chronomorphisms on metagraphs...



It is argued that a broad class of AGI-relevant algorithms can be
expressed in a common formal framework, via specifying Galois
connections linking search and optimization processes on directed
metagraphs whose edge targets are labeled with probabilistic dependent
types, and then showing these connections are fulfilled by processes
involving metagraph chronomorphisms. Examples are drawn from the core
cognitive algorithms used in the OpenCog AGI framework: Probabilistic
logical inference, evolutionary program learning, pattern mining,
agglomerative clustering, and nonlinear-dynamical
attention allocation.

The analysis presented involves representing these cognitive
algorithms as recursive discrete decision processes involving
optimizing functions defined over metagraphs, in which the key
decisions involve sampling from probability distributions over
metagraphs and enacting sets of combinatory operations on selected
sub-metagraphs. The mutual associativity of the combinatory operations
involved in a cognitive process is shown to often play a key role in
enabling the decomposition of the process into folding and unfolding
operations; a conclusion that has some practical implications for the
particulars of cognitive processes, e.g. militating toward use of
reversible logic and reversible program execution. It is also observed
that where this mutual associativity holds, there is an alignment
between the hierarchy of subgoals used in recursive decision process
execution and a hierarchy of subpatterns definable in terms of formal
pattern theory.
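
A toy Python sketch of this framing: a recursive decision process that
repeatedly samples a sub-metagraph (here just a pair of values) from the
current state and applies a combinatory operation to it. When that
operation is associative, the whole process collapses to a single fold,
independent of how the decisions were grouped. All details are toy
assumptions, not the paper's actual construction:

import random
from functools import reduce

random.seed(42)

def decision_process(items, op, steps):
    """Repeatedly sample two atoms and combine them with `op`."""
    items = list(items)
    for _ in range(steps):
        if len(items) < 2:
            break
        a, b = random.sample(items, 2)   # sample a sub-metagraph (a pair)
        items.remove(a)
        items.remove(b)
        items.append(op(a, b))           # enact the combinatory operation
    return items

# With an associative op (max, standing in for a pattern-combining
# operator), every grouping of decisions matches one fold over the data:
values = [0.2, 0.9, 0.5, 0.7]
process_result = decision_process(values, max, steps=3)[0]
fold_result = reduce(max, values)
assert process_result == fold_result     # grouping doesn't matter
print(process_result)                    # 0.9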



--
Ben Goertzel, PhD
http://goertzel.org

"He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Macaa619c4cd683c6d0336f7b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Patterns of Cognition

2021-02-24 Thread Mike Archbold
In OpenCog the code is kind of compartmentalized -- disparate
algorithms in isolation called as necessary. That has been my
impression at least. But I think in this proposed architecture an
integration is attempted, which makes sense.

On 2/24/21, Ben Goertzel  wrote:
> "Patterns of Cognition: Cognitive Algorithms as Galois Connections
> Fulfilled by Chronomorphisms On Probabilistically Typed Metagraphs"
> 
> https://arxiv.org/abs/2102.10581
> 
> New draft paper that puts various OpenCog cognitive algorithms in a
> common mathematical framework, and connects them with implementation
> strategies involving chronomorphisms on metagraphs...
> 
> 
> It is argued that a broad class of AGI-relevant algorithms can be
> expressed in a common formal framework, via specifying Galois
> connections linking search and optimization processes on directed
> metagraphs whose edge targets are labeled with probabilistic dependent
> types, and then showing these connections are fulfilled by processes
> involving metagraph chronomorphisms. Examples are drawn from the core
> cognitive algorithms used in the OpenCog AGI framework: Probabilistic
> logical inference, evolutionary program learning, pattern mining,
> agglomerative clustering, and nonlinear-dynamical
> attention allocation.
> 
> The analysis presented involves representing these cognitive
> algorithms as recursive discrete decision processes involving
> optimizing functions defined over metagraphs, in which the key
> decisions involve sampling from probability distributions over
> metagraphs and enacting sets of combinatory operations on selected
> sub-metagraphs. The mutual associativity of the combinatory operations
> involved in a cognitive process is shown to often play a key role in
> enabling the decomposition of the process into folding and unfolding
> operations; a conclusion that has some practical implications for the
> particulars of cognitive processes, e.g. militating toward use of
> reversible logic and reversible program execution. It is also observed
> that where this mutual associativity holds, there is an alignment
> between the hierarchy of subgoals used in recursive decision process
> execution and a hierarchy of subpatterns definable in terms of formal
> pattern theory.
> 
> 
> --
> Ben Goertzel, PhD
> http://goertzel.org
> 
> "He not busy being born is busy dying" -- Bob Dylan

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Ma5fe87ce24f8b1d9478472b3
Delivery options: https://agi.topicbox.com/groups/agi/subscription