Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-14 Thread Ben Goertzel
On Thu, Jan 14, 2021 at 12:10 PM John Rose wrote: > > I noticed you pulled out the Sheldrake reference. It just became irrelevant to the new simplified way of mapping CD p-bits into PLN simple TVs ...
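
[For readers outside the thread: PLN simple truth values are (strength, confidence) pairs, while a CD p-bit carries independent positive and negative evidence, so "both" (contradiction) and "neither" (gap) are representable. Below is a minimal Python sketch of one plausible collapse of a p-bit into a simple TV, assuming a count-based evidence model and an n/(n+K) confidence formula; the names, the constant K, and the formula itself are illustrative assumptions, not the mapping from the paper draft.]

from dataclasses import dataclass

K = 10.0  # PLN-style lookahead/personality parameter (assumed value)

@dataclass
class PBit:
    pos: float  # evidence that the proposition holds
    neg: float  # evidence that the proposition fails (independent of pos)

@dataclass
class SimpleTV:
    strength: float    # estimated probability of truth
    confidence: float  # weight of evidence behind that estimate

def pbit_to_stv(p: PBit) -> SimpleTV:
    total = p.pos + p.neg
    if total == 0.0:
        # "neither true nor false": no evidence, zero confidence
        return SimpleTV(strength=0.5, confidence=0.0)
    # Strength collapses the paraconsistent pair into a single probability;
    # confidence grows with total evidence, PLN-style: n / (n + K).
    return SimpleTV(strength=p.pos / total, confidence=total / (total + K))

# A p-bit with heavy evidence on both sides ("both true and false") maps to
# middling strength but high confidence -- the point in the thread: the
# contradiction information is lost in the simple-TV image.
print(pbit_to_stv(PBit(pos=40, neg=38)))  # strength ~0.51, confidence ~0.89
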

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-14 Thread John Rose
Good idea. The p-bits idea is clever, but it does open up a whole thing on emulation. I noticed you pulled out the Sheldrake reference. One of the reasons this paper is so good is that it relates to an actual AGI system in advanced development, the OpenCog framework. And then all the logics intermapped ...

Re: [agi] AGI = GLOBAL CATASTROPHIC RISK?

2021-01-14 Thread stefan.reich.maker.of.eye via AGI
> Look, if we don't build AGI, we are going to die. Hurry up then!

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-14 Thread Ben Goertzel
> I'll give this draft another read/bugfix before posting on Arxiv though... yeah -- I'm gonna remove the qubit-related stuff and put that in a separate paper... I fixed some errors in that and did some more stuff, but it's getting too much for a side-note in this paper, which is focused on other matters ...

Re: [agi] AGI = GLOBAL CATASTROPHIC RISK?

2021-01-14 Thread immortal . discoveries
Yes, an AGI *in a box* can attain more "data" by improving its intelligence. Say the AGI has a dataset in the box: "The cat ate food. The dog ate food. The cat meows." Now the AGI adds the ability to translate words cat=dog by seeing that they share some contexts (eat), so maybe they share others too ...
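
[A toy Python sketch of the context-overlap step described above: two words are treated as candidate synonyms when the sets of words they co-occur with overlap enough. The corpus, the one-word window, and the Jaccard threshold are all illustrative assumptions.]

corpus = ["the cat ate food", "the dog ate food", "the cat meows"]

def contexts(word, sentences, window=1):
    """Collect the words appearing within `window` positions of `word`."""
    ctx = set()
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == word:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                ctx.update(toks[lo:i] + toks[i + 1:hi])
    return ctx

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

cat, dog = contexts("cat", corpus), contexts("dog", corpus)
sim = jaccard(cat, dog)
print(f"cat/dog context overlap: {sim:.2f}")  # shared contexts: 'the', 'ate'
if sim > 0.4:  # arbitrary threshold for this toy example
    # Having equated cat~dog, the system can hypothesize "data" it was never
    # given, e.g. transferring "the cat meows" to:
    print("hypothesis: the dog meows")

[On this tiny corpus the overlap is about 0.67, so the transfer fires; in a real system the window, weighting, and threshold would of course matter a great deal more.]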