Re: [agi] How to make assumptions in a logic engine?

2021-09-24 Thread immortal . discoveries
Someone I spoke to suggested using Codex for when the "AGI" gets a prompt such 
as the one below:

'Tell me a 30-word sentence but don't use any word you have already written, 
e.g. "the the" or "sat sat":'

This is because, in this task, you need to scan all the words written so far to 
make sure there are no duplicates.
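
As a minimal sketch (mine, not part of the suggestion) of the look-back check 
this task requires, assuming we simply filter each candidate word against 
everything written so far:

```python
# Hypothetical sketch: accept a candidate word only if it has not been used yet,
# i.e. the generator must "look back" over everything written so far.
def add_word(sentence_words, candidate):
    if candidate.lower() in (w.lower() for w in sentence_words):
        return False  # duplicate such as "the the" -- must pick another word
    sentence_words.append(candidate)
    return True

words = []
for cand in ["the", "cat", "sat", "the", "mat"]:
    add_word(words, cand)
print(words)  # ['the', 'cat', 'sat', 'mat'] -- the second "the" was rejected
```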

I still don't see how Codex can just get away with doing this either, though. 
Codex can only predict forward and can't intelligently look back at the code it 
has made so far, the way we do in the task shown above. So I feel it is the same 
problem, back to the start / drawing board for whoever offered this idea. But it 
could work; it just wouldn't be very human-like.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-M955169f1f3a21c24ad5a4610
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-23 Thread immortal . discoveries
On Wednesday, September 22, 2021, at 11:04 PM, YKY (Yan King Yin, 甄景贤) wrote:
> I think for GPT-3 to become AGI, it may need:
> 1) the ability to do multi-step reasoning, e.g. with reinforcement learning
> 2) the ability to make assumptions; this part may be tricky to do with
> neural networks

Then I have the answers.

I have fleshed out 3 things to add to "DALL-E":

1) Using reward for sensory prediction.
2) Slow thinking: Learning sensory rewards -- Agency.
3) Slow thinking: Using sensory rewards to use memories to do motor tasks.

I will email you the new note I just wrote up, in simple-to-read form.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-M4172ce5e33e6816de5eae54a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-22 Thread James Bowery
On Wed, Sep 22, 2021 at 9:57 PM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On 9/20/21, James Bowery  wrote:
> > Functions are degenerate Relations.  Anyone that starts from a functional
> > programming perspective has already lost the war.
>
> Some concepts seem to be functions more naturally,
>

You don't understand the point of avoiding premature degeneration in your
foundations, do you?



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Mea6ca15099ac9ec368402d1f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-22 Thread Yan King Yin, 甄景贤
On 9/19/21, immortal.discover...@gmail.com wrote:
> So we have a context of 9 tic tac toe squares with 2 of your Xs in a row and
> his Os all over the place; you predict something probable and rewardful: the
> 3rd X to make a row. GPT would naturally learn this; Blender would also learn
> the reward part, basically.
>
> As for a FORK, this is like two of my favorite meals: "Give me some fries", or
> I could have said "Give me some cake". I predict them about 50% each, based on
> how rewardful and popular they appear in the data. In that case, 50% of the
> time I choose fries, then next time cake, because fries has been inhibited
> and has fired its neural energy now, changing the distribution.
>
> It's OK to pursue logic, but I can't help but point out that this sounds
> exactly like my AI and Transformer AI. In fact, both are the same; only the
> approach to solving the efficiency problem differs. In this case, I don't
> see how yours would be efficient; it seems like GOFAI, no? Isn't it GOFAI?
> This is not something that scales like GPT; AFAIK your logic-based approach
> focuses on a few rules and disregards how many resources it needs
> (compute doesn't matter, memory neither).
>
> *_How can your approach, predicting B for some context A, be efficient like
> GPT? There is a lot to leverage when given a context, and GPT leverages it.
> Or, if you intend to use Transformer+logic, why? The Transformer already does
> all the methods you mentioned to leverage context._*

This is the **most important question** concerning the future of AGI,
in my opinion.

GPT-3 is just one step away from AGI.

Recently, the Beijing Academy of Artificial Intelligence built a language model
(LM) similar to GPT-3, called Wu Dao 2.0 (悟道), with 10x the number of weights
(1.75 trillion).

BERT and GPT-3 are basically Turing-universal computing modules.

I think for GPT-3 to become AGI, it may need:
1) the ability to do multi-step reasoning, e.g. with reinforcement learning
2) the ability to make assumptions; this part may be tricky to do with
neural networks

[ more on this... this is just a partial reply ]

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-M8bd85b6e7259c55688d267a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-22 Thread Yan King Yin, 甄景贤
On 9/20/21, James Bowery  wrote:
> Functions are degenerate Relations.  Anyone that starts from a functional
> programming perspective has already lost the war.

Some concepts seem to be functions more naturally,
for example in programming you return a single value
instead of a set of values.  You either perform an action
or not.

But I see that relations do play a central role in categorical logic.

> Here's a question for y'all:
>
> What is the Relational generalization of Curry-Howard?

Let me see... a relation is a sub-object of the Cartesian product of
two domains or types, such as A × B.  A sub-object of A × B is a new type,
call it R, whose elements are propositions such as aRb or R(a,b).

According to the book "Lectures on the Curry-Howard Isomorphism",
a relation such as R is a type constructor,
and a proposition such as aRb is a dependent type.
The type constructor returns a new type aRb for each a ∈ A and b ∈ B.
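
As a minimal Lean sketch of this reading (my own illustration, not from the
book; the names Rel and lt' are mine), a relation assigns a proposition, i.e. a
dependent type, to each pair of elements:

```lean
-- A relation on types A and B, read as a type constructor: it returns a
-- proposition (a dependent type) R a b for every a : A and b : B.
def Rel (A B : Type) : Type := A → B → Prop

-- Example: "less than" on the natural numbers is such a relation.
abbrev lt' : Rel Nat Nat := fun a b => a < b

#check lt' 2 3                    -- lt' 2 3 : Prop  (a new type for this pair)
example : lt' 2 3 := by decide    -- a proof term inhabiting that type
```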

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Mab2bdb14214795d205a9ec13
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-19 Thread James Bowery
Functions are degenerate Relations.  Anyone that starts from a functional
programming perspective has already lost the war.

Here's a question for y'all:

What is the Relational generalization of Curry-Howard?

On Sat, Sep 18, 2021 at 10:09 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On 9/14/21, ivan.mo...@gmail.com  wrote:
> > Hi YKY :)
> >
> > As for Tic Tac Toe, I believe this should work: write a winning strategy in
> > any functional language (being lambda calculus based). Then convert it to
> > logic rules by Curry-Howard correspondence. And voilà, you have a logic
> > representation of the winning strategy.
>
> Yes, I'm glad you mentioned Curry-Howard.  My objective here is to
> formulate a set of logic rules for Tic Tac Toe that is very "natural" and
> close to the way humans think about this game.
>
> For example, the human child will first learn the concept of 3-in-a-row,
> etc.  Then he learns that "XX□" is almost 3-in-a-row, and is a "can win"
> situation.  Then he learns that a "fork" is two distinct "can win"
> situations.
> Notice that the "fork" predicate is expressed with the "can win" predicate
> which is learned earlier (ie, more primitive).
>
> In other words, if we want to *re-use* earlier-learned predicates, we need
> to make *assumptions*, ie, imagined moves.  By making imaginary moves
> we get back to the situations we have encountered *before*, instead of
> having to invent new concepts from scratch.
>
> In classical (ancient) logic-based AI, they have so-called ATMS
> (assumption-based truth maintenance systems) that keep track of
> assumptions and the conclusions they lead to.  This makes the
> inference engine quite complicated.
>
> Curry-Howard can offer an interesting and perhaps useful insight:
> under Curry-Howard, an assumption in logic is just a variable.  For
> example if we assume the proposition A, this corresponds to having
> a variable x:A of type A.
>
> When we create a proof making use of this assumption, this corresponds
> to having a function f, taking a proof of A to a proof of B.  In other
> words,
> it is a λ-term "λx. f(x)".  Notice that in this λ-term the variable x is
> bound
> and not free.  This means that our proof has *discharged* the assumption
> A.
>
> Simon Thompson's book "Type Theory and Functional Programming" [1991]
> explains all this very nicely.
>
> > Other than Curry-Howard, from what I learned, logic is a Turing-complete
> > language: just use implication as a function symbol and manage variables in
> > the related predicates. We start from axioms (progressively asserted Tic Tac
> > Toe moves) that are raw material taken for granted, from which all the
> > conclusions in planning are deduced. When you check all the branching
> > conclusions, asserting all the possible opponent moves in between, and you
> > encounter a "win" combination, there could be a potential path to winning if
> > the opponent moves as predicted. This should work for any system, including
> > the Tic Tac Toe game. But beware: there could be an infinite loop in the
> > rules, just as in regular programming, and it happens on recursive
> > implication. This can be avoided by tracking the recursion count and
> > rejecting high-count branches.
> >
> > For Tic Tac Toe, just find a way to represent the board as a predicate
> > system (maybe one predicate 9 parameters long, or three predicates 3
> > parameters long, or whatever else fits), define all the winning
> > combinations, and that is half the job done.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Ma5a6365ef5649312927ea6d4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-18 Thread immortal . discoveries
So we have a context of 9 tic tac toe squares with 2 of your Xs in a row and 
his Os all over the place; you predict something probable and rewardful: the 
3rd X to make a row. GPT would naturally learn this; Blender would also learn 
the reward part, basically.

As for a FORK, this is like two of my favorite meals: "Give me some fries", or I 
could have said "Give me some cake". I predict them about 50% each, based on how 
rewardful and popular they appear in the data. In that case, 50% of the time I 
choose fries, then next time cake, because fries has been inhibited and has 
fired its neural energy now, changing the distribution.
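
A tiny sketch of the inhibition idea (my own illustration; the equal 50% weights 
and the 0.5 inhibition factor are assumptions, not anything stated above): 
sample from a reward-weighted distribution, then suppress the chosen option so 
the next draw favors the other.

```python
import random

# After an option fires, its weight is inhibited and the distribution is
# renormalized, so repeated choices tend to alternate between the two meals.
weights = {"fries": 0.5, "cake": 0.5}

def choose(weights, inhibition=0.5):
    options = list(weights)
    pick = random.choices(options, weights=[weights[o] for o in options], k=1)[0]
    weights[pick] *= inhibition            # the fired option loses "neural energy"
    total = sum(weights.values())
    for o in options:                      # renormalize to a probability distribution
        weights[o] /= total
    return pick

print([choose(weights) for _ in range(4)])  # e.g. ['fries', 'cake', 'fries', 'cake']
```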

It's OK to pursue logic, but I can't help but point out that this sounds exactly 
like my AI and Transformer AI. In fact, both are the same; only the approach to 
solving the efficiency problem differs. In this case, I don't see how yours 
would be efficient; it seems like GOFAI, no? Isn't it GOFAI? This is not 
something that scales like GPT; AFAIK your logic-based approach focuses on a few 
rules and disregards how many resources it needs (compute doesn't matter, 
memory neither).

*_How can your approach, predicting B for some context A, be efficient like 
GPT? There is a lot to leverage when given a context, and GPT leverages it. Or, 
if you intend to use Transformer+logic, why? The Transformer already does all 
the methods you mentioned to leverage context._*
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Madc00c6b2628f6dd840d2df0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-18 Thread Yan King Yin, 甄景贤
On 9/14/21, ivan.mo...@gmail.com  wrote:
> Hi YKY :)
>
> As for Tic Tac Toe, I believe this should work: write a winning strategy in
> any functional language (being lambda calculus based). Then convert it to
> logic rules by Curry-Howard correspondence. And voilà, you have a logic
> representation of the winning strategy.

Yes, I'm glad you mentioned Curry-Howard.  My objective here is to
formulate a set of logic rules for Tic Tac Toe that is very "natural" and
close to the way humans think about this game.

For example, the human child will first learn the concept of 3-in-a-row,
etc.  Then he learns that "XX□" is almost 3-in-a-row, and is a "can win"
situation.  Then he learns that a "fork" is two distinct "can win" situations.
Notice that the "fork" predicate is expressed with the "can win" predicate
which is learned earlier (ie, more primitive).

In other words, if we want to *re-use* earlier-learned predicates, we need
to make *assumptions*, ie, imagined moves.  By making imaginary moves
we get back to the situations we have encountered *before*, instead of
having to invent new concepts from scratch.

In classical (ancient) logic-based AI, they have so-called ATMS
(assumption-based truth maintenance systems) that keep track of
assumptions and the conclusions they lead to.  This makes the
inference engine quite complicated.

Curry-Howard can offer an interesting and perhaps useful insight:
under Curry-Howard, an assumption in logic is just a variable.  For
example if we assume the proposition A, this corresponds to having
a variable x:A of type A.

When we create a proof making use of this assumption, this corresponds
to having a function f, taking a proof of A to a proof of B.  In other words,
it is a λ-term "λx. f(x)".  Notice that in this λ-term the variable x is bound
and not free.  This means that our proof has *discharged* the assumption
A.

Simon Thompson's book "Type Theory and Functional Programming" [1991]
explains all this very nicely.
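
As a small Lean sketch of this correspondence (my own illustration, not from 
the book): assuming A means having a variable x : A in scope, and λ-abstracting 
over it discharges the assumption, yielding a proof of A → B.

```lean
-- Under Curry-Howard, an assumption is a variable: given f that turns a proof
-- of A into a proof of B, the λ-term below binds x : A, discharging A.
example (A B : Prop) (f : A → B) : A → B :=
  fun x : A => f x   -- x is bound, not free, so the assumption A is discharged
```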

> Other than Curry-Howard, from what I learned, logic is a Turing-complete
> language: just use implication as a function symbol and manage variables in
> the related predicates. We start from axioms (progressively asserted Tic Tac
> Toe moves) that are raw material taken for granted, from which all the
> conclusions in planning are deduced. When you check all the branching
> conclusions, asserting all the possible opponent moves in between, and you
> encounter a "win" combination, there could be a potential path to winning if
> the opponent moves as predicted. This should work for any system, including
> the Tic Tac Toe game. But beware: there could be an infinite loop in the
> rules, just as in regular programming, and it happens on recursive
> implication. This can be avoided by tracking the recursion count and
> rejecting high-count branches.
>
> For Tic Tac Toe, just find a way to represent the board as a predicate system
> (maybe one predicate 9 parameters long, or three predicates 3 parameters long,
> or whatever else fits), define all the winning combinations, and that is half
> the job done.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Mca85424b4fc94610800e1dc4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-13 Thread ivan . moony
Hi YKY :)

As for Tic Tac Toe, I believe this should work: write a winning strategy in any 
functional language (being lambda calculus based). Then convert it to logic 
rules by Curry-Howard correspondence. And voilà, you have a logic 
representation of the winning strategy.

Other than Curry-Howard, from what I learned, logic is a Turing-complete 
language: just use implication as a function symbol and manage variables in the 
related predicates. We start from axioms (progressively asserted Tic Tac Toe 
moves) that are raw material taken for granted, from which all the conclusions 
in planning are deduced. When you check all the branching conclusions, asserting 
all the possible opponent moves in between, and you encounter a "win" 
combination, there could be a potential path to winning if the opponent moves as 
predicted. This should work for any system, including the Tic Tac Toe game. But 
beware: there could be an infinite loop in the rules, just as in regular 
programming, and it happens on recursive implication. This can be avoided by 
tracking the recursion count and rejecting high-count branches.

For Tic Tac Toe, just find a way to represent the board as a predicate system 
(maybe one predicate 9 parameters long, or three predicates 3 parameters long, 
or whatever else fits), define all the winning combinations, and that is half 
the job done.
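
A rough procedural sketch of the branching search described above (my own 
illustration, not from the post; the 9-cell board encoding and the depth bound 
are assumptions):

```python
# Assert possible moves, look for a "win" combination on each branch, and bound
# the recursion count so recursive expansion cannot loop forever.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def wins(board, p):
    return any(all(board[i] == p for i in line) for line in LINES)

def path_to_win(board, player, opponent, depth=9):
    """True if some branch of play lets `player` reach a winning combination,
    assuming the opponent moves as predicted (a *potential* path, not a forced
    win).  `depth` is the recursion bound; deep branches are rejected."""
    if depth <= 0:
        return False
    for m in (i for i in range(9) if board[i] == ""):
        b = board[:m] + [player] + board[m+1:]
        if wins(b, player):
            return True
        replies = [i for i in range(9) if b[i] == ""]
        if any(path_to_win(b[:o] + [opponent] + b[o+1:], player, opponent, depth - 2)
               for o in replies):
            return True
    return False

print(path_to_win([""] * 9, "X", "O"))  # True: some predicted line of play lets X win
```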
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Mad2009e132c0980dda5990c7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-13 Thread Yan King Yin, 甄景贤
On 9/11/21, Matt Mahoney  wrote:
> Practical programs have time constraints. Play whichever winning move you
> discover first.

That's not a bad strategy per se and may be a primitive brain mechanism,
but then how do you explain humans' ability to plan ahead and
reason about games like chess or Tic Tac Toe?

YKY

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-M19375bda2840019d0bf98075
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-13 Thread Daniel Jue
The NARS implementations I've been using do not explicitly have a
simulation model, but there is probably a way of forming the Narsese to get
the results you're looking for.  My intuition is that you'd be looking for
frequency/confidence scores in a parent statement which indicate a
successful path to a goal state.

At first glance your problem sounded like a Monte Carlo Tree Search
candidate, since the strength of a decision tree node is based on a
sampling of its child nodes' success rate.  (And of course the search part
can be informed rather than truly random, so you can discard
nonsensical choices.)
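
To make the sampling idea concrete, here is a simplified flat-Monte-Carlo 
sketch (my own, not Daniel's code, and not a full MCTS with tree expansion): a 
move's strength is estimated from the success rate of random playouts from the 
child position.

```python
import random

# Estimate a move's strength from random playouts of the resulting position.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def wins(b, p):
    return any(all(b[i] == p for i in line) for line in LINES)

def playout(board, to_move):
    """Play uniformly random moves to the end; return the winner or None (draw)."""
    b, p = board[:], to_move
    while True:
        empty = [i for i in range(9) if b[i] == ""]
        if not empty:
            return None
        b[random.choice(empty)] = p
        if wins(b, p):
            return p
        p = "O" if p == "X" else "X"

def move_strength(board, move, player="X", samples=200):
    """Fraction of sampled playouts that `player` wins after taking `move`."""
    b = board[:]
    b[move] = player
    if wins(b, player):
        return 1.0
    other = "O" if player == "X" else "X"
    return sum(playout(b, other) == player for _ in range(samples)) / samples

empty_board = [""] * 9
print({m: round(move_strength(empty_board, m), 2) for m in range(9)})
# The centre square (index 4) tends to get the highest score.
```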

On Sat, Sep 11, 2021 at 2:48 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> When thinking about the game of Tic Tac Toe,
> I found that it is most natural to allow assumptions in the logic rules.
> 
> For example, consider the definition of a potential "fork",
> in which the player X can win in 2 ways.
> 
> How can we write the rules to determine a potential fork?
> Here is a very "natural" way to state it:
> 
> assume X plays move a:
> assume O plays an arbitrary (non-winning) move,
> assume X plays move b then X wins,
> or, assume X plays move c then X wins,
> and b != c
> then a is a potential fork.
> 
> So I wonder: how can a logic inference engine handle assumptions?
> Does OpenCog or NARS have this ability?
> 
> Thanks :)
> YKY


-- 
Daniel Jue
Cognami LLC
240-515-7802
www.cognami.ai

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-M6fb3874e720945ed9f444b51
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How to make assumptions in a logic engine?

2021-09-11 Thread Matt Mahoney
Practical programs have time constraints. Play whichever winning move you
discover first.

On Sat, Sep 11, 2021, 2:48 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> When thinking about the game of Tic Tac Toe,
> I found that it is most natural to allow assumptions in the logic rules.
> 
> For example, consider the definition of a potential "fork",
> in which the player X can win in 2 ways.
> 
> How can we write the rules to determine a potential fork?
> Here is a very "natural" way to state it:
> 
> assume X plays move a:
> assume O plays an arbitrary (non-winning) move,
> assume X plays move b then X wins,
> or, assume X plays move c then X wins,
> and b != c
> then a is a potential fork.
> 
> So I wonder: how can a logic inference engine handle assumptions?
> Does OpenCog or NARS have this ability?
> 
> Thanks :)
> YKY

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Mf9405d9d8deeebb0d7756615
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] How to make assumptions in a logic engine?

2021-09-10 Thread Yan King Yin, 甄景贤
When thinking about the game of Tic Tac Toe,
I found that it is most natural to allow assumptions in the logic rules.

For example, consider the definition of a potential "fork",
in which the player X can win in 2 ways.

How can we write the rules to determine a potential fork?
Here is a very "natural" way to state it:

assume X plays move a:
assume O plays an arbitrary (non-winning) move,
assume X plays move b then X wins,
or, assume X plays move c then X wins,
and b != c
then a is a potential fork.
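
A minimal procedural reading of the rule above (my own sketch, not from the 
post; it encodes the board as a list of 9 cells and, for brevity, skips O's 
intermediate non-winning reply):

```python
# Hypothetical sketch: "assume X plays move a" is modelled by imagining the move
# on a copy of the board, then re-using the earlier-learned "can win" predicate.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def wins(board, p):
    return any(all(board[i] == p for i in line) for line in LINES)

def can_win_moves(board, p):
    """All moves b such that 'X plays move b then X wins' holds immediately."""
    return [b for b in range(9) if board[b] == ""
            and wins(board[:b] + [p] + board[b+1:], p)]

def is_potential_fork(board, a, p="X"):
    """Assume p plays move a; a is a potential fork if there are then two
    distinct immediate winning moves b != c."""
    if board[a] != "":
        return False
    assumed = board[:a] + [p] + board[a+1:]   # the imagined (assumed) move
    return len(can_win_moves(assumed, p)) >= 2

# X in opposite corners, O in the centre: playing the corner at index 2
# threatens both 0-1-2 and 2-5-8, so it is a potential fork.
board = ["X", "", "", "", "O", "", "", "", "X"]
print(is_potential_fork(board, 2))   # True
```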

So I wonder: how can a logic inference engine handle assumptions?
Does OpenCog or NARS have this ability?

Thanks :)
YKY

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T74958068c4e0a30f-Me5665989cf89a81d743e90be
Delivery options: https://agi.topicbox.com/groups/agi/subscription