Someone I spoke to suggested using Codex for when the "AGI" gets a prompt such
as this:
'Tell me a 30-word sentence but don't use any word you have already written,
e.g. "the the" or "sat sat".'
Because in this task you need to scan all the words you have written so far to
make sure there are no duplicates.
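The scan that task requires is trivial for ordinary code, which is presumably the point of reaching for Codex. A minimal Python sketch (the whitespace tokenization and case-folding here are my own simplifying assumptions, not part of the original prompt):

```python
def has_duplicate_words(text: str) -> bool:
    """Return True if any word (case-insensitive) appears more than once."""
    words = text.lower().split()
    return len(words) != len(set(words))

def next_word_allowed(written: list[str], candidate: str) -> bool:
    """The per-step check a word-by-word generator would need."""
    return candidate.lower() not in {w.lower() for w in written}
```

A pure next-token predictor has no such explicit scan; it would have to approximate this membership test internally over its whole context.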
On Wednesday, September 22, 2021, at 11:04 PM, YKY (Yan King Yin, 甄景贤) wrote:
> I think for GPT-3 to become AGI, it may need:
1) the ability to do multi-step reasoning, eg. with reinforcement learning
2) the ability to make assumptions, this part may be tricky to do with
neural networks
Then I hav
On Wed, Sep 22, 2021 at 9:57 PM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:
> On 9/20/21, James Bowery wrote:
> > Functions are degenerate Relations. Anyone that starts from a functional
> > programming perspective has already lost the war.
>
> Some concepts seem to be functions more naturally,
> for example in programming you return a single value
> instead of a set of values.
On 9/19/21, immortal.discover...@gmail.com
wrote:
> So we have a context of 9 tic tac toe squares with 2 of your Xs in a row and
> his Os all over the place, you predict something probable and rewardful, the
> 3rd X to make a row. GPT would naturally learn this, Blender would also the
> reward part too, basically.
On 9/20/21, James Bowery wrote:
> Functions are degenerate Relations. Anyone that starts from a functional
> programming perspective has already lost the war.
Some concepts seem to be functions more naturally,
for example in programming you return a single value
instead of a set of values. You
Functions are degenerate Relations. Anyone that starts from a functional
programming perspective has already lost the war.
Here's a question for y'all:
What is the Relational generalization of Curry-Howard?
On Sat, Sep 18, 2021 at 10:09 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.co
So we have a context of 9 tic tac toe squares with 2 of your Xs in a row and
his Os all over the place, you predict something probable and rewardful, the
3rd X to make a row. GPT would naturally learn this, Blender would also the
reward part too, basically.
As for a FORK, this is like two-of fa
On 9/14/21, ivan.mo...@gmail.com wrote:
> Hi YKY :)
>
> As for Tic Tac Toe, I believe this should work: write a winning strategy in
> any functional language (being lambda calculus based). Then convert it to
> logic rules by Curry-Howard correspondence. And voilà, you have a logic
> representation of the winning strategy.
Hi YKY :)
As for Tic Tac Toe, I believe this should work: write a winning strategy in any
functional language (being lambda calculus based). Then convert it to logic
rules by Curry-Howard correspondence. And voilà, you have a logic
representation of the winning strategy.
Other than Curry-Howar
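The correspondence Ivan invokes can be made concrete in Lean, where it is built in: a function type *is* an implication, and a lambda term of that type *is* a proof. A generic illustration (not the Tic Tac Toe strategy itself):

```lean
-- Curry-Howard: the function type A → B is the proposition "A implies B",
-- and a lambda term inhabiting that type is a proof of it.
theorem modus_ponens (A B : Prop) : (A → B) → A → B :=
  fun f a => f a

-- Function composition corresponds to transitivity of implication.
theorem imp_trans (A B C : Prop) : (A → B) → (B → C) → (A → C) :=
  fun f g a => g (f a)
```

Converting a whole winning strategy this way would mean reading its program text as one large proof term of the corresponding logical statement.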
On 9/11/21, Matt Mahoney wrote:
> Practical programs have time constraints. Play whichever winning move you
> discover first.
That's not a bad strategy per se, and may be a primitive brain mechanism,
but then how do you explain humans' ability to plan ahead and
reason about games like chess or Tic Tac Toe?
The NARS implementations I've been using do not explicitly have a
simulation model, but there is probably a way of forming the Narsese to get
the results you're looking for. My intuition is that you'd be looking for
frequency/confidence scores in a parent statement which indicate a
successful path
Practical programs have time constraints. Play whichever winning move you
discover first.
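Matt's "first winning move you discover" rule can be sketched in a few lines of Python. The board encoding (a 9-element list of 'X', 'O', or None) and the function names are my own illustrative assumptions:

```python
# The 8 winning triples of a 3x3 board, indexed row-major 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def first_winning_move(board, player):
    """Return the first empty square that completes a line for `player`,
    or None if no immediate win exists. Stops at the first one found."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None
```

This is an anytime, greedy rule: it never looks more than one ply ahead, which is exactly the limitation the reply below points at.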
On Sat, Sep 11, 2021, 2:48 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:
> When thinking about the game of Tic Tac Toe,
> I found that it is most natural to allow assumptions in the logic rules.
When thinking about the game of Tic Tac Toe,
I found that it is most natural to allow assumptions in the logic rules.
For example, in the definition of a potential "fork",
in which the player X can win in 2 ways.
How can we write the rules to determine a potential fork?
Here is a very "natural" w
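One operational reading of YKY's question: a square creates a fork for X if, after playing it, X threatens to win on two or more distinct lines. A minimal Python sketch under that definition (board encoding and names are illustrative assumptions, not anything proposed in the thread):

```python
# The 8 winning triples of a 3x3 board, indexed row-major 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def threats(board, player):
    """Lines where `player` holds two squares and the third is empty."""
    return [line for line in LINES
            if [board[i] for i in line].count(player) == 2
            and any(board[i] is None for i in line)]

def is_fork(board, player, square):
    """True if playing `square` gives `player` two simultaneous threats."""
    if board[square] is not None:
        return False
    trial = board.copy()
    trial[square] = player
    return len(threats(trial, player)) >= 2
```

The hypothetical-move step (`trial = board.copy(); trial[square] = player`) is exactly the "assumption" YKY wants the logic rules to express: assume X plays here, then count the resulting threats.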