Wait, notice this: we have DNA and memes spreading/infecting/boasting, we
love it and push it everywhere. But DNA and memes are not the human brain that
both DNA and memes create. The human brain consists of a prediction
system/algorithm and a reward system, while the memes are memories in that
brain.
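The predictor-plus-reward picture above can be sketched as a toy program. Everything here (the `ToyBrain` class, the frequency-count predictor, the ±1 reward) is a hypothetical illustration of that one sentence, not any real AGI architecture:

```python
# Toy sketch: a "brain" as a predictor plus a reward system, with memes
# stored as memories. All names are hypothetical illustrations.
from collections import defaultdict

class ToyBrain:
    def __init__(self):
        self.memories = defaultdict(int)   # memes: counts of observed tokens
        self.reward = 0.0                  # scalar reward signal

    def observe(self, token):
        self.memories[token] += 1          # store the meme

    def predict(self):
        # Predict the most frequently seen token (crude frequency model).
        return max(self.memories, key=self.memories.get) if self.memories else None

    def reinforce(self, prediction, actual):
        # Reward correct predictions; punish wrong ones.
        self.reward += 1.0 if prediction == actual else -1.0

brain = ToyBrain()
for tok in ["the", "cat", "the", "dog", "the"]:
    guess = brain.predict()
    brain.observe(tok)
    brain.reinforce(guess, tok)

print(brain.predict())  # -> "the", the most common meme so far
```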
On 2021-09-10 23:25, Matt Mahoney wrote:
Evolution only cares about how many offspring you have, but motivation
to play and learn skills makes you more likely to survive and have more.
I don't think this is a trait we want in machines that are supposed to
serve us. The goal is not runaway
Sure, you can make a super duper HQ (RTS pun, good game, great songs;
https://www.youtube.com/watch?v=3KAVfhqH0zg) GPT-9 that predicts really, really
well, but it won't ever nag about AGI all day, like me. And that's a problem.
On the bright side, it will not nag about the common words like the, c
On Fri, Sep 10, 2021, 12:21 AM wrote:
>
> I already know how to make it emerge a desire to maintain input, learning,
> and output. You need this ability if you want AGI.
>
No you don't. You only need to model it in humans so you can predict human
behavior. Then you program a robot to carry out t
On Fri, Sep 10, 2021 at 2:59 PM Ben Goertzel via AGI
wrote:
> ah yes these are very familiar materials! ;)
>
> Linas Vepstas and I have been batting around Coecke's papers for an
> awfully long time now...
Good. I know I mentioned it to Linas in 2019, and possibly even 2010, but I
didn't know
ah yes these are very familiar materials! ;)
Linas Vepstas and I have been batting around Coecke's papers for an
awfully long time now...
On Thu, Sep 9, 2021 at 9:54 PM Rob Freeman wrote:
>
> On Fri, Sep 10, 2021 at 2:36 PM Ben Goertzel via AGI
> wrote:
>>
>> ...
>> Working out the specifics
On Fri, Sep 10, 2021 at 2:36 PM Ben Goertzel via AGI
wrote:
> ...
> Working out the specifics of the Curry-Howard mapping from MeTTa to
> intuitionistic logics, and from there to categorial semantics, is one
> of the things on our plate for the next couple months
Ah, if that is to be worked out
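For readers who haven't met it, the Curry-Howard correspondence Ben refers to can be illustrated in a few lines of generic Lean 4. This is not MeTTa; it only shows propositions-as-types in an intuitionistic setting:

```lean
-- Minimal illustration of Curry-Howard: each term below is
-- simultaneously a program and a proof in intuitionistic logic.

-- Implication is a function type: a proof of P → Q is a function.
def modusPonens (P Q : Prop) (h : P → Q) (p : P) : Q := h p

-- Conjunction is a product type.
def andComm (P Q : Prop) (h : P ∧ Q) : Q ∧ P := ⟨h.2, h.1⟩

-- Disjunction is a sum type; case analysis is pattern matching.
def orComm (P Q : Prop) (h : P ∨ Q) : Q ∨ P :=
  match h with
  | Or.inl p => Or.inr p
  | Or.inr q => Or.inl q
```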
>> -- design of a new programming language (MeTTa = Meta Type Talk)
>> designed to serve as the type system and formalism behind the new
>> version of OpenCog Atomspace. A lot of math/CS work has been done in
>> the last 9 months leading up to this design...
>
>
> Meta Type Talk...
>
> Is it a fai
On Thursday, September 09, 2021, at 11:22 PM, Matt Mahoney wrote:
> If you program your AGI to positively reinforce input, learning, and output,
> will it develop senses of qualia, consciousness, and free will? I mean in the
> sense that it is motivated like we are to preserve the reward signal b
On Fri, Sep 10, 2021 at 1:49 PM Ben Goertzel via AGI
wrote:
> ...
> Our OpenCog/SNet team is spending a lot of time on down-to-earth
> stuff, some of which we'll talk about in some future AGI Discussion
> sessions
>
> Mainly
>
> -- design of a new programming language (MeTTa = Meta Type Talk)
> d
On Thu, Sep 9, 2021 at 5:06 PM wrote:
>
> I don't think Man as a group has much chance of getting that to happen. Need
> to get more down to earth, I think.
Our OpenCog/SNet team is spending a lot of time on down-to-earth
stuff, some of which we'll talk about in some future AGI Discussion
sessions
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T5e30c339c3bfa713-Mfd418db08fc3c233d86760a4