Re: [agi] The Job market.

2019-10-01 Thread James Bowery
The vast majority of fear mongering about "AGI" comes from the elites
themselves, which is why you won't find a _single_ billionaire, nor a social
"scientist" at an elite university, recognizing lossless compression as a
fair dispute processing mode.  Indeed, New York University's Jonathan Haidt
is so terrified of the truth coming out that he has actually come out against
Occam's Razor.

It is these opponents of this aspect of "AGI" who are going to kill
hundreds of millions by closing off the only dispute processing mode that
they themselves have made essential, having removed from their fellow human
beings the option of sorting proponents of social theories into governments
that test them.

They're junkies.

On Tue, Oct 1, 2019 at 7:37 PM Steve Richfield wrote:

> This thread is an existence proof that people working on AGI have NO clue
> how much damage their creations would do in the hands of the power elite.
> If AI has made things THIS bad, then the damage that AGI would do is
> unimaginable - but that never even entered the conversation.
>
> Forgive them, for they know not what they do? Hell no. You guys recklessly
> threaten the world's population without even looking where this is going.
>
> The Terminator sequel considered the ethics of killing people like those
> on this forum - and decided it was OK.
>
> How does this not fully meet the definition of insanity - of being a
> danger to yourselves and others?
>
> Steve
> On Mon, Sep 30, 2019, 5:42 PM  wrote:
>
>> Thanks Stefan.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-M99cac90f5bc28a83272cfca0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-01 Thread Steve Richfield
This thread is an existence proof that people working on AGI have NO clue
how much damage their creations would do in the hands of the power elite.
If AI has made things THIS bad, then the damage that AGI would do is
unimaginable - but that never even entered the conversation.

Forgive them, for they know not what they do? Hell no. You guys recklessly
threaten the world's population without even looking where this is going.

The Terminator sequel considered the ethics of killing people like those on
this forum - and decided it was OK.

How does this not fully meet the definition of insanity - of being a danger
to yourselves and others?

Steve
On Mon, Sep 30, 2019, 5:42 PM  wrote:

> Thanks Stefan.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-M30379976599634107fab81da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Physical temporal pattern loops

2019-10-01 Thread keghnfeem
The boss wants me to come over to his and his wife's place tomorrow to drink
cool fuel!!!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7db02624de5eea01-M0569e393ed5482f676c7d125
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread keghnfeem
NeuroEvolution of Augmenting Topologies (NEAT):



https://www.youtube.com/watch?v=b3D8jPmcw-g
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T77af318d4abfa8a8-M8ee799e67846f9845cb23741
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Matt Mahoney
On Tue, Oct 1, 2019, 9:21 AM YKY (Yan King Yin, 甄景贤) <generic.intellige...@gmail.com> wrote:

>
> From the data of this model, it would be *inferred* that "John is
> probably unhappy / heart-broken".  It is this inference mechanism that is
> very mysterious to us.
>

Human reproductive behavior is very complex. It is encoded in our 10^9 bits
of DNA. Humans, prairie voles, and some species of birds fall in love
because these species evolved so that offspring raised by two parents have
a better chance of survival.

You could probably write the rules for human sexual behavior, but they are
not well understood. They are difficult to study because of taboos on sexual
signaling. Those taboos exist because they resulted in more children. Humans
are the only primates that cover their reproductive organs, don't go into
heat, and mate outside the ovulation interval.

It's not a matter of inference, but simply of doing research to figure out
the rules. Your alternative is to model evolution, but then you are talking
about 10^48 DNA copy operations on 10^37 bits.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T77af318d4abfa8a8-M37277fc7d26239757cad3227
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Physical temporal pattern loops

2019-10-01 Thread Stefan Reich via AGI
These loops represent actions in the real world. We should represent them as
records. I propose the following fields:

-Actors [in the event]
-Actions [what is done in the event]
-Results
-Continuations [what might happen afterwards]
-Previous events [what happened before]
-Who wants this
-Who doesn't want this

So, say, the use case is going to the fridge to get a beer.

-Actors: me
-Actions: I walk etc.
-Results: I can drink
-Continuation: I continue watching TV
-Previous events: I sat down on the couch
-Who wants this: me, obviously
-Who doesn't want this: my wife (lol)

Anyway, that's the idea. You can use the same record for things computers
do, incidentally. And for abstract stuff as well.

-Actors: the matrix
-Actions: matrix multiplication
-Results: data is now in a better representation

etc.
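
As a minimal sketch, the record could look like this in code (Python here;
the EventRecord name and the exact field types are my assumptions, just one
possible encoding of the field list above):

from dataclasses import dataclass, field
from typing import List

@dataclass
class EventRecord:
    """One physical temporal pattern: an action in the real world."""
    actors: List[str]           # who acts in the event
    actions: List[str]          # what is done in the event
    results: List[str]          # what the event produces
    continuations: List[str] = field(default_factory=list)    # what might happen afterwards
    previous_events: List[str] = field(default_factory=list)  # what happened before
    wanted_by: List[str] = field(default_factory=list)        # who wants this
    opposed_by: List[str] = field(default_factory=list)       # who doesn't want this

# The fridge-and-beer use case from above:
get_beer = EventRecord(
    actors=["me"],
    actions=["walk to the fridge", "take a beer"],
    results=["I can drink"],
    continuations=["I continue watching TV"],
    previous_events=["I sat down on the couch"],
    wanted_by=["me"],
    opposed_by=["my wife"],
)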

See? You can just program AI.

Cheers

-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7db02624de5eea01-Mec43ffc5d537e113e5a53a3b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Yan King Yin, 甄景贤
On Tue, Oct 1, 2019 at 7:48 AM Brett N Martensen wrote:

> Matt is right - logic needs to be grounded in experience.
>
>
> http://matt.colorado.edu/teaching/highcog/readings/b8.pdf


That's a good paper; I will read it in detail later.

I made a mistake earlier.  When the brain thinks about "John loves Mary",
its representation is not just the juxtaposition of the three concepts
"John", "love", and "Mary".  Rather, the brain constructs a model composed
of a whole bunch of *ramifications* of "John loves Mary".  For example:
John would be assumed to be a typical man with typical male characteristics;
John's love for Mary would be assumed to involve the typical emotions of
romantic love; etc.  All these little pieces of (assumed, or abduced)
knowledge constitute the mental model.

When the brain hears that "Mary doesn't love John", it adds some further
facts to the constructed model.

From the data of this model, it would be *inferred* that "John is probably
unhappy / heart-broken".  It is this inference mechanism that is very
mysterious to us.
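
(Editorial aside: a toy sketch of what this inference looks like when forced
into explicit symbolic rules; the predicates and the single rule below are
illustrative assumptions, not a model of the brain's mechanism.)

# Toy forward-chaining over the "John loves Mary" model. The point of
# contrast: a symbolic system needs the unrequited-love rule written down,
# while the brain somehow makes the same jump without an explicit rule.
facts = {("loves", "John", "Mary"), ("not_loves", "Mary", "John")}

def infer(facts):
    derived = set(facts)
    for (p, x, y) in facts:
        # Rule (illustrative): unrequited love implies probable unhappiness.
        if p == "loves" and ("not_loves", y, x) in facts:
            derived.add(("probably_unhappy", x))
    return derived

print(infer(facts))  # includes ('probably_unhappy', 'John')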

It seems reasonable to assume that the mental models are constructed from
neural "features", ie, activation patterns.  But we don't know how the
brain jumps from one mental model to a slightly different mental model
containing new conclusions.

It would be very fruitful to compare this mechanism with symbolic logic
rules.  It may lead to a better way to build AGIs, different from the
logic-based approach.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T77af318d4abfa8a8-Mf9252f5b8f70066a3928d5f3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Stefan Reich via AGI
> We also know from 35 years of experience (beginning with Cyc) that logic
> based knowledge representation is not a path to AGI

I'm begging to differ! But then you already know that I do, I guess :-)

On Mon, 30 Sep 2019 at 23:47, Matt Mahoney  wrote:

> Boolean logic is a subset of neural networks. A single neuron can
> implement any logic gate. Assume the output is clamped between 0 and 1.
>
> A and B = A + B - 1.
> A or B = A + B.
> Not A = -A + 1.
>
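(Editorial aside: a minimal runnable version of the clamped-sum gates Matt
describes above; the neuron helper and the weight/bias encoding are my
framing, not from the post.)

def neuron(weights, bias, inputs):
    # Weighted sum plus bias, clamped between 0 and 1.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return min(1, max(0, s))

AND = lambda a, b: neuron([1, 1], -1, [a, b])   # A + B - 1
OR  = lambda a, b: neuron([1, 1],  0, [a, b])   # A + B
NOT = lambda a:    neuron([-1],    1, [a])      # -A + 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(a))
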
> But first order logic is not so simple. We also know from 35 years of
> experience (beginning with Cyc) that logic based knowledge representation
> is not a path to AGI, in spite of what seems to be a straightforward
> approach not requiring a lot of computing power.
>
> I hope you understand why this is the case. You can't train logic models.
> Language evolved to be learnable on slow, massively parallel networks.
> Semantics comes before grammar. If you want to know what works, study text
> compression.
>
> On Mon, Sep 30, 2019, 8:13 AM YKY (Yan King Yin, 甄景贤) <generic.intellige...@gmail.com> wrote:
>
>> On 9/27/19, Steve Richfield  wrote:
>> > YKY,
>> >
>> > The most basic function of neurons is process control. That is where
>> > evolution started - and continues. We are clearly an adaptive control
>> > system. Unfortunately, there has been little study of the underlying
>> > optimal "logic" of adaptive control systems.
>> >
>> > I strongly believe that a different sort of "logic" is at work, and that
>> > what we call "intelligence" is simply a larger adaptive control system
>> > working according to that "logic". We are clearly more intelligent than
>> > ants, but that is more quantitative than qualitative.
>> >
>> > Learning logic seems like a good idea, but you might want to reconsider
>> the
>> > logic you are learning.
>> >
>> > Steve
>>
>>
>> Thanks for the comment.  It is a very common objection indeed, and also
>> has some good reasons behind it.
>>
>> From cognitive psychology, most people tend to believe that the brain uses
>> *model-based* reasoning instead of *rules-based* reasoning.  We don't
>> fully understand the brain's mechanism, but we may guess at some general
>> principles.  I think the brain uses some sort of neural representations,
>> which are composed of neural "*features*", i.e., certain patterns of
>> neurons' activations.
>>
>> Each neuron is either ON or OFF, or may be regarded as activating with a
>> fuzzy truth value.  Thus we can view each neural feature as a "micro" *logic
>> proposition*.  That creates a rough correspondence between neural
>> representations and logic representations.
>>
>> Indeed, this is not so surprising, as we can express our thoughts in
>> natural language with relative ease, and natural language has the structure
>> of logic propositions.
>>
>> For example, the visual cortex can recognize images such as "cat" and
>> "dog".  Through a dynamical process, it recognizes the situation of "cat
>> chases dog".  This is likely represented by a juxtaposition of "cat",
>> "chase", "dog" neural features.  This is very similar to the
>> *predicate-logic* expression:  chase(cat, dog).
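
(Editorial aside: a tiny sketch of reading a juxtaposition of fuzzy neural
features as a predicate-logic atom; the feature names, the threshold, and the
fixed reading order are illustrative assumptions.)

# Each neural "feature" activates with a fuzzy truth value in [0, 1].
features = {"cat": 0.9, "chase": 0.8, "dog": 0.85, "sleep": 0.1}

def active(name, threshold=0.5):
    return features.get(name, 0.0) >= threshold

# Read the juxtaposition of cat / chase / dog as the predicate-logic
# expression chase(cat, dog):
if active("chase") and active("cat") and active("dog"):
    print("chase(cat, dog)")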
>>
>> So there may not be a huge gap between neural and logical
>> representations.  The next question is:  How does the brain jump from one
>> neural representation state to the next?  In logic, this is achieved via 
>> *rules
>> with variables and quantifiers*.
>>
>> At this point I am not so sure whether the brain's mechanism is really
>> similar to the logical mechanism.  That's why I think you raised a good
>> question, and I still don't have a good answer, but it is a good place to
>> start thinking.
>>
>> So I assume the brain jumps from one neural representation to the next,
>> and that such neural states are formed via juxtaposition of "*concepts*"
>> (which are some neural activation patterns).  One way that this may be
>> different from logic rules is that the neural representations can be
>> "distributed".
>>
>> I need to think about this more, but there may exist a rough
>> correspondence between logic rules and neural state-transitions.
>>
>>


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T77af318d4abfa8a8-M723d6ae6316c4de68fb90834
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Yan King Yin, 甄景贤
On Mon, Sep 30, 2019 at 11:04 PM Stefan Reich via AGI wrote:

> Uh... so where is it on GitHub?
>


The code is here (still under development):
https://github.com/Cybernetic1/GILR

There are further explanations in the README and some screenshots.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T77af318d4abfa8a8-M9558ab26e806aabe01c14397
Delivery options: https://agi.topicbox.com/groups/agi/subscription