Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-05 Thread MP via AGI
Mine, as I’ve said, is literally a JavaScript-to-Java translation of Arthur’s 
JavaScript "mind."

What’s tinybrain? How does it work?

Sent from ProtonMail Mobile

On Tue, Jun 5, 2018 at 1:43 PM, Stefan Reich via AGI  
wrote:

>> My Java mind
>
> Hold on! What? *I* am making a Java mind. Where's your source? Mine is here: 
> http://tinybrain.de/1016060
>
> Stefan
>

Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-05 Thread Stefan Reich via AGI
> My Java mind

Hold on! What? *I* am making a Java mind. Where's your source? Mine is
here: http://tinybrain.de/1016060

Stefan


Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-05 Thread MP via AGI
John, I definitely feel the same way about the massive obscurities. I even 
tried muddling through his diagrams and explanations, to no avail. What I was 
able to do is port his ungodly bizarre code to Java - literally copying and 
pasting with a few syntax tweaks - and got it running... somewhat. I still 
don’t even know where to begin to really "get" what’s going on.

My Java "mind" can "say" a few things before crapping out on me. What bugs me 
the most is the EnBoot module. A ton of direct variable assignments are made, 
and I don’t get why certain values were chosen...

It’s a nightmare that runs on Internet Explorer. But it’s something.

Sent from ProtonMail Mobile
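
[Editorial note: as a minimal hypothetical sketch of the pattern MP describes 
(invented names, indices, and values; not Arthur's actual EnBoot code), the 
module amounts to a long run of hard-coded writes into conceptual memory:]

    // Hypothetical sketch only -- invented names, indices, and values.
    var psy = [];  // time-indexed conceptual memory

    function enBoot() {
      // each engram is written directly at a magic time-index
      psy[12] = { concept: 501, word: "I",     activation: 20 };
      psy[17] = { concept: 820, word: "AM",    activation: 12 };
      psy[23] = { concept: 717, word: "A",     activation: 4  };
      psy[29] = { concept: 563, word: "ROBOT", activation: 8  };
      // ...hundreds more such assignments; why these particular indices
      // and activation levels were chosen is exactly what goes unexplained.
    }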



Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-05 Thread johnrose
Arthur,

Every time you start posting about your "AI Mind" app, I briefly go and look at 
the JS source ("View page source" in the web browser). Here are a few thoughts, 
after working with thousands of codebases over the years, and instead of me 
just saying "if there were an example of how not to write an AI app, this would 
be it":

1. The source code is ancient, begun when variable names were kept short, 
whether from memory constraints, programmer laziness, or unprofessional 
selfishness.

2. The app's code has never been truly refactored away from those small-memory 
constraints.

3. The code is intentionally obscure: it hides gaps in understanding while 
giving the author and others a sense of security by representing "something" 
abstractly.

4. The obscurity either deceives readers deliberately, or honestly and 
unintentionally hides the misunderstood complexity of the subject: a reasonable 
first-person effort at understanding that fails unprovably.

5. The code probably cannot be rewritten clearly, since it rests on obscured, 
forgotten memories of misunderstood concepts, even if these are somewhat 
indexed by the dates in the comments.

6. All these things have encrusted over time, layer after layer, and the result 
is often hosted as a talking point, a reference point for similarly limited 
efforts.

7. Or, with very low probability, there is real genius hidden in the code: 
loops upon loops of abstract recursive representations, the most advanced 
chatbot ever created. But I have neither the time nor the energy to investigate 
further, and I assume few do; perhaps another purpose of the app is to wear out 
anyone who seeks such truths. I cannot rule out that the app really is headed 
toward some great AI, but unfortunately it looks like the opposite: childishly 
underpowered and frivolously incomplete.


But there is some sort of novelty to this, I suppose.

If there were a museum of coding oddities, this would definitely make the top 
ten.

IMO the code one writes is a reflection of oneself, a projection of sorts. "AI 
Mind" is more about you, Arthur, your mind over time, and much is revealed.

So you can imagine: if an AGI were to kludgily hack out some representation of 
a mind under similar circumstances, what would it "hide", limit, and represent 
at the same time? What would it look like?

Note that JavaScript, and JavaScript AI, is becoming increasingly advanced; 
see, for example, the FAQ auto-creators, bot builders, etc. that use JS. And 
TypeScript, a very powerful abstraction of JS, is surprisingly becoming widely 
adopted...

John



Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-03 Thread MP via AGI
I assumed it was showing progress, not a form of "trolling." Your work, while 
immensely difficult for me to understand (as I’ve privately discussed), is 
nonetheless fascinating from afar. Keep it up!

Sent from ProtonMail Mobile



Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-03 Thread A.T. Murray via AGI
On Sun, Jun 3, 2018 at 10:24 PM, MP via AGI  wrote:

> What’s going to be done to fix this issue?
>

The issue (of generating "THINK" from normal activation rather than from
SpreadAct() activation) has been fixed by the indicated "bugfix" to the
SpreadAct() module, namely by restoring the missing "psyExam" line of code
that was rendering SpreadAct() ineffective.

Btw (by the way), I post these quasi-lab-notes here not to troll the list,
but rather to show the work that I am accomplishing.

Arthur
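
[Editorial note: as a minimal sketch of the class of bug described here 
(invented names and structure; not the actual AI Mind source), a search loop 
that never fetches the row it is meant to examine finds nothing, so no engram 
is activated:]

    // Hypothetical illustration only -- not the actual AI Mind code.
    function spreadAct(psy, seedConcept) {
      for (var t = psy.length - 1; t >= 0; t--) {
        var psyExam = psy[t];    // the kind of fetch line that was missing;
        if (!psyExam) continue;  // without it, the tests below see nothing
        if (psyExam.concept === seedConcept) {
          psyExam.activation += 32;  // spreading activation: the module's main job
        }
      }
    }

[Restoring the fetch restores the activation, which is why answers to a 
what-think query come from SpreadAct() again rather than from residual 
activation of the 840=THINK concept.]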



Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-03 Thread MP via AGI
What’s going to be done to fix this issue?

Sent from ProtonMail Mobile

On Mon, Jun 4, 2018 at 12:21 AM, A.T. Murray via AGI  
wrote:

> We have a problem where the AI Mind is calling Indicative() two times in a 
> row for no good reason. After a what-think query, the AI is supposed to call 
> Indicative() a first time, then ConJoin(), and then Indicative() again. We 
> could make the governance depend upon either the 840=THINK verb or upon the 
> conj-flag from the ConJoin() module, which, however, is not set positive 
> until control flows the first time through the Indicative() module. Although 
> we have been setting conj back to zero at the end of ConJoin(), we could 
> delay the resetting in order to use conj as a control-flag for whether or 
> not to generate thought-clauses joined by one or more conjunctions. Such a 
> method shifts the problem back to the ConJoin() module, which will probably 
> have to check conceptual memory for how many ideas have high activation 
> above a certain threshold warranting the use of a conjunction. Accordingly 
> we go into the Table of Variables webpage and write a description of conj 
> as a dual-purpose variable. Then we need to decide where to reset conj back 
> to zero, if not at the end of Indicative(). We move the zero-reset of conj 
> from ConJoin() to the EnThink() module, and we stop getting more than one 
> call to Indicative() in normal circumstances. However, when we input a 
> what-query, which sets the whatcon variable to a positive one, we encounter 
> problems.
>
> Suddenly it looks as though answers to a what-think query have been coming 
> not from SpreadAct(), but simply from the activation of the 840=THINK 
> concept. It turns out that a line of "psyExam" code was missing from a 
> SpreadAct() search-loop, with the result that no engrams were being found or 
> activated -- which activation is the main job of the SpreadAct() module.
>
> --
> http://ai.neocities.org/AiMind.html
> http://www.amazon.com/dp/0595654371
> http://cyborg.blogspot.com/2018/06/jmpj0603.html
> http://github.com/BuildingXwithJS/proposals/issues/22
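
[Editorial note: as a minimal sketch of the control-flow change described 
above (invented structure, heavily simplified; not the actual AI Mind source), 
conj does double duty as a join-clauses flag, so it must survive the first 
pass through Indicative() and be cleared once per thought in EnThink():]

    // Hypothetical illustration only -- simplified from the prose above.
    var conj = 0;  // 0 = no conjunction pending; positive = conjunction concept

    function conJoin(activations, threshold) {
      // raise the flag only when more than one idea is active above the
      // threshold; no zero-reset here, or the second clause never happens
      var hot = activations.filter(function (a) { return a > threshold; });
      if (hot.length > 1) conj = 302;  // e.g. 302 = AND
    }

    function indicative() {
      // generate one indicative clause (stubbed out here)
    }

    function enThink(activations) {
      indicative();                // first clause
      conJoin(activations, 40);    // may set the conj flag
      if (conj > 0) indicative();  // second clause, joined by a conjunction
      conj = 0;                    // zero-reset moved here from ConJoin()
    }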