On Tue, Jun 5, 2018 at 10:11 PM, Arnold Müller via AGI <agi@agi.topicbox.com
> wrote:
> How does this mind persist knowledge?
>

All three of the AI Minds (in JavaScript, Forth and Perl) have persistent
knowledge by means of conceptual engrams tagged together for knowledge
representation (KR). There is a special tru(th)-value tag for indicating an
idea that is strongly believed. Currently there is a high tru(th) value
only on the idea "GOD DOES NOT PLAY DICE", originally expressed by Albert
Einstein as "GOTT SPIELT NICHT WUERFEL". Since "Seeing is believing", the
AI must wait for embodiment in robots for high truth-values to be set on
the basis of visual perception.
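As a rough sketch only (the names below are invented for illustration and do not come from the actual AI Mind source), conceptual engrams tagged together with a tru(th)-value might look like this in JavaScript:

```javascript
// Hypothetical sketch of conceptual engrams tagged together for
// knowledge representation (KR), each carrying a "tru" (truth) value.
// All names here are illustrative, not from the actual AI Mind code.
const knowledgeBase = [];

function storeEngram(subject, verb, object, negated, tru) {
  // Tag the concepts of one idea together as a single engram.
  knowledgeBase.push({ subject, verb, object, negated, tru });
}

// Only ideas above a truth-value threshold count as strongly believed.
function stronglyBelieved(threshold) {
  return knowledgeBase.filter(e => e.tru >= threshold);
}

storeEngram("GOD", "PLAY", "DICE", true, 64);    // high truth-value
storeEngram("KIDS", "MAKE", "ROBOTS", false, 0); // ordinary idea
```

In such a scheme, embodied visual perception would raise the tru value of an engram at the moment of confirmation.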


> ..and going from 3 entities to say 10, 100, 1000 - what is the impact on
> responsiveness?
>

Perhaps the responsiveness will slow down as the program grows larger. It
must eventually become a MasPar (massively parallel) program.


> Does it modify its own data structures/code or store it in some file, say
> json?
>

It is not yet self-modifying code. There is a language called Dylan, which
is supposed to be "DY(namic) LA(nguage)", but I have never learned Dylan.
Some parts of the AI, such as learning English or German or Russian syntax,
could be made self-modifying by having nine modules for nine parts of
speech and by having gradient variables involved with each module so that
types of Chomskyan sentences could be strung together in a spiral of
learning, including some parts of speech (such as
http://ai.neocities.org/ConJoin.html for "conjunction") with a high
inclusion-value, and leaving out other parts of speech with a low
inclusion-value.
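To illustrate the idea (with invented module names, not the actual ones), the inclusion-value gating of part-of-speech modules might be sketched like this:

```javascript
// Hypothetical sketch: part-of-speech modules gated by a gradient
// "inclusion-value", so that sentence generation strings together
// only those parts of speech whose value clears a threshold.
// Names and values are illustrative, not from the real AI Mind.
const posModules = [
  { name: "noun",         inclusion: 90, word: () => "ROBOTS" },
  { name: "verb",         inclusion: 85, word: () => "NEED" },
  { name: "pronoun",      inclusion: 80, word: () => "ME" },
  { name: "conjunction",  inclusion: 60, word: () => "AND" },
  { name: "interjection", inclusion: 5,  word: () => "OH" },
];

function generatePhrase(minInclusion) {
  // Include high-value parts of speech; leave out low-value ones.
  return posModules
    .filter(m => m.inclusion >= minInclusion)
    .map(m => m.word())
    .join(" ");
}
```

Raising or lowering each inclusion-value over time would be the "spiral of learning" by which the syntax modules modify their own behavior.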


> How does it evolve/become more intelligent?
>

As more and more mental abilities are coded in by the AI mind maintainer --
http://ai.neocities.org/maintainer.html -- the AI becomes more intelligent.
Right now, as of AI D-Day June 6, 2018, it can reason with inference and
answer various what-queries such as "what do you think". These postings of
AI progress to the AGI mail-list are my attempt to get other AGI
enthusiasts either to create similar AI Minds or to "embrace and extend"
one of the three current AI Minds, especially the Perlmind for Web sites or
the Forthmind for robots.



> What is the mechanism for these minds to share their knowledge, is there
> an API or could they just "talk" to each other thru the same interface
> humans use to communicate with it?
>

There is currently no Application Programming Interface (API), but better
programmers than I could build one in Perl or in Forth. One fellow fifteen
years ago made his version of MindForth able to send and receive e-mails,
but unfortunately MindForth did not become capable of true thinking until
January of 2008.


> Could it be trained as an agent, say to act as a sales representative
> selling specific goods?
>

Even the current JavaScript AI could have its innate knowledge base --
http://ai.neocities.org/MindBoot.html -- rewritten or expanded to include
information on "specific goods" being offered for sale by a sales agent or
a commercial website. The Perl AI could be hooked in with shopping carts
and financial transactions, etc.
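As a purely illustrative sketch (the real MindBoot stores its innate ideas quite differently), expanding an innate knowledge base with sales information might look like:

```javascript
// Hypothetical sketch: an innate knowledge base expanded with
// information on specific goods for a sales-agent deployment.
// Names and structure are illustrative, not the real MindBoot format.
const mindBoot = [
  { subject: "I", verb: "HELP", object: "KIDS" },
];

function addInnateIdea(subject, verb, object) {
  mindBoot.push({ subject, verb, object });
}

// A sales deployment would seed product knowledge at boot time:
addInnateIdea("WIDGET", "COSTS", "DOLLARS");
addInnateIdea("STORE", "SELLS", "WIDGETS");
```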


> How easy is it for the masses of js developers out there to
> understand/extend the basic open source mind framework, contribute and have
> it grow exponentially?
>

There are probably several million JavaScript programmers out there. The
initial post of this AGI thread was also posted as
https://groups.google.com/d/msg/comp.lang.javascript/IT_ZpzG03iQ/MXvFu0ScAwAJ
in comp.lang.javascript on Usenet, as the latest one of four or five such
posts. The subReddit http://old.reddit.com/r/javascript is also where I
bring the AI to the attention of JavaScript developers, who could easily
extend the "basic open source mind framework" by expanding the MindBoot,
adding new linguistic algorithms, or porting into other natural languages
beyond the current English and Russian.


> Is it on github?
>

Yes, at https://github.com/PriorArt/AGI/blob/master/ghost.pl in Perl and at
http://github.com/BuildingXwithJS/proposals/issues/22 where a JavaScript
expert has offered to evaluate the AI Mind in JavaScript. An old User
Manual at http://github.com/kernc/mindforth/blob/master/wiki/JsAiManual.wiki
needs to be rewritten to cover many recent new AI features.


> How does it integrate with other AGI efforts, are there distinct modules
> to share?
>

Yes, the "distinct modules to share" are listed and linked in an "AiTree"
at the bottom of the http://ai.neocities.org/InFerence.html webpage.


> Please explain it as general concepts without specific abbreviated
> variable names, module names that have no meaning unless deciphered or
> single lines of code - it is aGi after all, not this very narrowness - then
> we could all think more about it.
>

Our AGI list-moderator John Rose helpfully brought up some concerns, such
as "1. Ancient source code started when variable names were required to be
short due to memory constraints, programmer laziness, and/or unprofessional
selfishness." True indeed; programmers used to name variables after their
girlfriends. In recent decades the polite standard is to use long,
understandable variable-names. Two major concerns have limited my AI
variables to short names. The chief concern is that the flag-panel for any
given concept in the Psy conceptual array often needs to be stored with a
line of code containing fifteen variables -- one for each associative tag.
If the variable-names were more than three characters in length, it would
be difficult to store a record in the knowledge base (KB). Secondly, the
concern is mitigated because the http://ai.neocities.org/var.html webpage
describes all the variables and lets the AI Mind Maintainer link from
documentation straight to any variable in the var.html "Table of
Variables".


> It should actually have been self aware enough to have answered all these
> questions and more about itself right?
>

No, because it is still an infant AI.

Thanks for asking all the above intelligent questions. I saw the posts from
John Rose et al. only as I was uploading the initial post of this thread,
and I was too tired from long AI coding sessions to respond until now.

Respectfully submitted,

Arthur T. Murray



> On Tue, Jun 5, 2018, 23:51 A.T. Murray via AGI <agi@agi.topicbox.com>
> wrote:
>
>> In the 5jun18A.html JavaScript AI Mind we would like to re-organize the
>> SpreadAct() mind-module for spreading activation. It should have special
>> cases at the top and default normal operation at the bottom. The special
>> cases include responding to what-queries and what-think queries, such as
>> "what do you think". Whereas JavaScript lets you escape from a loop with
>> the "break" statement, JavaScript also lets you escape from a subroutine or
>> mind-module with the "return" statement that causes program-flow to abandon
>> the rest of the mind-module code and return to the supervenient module. So
>> in SpreadAct() we may put the special-test cases at the top and with the
>> inclusion of a "return" statement so that program-flow will execute the
>> special test and then return immediately to the calling module without
>> executing the rest of SpreadAct().
>>
>> When we run the JSAI without input, we notice that at first a chain of
>> thought ensues based solely on conceptual activations and without making
>> use of the SpreadAct() module. The AI says, "I HELP KIDS" and then "KIDS
>> MAKE ROBOTS" and "ROBOTS NEED ME". As AI Mind maintainers we would like to
>> make sure that SpreadAct() gets called to maintain chains of thought, not
>> only so that the AI keeps on thinking but also so that the maturing AI Mind
>> will gradually become able to follow chains of thought in all available
>> directions, not just from direct objects to related ideas but also
>> backwards from direct objects to related subjects or from verbs to related
>> subjects and objects.
>>
>> In the EnNounPhrase() module we insert a line of code to turn each direct
>> object into an actpsi or concept-to-be-activated in the default operation
>> at the bottom of the SpreadAct() module. We observe that the artificial
>> Mind begins to follow associative chains of thought much more reliably than
>> before, when only haphazard activation was operating. In the special
>> test-cases of the SpreadAct() module we insert the "return" statement in
>> order to perform only the special case and to skip the treatment of a
>> direct object as a point of departure into a chain of thought. Then we
>> observe something strange when we ask the AI "what do you think", after the
>> initial output of "I HELP KIDS". The AI responds to our query with "I THINK
>> THAT KIDS MAKE ROBOTS", which is the idea engendered by the initial thought
>> of "I HELP KIDS" where "KIDS" as a direct object becomes the actpsi going
>> into SpreadAct(). So the beastie really is telling us what is currently on
>> its mind, whereas previously it would answer, "I THINK THAT I AM A PERSON".
>> When we delay entering our question a little, the AI responds "I THINK THAT
>> ROBOTS NEED ME".
>>
>> --
>> http://ai.neocities.org/AiMind.html
>> http://www.amazon.com/dp/0595654371
>> http://cyborg.blogspot.com/2018/06/jmpj0605.html
>> http://github.com/BuildingXwithJS/proposals/issues/22
>>
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> options <https://agi.topicbox.com/groups> Permalink
> <https://agi.topicbox.com/groups/agi/T6e65e55f3a3cf199-M3b79e559e92efc1eca7e8338>
>
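The SpreadAct() control flow described in the quoted post -- special test-cases at the top that exit with "return", default spreading activation at the bottom -- could be sketched roughly as follows. All names and structures here are simplified stand-ins, not the actual AI Mind code:

```javascript
// Hypothetical, simplified sketch of the SpreadAct() flow quoted
// above. The real module works on a conceptual array; here a plain
// object stands in for the activation levels.
let actpsi = "";                       // concept-to-be-activated
let currentIdea = "KIDS MAKE ROBOTS";  // whatever is "on its mind"
const activations = {};                // concept -> activation level

function spreadAct(query) {
  // Special test-case at the top: answer a what-think query and
  // return immediately, skipping the rest of the mind-module.
  if (query === "what do you think") {
    return "I THINK THAT " + currentIdea;
  }
  // Default operation at the bottom: spread activation outward
  // from the direct object so the chain of thought continues.
  if (actpsi) {
    activations[actpsi] = (activations[actpsi] || 0) + 1;
  }
  return null;
}

// EnNounPhrase() would set actpsi to each direct object in turn:
actpsi = "KIDS";
spreadAct(null); // default operation activates "KIDS"
```

The "return" in the special case is what makes the answer reflect whatever idea most recently set currentIdea, instead of falling through to the default activation code.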

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6e65e55f3a3cf199-Md2b0e434295e0f55f1a583e2
Delivery options: https://agi.topicbox.com/groups
