Re: [agi] MindForth achieves True AI functionality

2008-01-26 Thread Richard Loosemore

A. T. Murray wrote:

MindForth free open AI source code on-line at
http://mentifex.virtualentity.com/mind4th.html 
has become a True AI-Complete thinking mind 
after years of tweaking and debugging.


On 22 January 2008 the AI Forthmind began to 
think effortlessly and almost flawlessly in 
loops of meandering chains of thought. 

Users are invited to download the AI Mind 
and decide for themselves if what they see 
is machine intelligence and thinking. The 
http://mentifex.virtualentity.com/m4thuser.html 
User Manual explains all the steps involved. 

MindForth is the Model-T of True AI software, 
roughly comparable to the state of the art in 
automobiles one hundred years ago in 1908. 
As such, the AI in Forth will not blow you 
away with any advanced features, but will 
subtly show you the most primitive display 
of spreading activation among concepts.


The world's first publicly available True AI 
achieves meandering chains of thought by 
detouring away from incomplete ideas 
lacking knowledge-base data and by asking 
questions of the human user when the AI is 
unable to complete a sentence of thought. 

The original MindForth program has spawned 
http://AIMind-I.com as the first offspring 
in the evolution of artificial intelligence.


ATM/Mentifex


Okay, now you've got my attention.

Arthur:  what has it achieved with its thinking?

Can you show an example of its best cogitations?

If it is just producing meandering chains of thought then this is not 
AI, because random chains of thought are trivially easy to produce (it 
was done already in the 1960s).




Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide]

2008-01-26 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
No computer is going to start writing and debugging software faster and 
more accurately than we can UNLESS we design it to do so, and during the 
design process we will have ample opportunity to ensure that the machine 
will never be able to pose a danger of any kind.


Perhaps, but the problem is like trying to design a safe gun.


It is 100% NOT like trying to design a safe gun.  There is no 
resemblance whatsoever to that problem.



Maybe you can
program it with a moral code, so it won't write malicious code.  But the two
sides of the security problem require almost identical skills.  Suppose you
ask the AGI to examine some operating system or server software to look for
security flaws.  Is it supposed to guess whether you want to fix the flaws or
write a virus?


If it has a moral code (it does) then why on earth would it have to 
guess whether you want it to fix the flaws or write a virus?  By asking 
that question you are implicitly assuming that this AGI is not an AGI 
at all, but something so incredibly stupid that it cannot tell the 
difference between these two ... so if you make that assumption we have 
nothing to worry about, because it would be too stupid to be a general 
intelligence and therefore not even potentially dangerous.





Suppose you ask it to write a virus for the legitimate purpose of testing the
security of your system.  It downloads copies of popular software from the
internet and analyzes it for vulnerabilities, finding several.  As instructed,
it writes a virus, a modified copy of itself running on the infected system. 
Due to a bug, it continues spreading.  Oops...  Hard takeoff.


Again, you implicitly assume that this AGI is so stupid that it makes 
a copy of itself and inserts it into a virus when asked to make an 
experimental virus.  Any system that stupid does not have a general 
intelligence, and will never cause a hard takeoff because an absolute 
prerequisite for hard takeoff is that the system have the wits to know 
about these kinds of no-brainer [:-)] questions.


This kind of Stupid-AGI scenario comes up all the time: the SL4 list 
was absolutely full of them when last I was wasting my time over there, 
and when I last encountered anyone from SIAI they were still spouting 
them all the time without the slightest understanding of the incoherence 
of what they were saying.






Richard Loosemore



Re: [agi] MindForth achieves True AI functionality

2008-01-26 Thread A. T. Murray
In response to Richard Loosemore below,

A. T. Murray wrote:
 MindForth free open AI source code on-line at
 http://mentifex.virtualentity.com/mind4th.html 
 has become a True AI-Complete thinking mind 
 after years of tweaking and debugging.
 
 On 22 January 2008 the AI Forthmind began to 
 think effortlessly and almost flawlessly in 
 loops of meandering chains of thought. 
 
 Users are invited to download the AI Mind 
 and decide for themselves if what they see 
 is machine intelligence and thinking. The 
 http://mentifex.virtualentity.com/m4thuser.html 
 User Manual explains all the steps involved. 
 
 MindForth is the Model-T of True AI software, 
 roughly comparable to the state of the art in 
 automobiles one hundred years ago in 1908. 
 As such, the AI in Forth will not blow you 
 away with any advanced features, but will 
 subtly show you the most primitive display 
 of spreading activation among concepts.
 
 The world's first publicly available True AI 
 achieves meandering chains of thought by 
 detouring away from incomplete ideas 
 lacking knowledge-base data and by asking 
 questions of the human user when the AI is 
 unable to complete a sentence of thought. 
 
 The original MindForth program has spawned 
 http://AIMind-I.com as the first offspring 
 in the evolution of artificial intelligence.
 
 ATM/Mentifex

 Okay, now you've got my attention.

 Arthur:  what has it achieved with its thinking?

Up until Tues.22.JAN.2008 (four days ago) the AI
would always encounter some bug that derailed its
thinking. But starting three years ago in March
of 2005 I coded extensive diagnostic routines
into MindForth. Gradually it stopped spouting
gibberish (a frequent complaint against Mentifex AI),
but still countless bugs kept popping up that I
had to deal with one after another.

Suddenly on 22.JAN.2008 there were no show-stopper
bugs anymore -- just glitches in need of improvement. 

 Can you show an example of its best cogitations?

You can tell it a multitude of subject-verb-object (SVO) 
facts, and then you can query it in various ways.

Now, the following is a very new development.

Six years ago, when I was gearing up to publish AI4U,
my goal for the AI output was (then) that it should
parrot back each sentence of input, because, after
all, each SVO concept had been activated by the
mere fact of input. A few weeks ago, that goal changed
to what the AI does now -- it briefly activates only
one concept at a time, of either input or reentrant
output. So now if you enter cats eat fish, the
AI briefly activates each concept, coming to rest
on the FISH concept (which is new to the AI).

Immediately the SVO mind-module starts to generate
a sentence about the active FISH concept, but the
verbPhrase module fails to find a sufficiently
active verb. The detour variable then detours
the thought process all the way up the Chomskyan
syntactic superstructure to the SVO module, or the 
English module even higher, or maybe to the Think
module higher still (I don't remember without
inspecting the code), where the detour-flag calls 
the whatAuxSDO (what-do-Subjects-do) module to
ask the human user a question about FISH.

As the AI stands right now (since 24.JAN.2008),
the output will look like

FISH  WHAT DO FISH DO

If the human user (or a person in the job category of attendant)
answers the question, then the AI knows one more fact, 
and continues the dialogue with the human user.
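
In rough outline, the activation-and-detour flow works like this.
Here is an illustrative Python sketch, not the actual Forth code;
only the module names verbPhrase and whatAuxSDO come from MindForth,
and the data structures are simplified assumptions:

# Illustrative Python sketch, not the actual Forth code.  Only the
# names verbPhrase and whatAuxSDO come from MindForth; the data
# structures here are simplified assumptions.

activation = {}   # concept -> activation level
kb = []           # known (subject, verb, object) facts

def hear(sentence):
    # Briefly activate each input concept, one at a time; activation
    # comes to rest on the last (newest) concept, e.g. FISH.
    for word in sentence.upper().split():
        activation.clear()
        activation[word] = 1.0

def verbPhrase(subject):
    # Look for a sufficiently active verb for the subject.
    for s, v, o in kb:
        if s == subject:
            return v, o
    return None   # nothing found: this sets the detour flag

def whatAuxSDO(subject):
    # Detour: ask the human user what the subject does.
    print("WHAT DO %s DO" % subject)

def think():
    subject = max(activation, key=activation.get)
    print(subject, end="  ")
    if verbPhrase(subject) is None:
        whatAuxSDO(subject)

hear("cats eat fish")
think()   # prints: FISH  WHAT DO FISH DO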

But (and this is even more interesting) if the human
user just sits there to watch the AI think and does
not answer the question, the AI repeats the question
a few times. Then, in a development I coded also
on Tues.22.JAN.2008 because the AI display was so
bland and boring, a thotnum (thought-number) 
system detects the repetitious thought inherent
in the question, and diverts the train of thought
to the EGO self-resuscitation module, which 
activates the oldest post-vault concept in 
the self-rejuvenating memory of the AI Mind.

Right now the AI just blurts out the name of 
the oldest concept (say, CATS) and I need to
code in some extra activation to get a sentence
going.
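
The repetition check is roughly the following (again an illustrative
Python sketch; only thotnum and EGO are MindForth names, and the
counters and memory list are assumptions for illustration):

# Sketch of the thotnum repetition check.  Only thotnum and EGO are
# MindForth names; everything else is an illustrative assumption.

memory = ["CATS"]   # oldest post-vault concept first
activation = {}

last_thotnum = None
repeats = 0

def noteThought(thotnum):
    # Count consecutive repetitions of the same numbered thought.
    global last_thotnum, repeats
    repeats = repeats + 1 if thotnum == last_thotnum else 0
    last_thotnum = thotnum
    if repeats >= 3:   # same question asked a few times, unanswered
        EGO()

def EGO():
    # Self-resuscitation: activate the oldest post-vault concept.
    oldest = memory[0]
    activation.clear()
    activation[oldest] = 1.0
    print(oldest)   # right now the AI just blurts out the name

for _ in range(4):
    noteThought(thotnum=7)   # repeating "WHAT DO FISH DO" unanswered
# after the fourth repetition, EGO() prints the oldest concept: CATS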

But if you converse with the AI using known words
or if you answer all queries about unknown words,
you and the AI gradually fill its knowledge base
with SVO-type facts -- not a big ontology like
in the Cyc that Stephen Reed worked on, but still
a large domain of subject-verb-object possibilities.

You may query the KB in several ways, e.g.:

what do cats eat 

cats

cats eat

and so forth, entered as a line of user input.
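
The query side, in the same illustrative sketch style (the matching
logic here is a simplifying assumption; only the query forms above
reflect the real behavior):

# Illustrative sketch of querying the SVO knowledge base.

kb = [("CATS", "EAT", "FISH"), ("DOGS", "CHASE", "CATS")]

def query(line):
    # Drop question words, then report every fact that mentions
    # all the remaining words.
    words = [w for w in line.upper().split()
             if w not in ("WHAT", "DO", "DOES")]
    for s, v, o in kb:
        if all(w in (s, v, o) for w in words):
            print(s, v, o)

query("what do cats eat")   # CATS EAT FISH
query("cats")               # every fact mentioning CATS
query("cats eat")           # CATS EAT FISH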


 If it is just producing meandering chains of thought 
 then this is not AI, because random chains of thought 
 are trivially easy to produce (it was done already in 
 the 1960s).

The difference here in January of 2008 is that the
words forming the thoughts are conceptualized, 
and thought in MindForth occurs only by
spreading activation. Eventually there
will be fancier forms of thought, such as
prepositional phrases, but in this
Model-T of 

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide]

2008-01-26 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  Maybe you can
  program it with a moral code, so it won't write malicious code.  But the two
  sides of the security problem require almost identical skills.  Suppose you
  ask the AGI to examine some operating system or server software to look for
  security flaws.  Is it supposed to guess whether you want to fix the flaws or
  write a virus?
 
 If it has a moral code (it does) then why on earth would it have to 
 guess whether you want it to fix the flaws or write a virus?  By asking 
 that question you are implicitly assuming that this AGI is not an AGI 
 at all, but something so incredibly stupid that it cannot tell the 
 difference between these two ... so if you make that assumption we have 
 nothing to worry about, because it would be too stupid to be a general 
 intelligence and therefore not even potentially dangerous.

If I hired you as a security analyst to find flaws in a piece of software, and
I didn't tell you what I was going to do with the information, how would you
know?

  Suppose you ask it to write a virus for the legitimate purpose of testing the
  security of your system.  It downloads copies of popular software from the
  internet and analyzes it for vulnerabilities, finding several.  As instructed,
  it writes a virus, a modified copy of itself running on the infected system.
  Due to a bug, it continues spreading.  Oops...  Hard takeoff.
 
 Again, you implicitly assume that this AGI is so stupid that it makes 
 a copy of itself and inserts it into a virus when asked to make an 
 experimental virus.  Any system that stupid does not have a general 
 intelligence, and will never cause a hard takeoff because an absolute 
 prerequisite for hard takeoff is that the system have the wits to know 
 about these kinds of no-brainer [:-)] questions.

Mistakes happen. http://en.wikipedia.org/wiki/Morris_worm

If you perform 1000 security tests and 999 of them shut down when they are
supposed to, then you have still failed.

Software correctness is undecidable -- the halting problem reduces to it. 
Computer security isn't going to be magically solved by AGI.  The problem will
actually get worse, because complex systems are harder to get right.
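
The reduction fits in a few lines (a Python sketch; is_correct is the
hypothetical perfect checker, and the sketch is exactly why no such
total checker can exist):

# Sketch of the reduction: a perfect correctness checker would
# decide the halting problem.

def is_correct(program_source, spec):
    # Hypothetical verifier: True iff the program meets the spec
    # on all inputs.  Undecidable in general, as shown below.
    raise NotImplementedError

def halts(program_source):
    # The wrapper runs the program under test, then violates the
    # spec, so the wrapper is incorrect exactly when the program
    # halts.  A total is_correct would therefore decide halting.
    wrapper = program_source + "\nviolate_spec()\n"
    return not is_correct(wrapper, spec="never calls violate_spec()")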


-- Matt Mahoney, [EMAIL PROTECTED]
