Re: [agi] Speculative thoughts on future AGI's

2004-02-05 Thread John Pritchard

http://www.goertzel.org/dynapsyc/2004/AllSeeingAI.htm
 

Hi Ben,

Many comments...  Like your article, I'll go ahead and presume a variety 
of "General" or "Advanced" AI. Mine differs in being less defined than 
yours (as always ;-), and in being rooted in the belief that an 
appropriate (self-operational, etc.) system based on a kind of 
heterarchical syntax tree (e.g., a distributed database, a global brain) 
is the only thing that can yield GAI, and that the rest (e.g., morality) 
falls out of this.

My comments relate to "context", as in "context-sensitive languages". 

In terms of morality and friendliness and their origins: what we can 
call the Context of a natural life is that it is, in an individual, a 
"life form" -- meaning it is fragile (subject to its environment) and 
dependent on that environment, including other life forms.  It is this 
Context that first forms the basis of semantic meanings, and upon which 
higher-order meanings are typically built.  (I would not hold up 
mathematics as a certain commonality -- although the general concept of 
abstraction being common is undoubtedly valuable -- because I am certain 
in my belief that an artificial mathematics would be completely 
different from ours.)

Likewise, an artificial life will have its own context, in an identical 
way, which implies that it will never comprehend most statements from 
natural life, and that natural life will never comprehend most 
statements from such an artificial life. 

Building GAI with more intensive methods requires Context.  A point in 
the knowledge-awareness space can be a fuzzy intersection of multiple 
domains, i.e., a context.
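
To make that concrete, here is a toy sketch in C++ of a fuzzy 
intersection of domain memberships (my own construction -- the domains 
and degrees are made up for illustration):

// Toy illustration: a "point" in a knowledge-awareness space carries a
// fuzzy membership degree in each domain; its context is the fuzzy
// intersection (here, the standard minimum) of those degrees.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>

using Membership = std::map<std::string, double>; // domain -> degree in [0,1]

// Min-based fuzzy intersection over the domains common to a and b.
Membership fuzzyIntersect(const Membership& a, const Membership& b) {
    Membership out;
    for (const auto& [domain, degA] : a) {
        auto it = b.find(domain);
        if (it != b.end())
            out[domain] = std::min(degA, it->second);
    }
    return out;
}

int main() {
    Membership alife   = {{"mathematics", 0.9}, {"embodiment", 0.2}};
    Membership natural = {{"mathematics", 0.3}, {"embodiment", 0.8}};
    for (const auto& [domain, deg] : fuzzyIntersect(alife, natural))
        std::cout << domain << ": " << deg << '\n'; // small overlap = thin shared context
}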


Re: [agi] Teaching AI's to self-modify

2004-07-05 Thread John Pritchard
Hi Ben,
If the AI "knows" the machine as its natural context (stacks, registers, 
etc., ie, "world"), then the supercompiled code should be the only code 
it can comprehend and self modify.  The code produced by the C++ 
compiler would be orders of magnitude more complex.  Imagine an article 
in your favorite magazine where every noun and verb were indirect 
references to paragraphs in other articles of the magazine, and you 
could only aproach comprehension of the article by route of approaching 
the comprehension of the intersection of all of the articles --- after 
reading and rereading every article many times --- eg, Foucault's 
exemplary chapter from 'Les Mots', "Las Meninas".  This is the situation 
faced by a "self aware" (as in the first sentence) AI written in C++.  
If the product of supercompilation is akin to the most minimal 
implementation in assembler (machine code), then that's the only thing 
the AI will be able to understand --- unless you're thinking of the AI 
as an expert system programmed with knowledge of the product of the C++ 
compiler.
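
To picture what supercompilation buys here, a toy before/after in C++ 
(my own partial-evaluation-style illustration, not actual supercompiler 
output):

// General procedure: parameterized and interpreted at run time.
int power(int x, int n) {
    int r = 1;
    for (int i = 0; i < n; ++i) r *= x;
    return r;
}

// Residual procedure after specializing n = 3: the loop and counter have
// been evaluated away, leaving something close to the "most minimal
// implementation" in the sense above.
int power3(int x) { return x * x * x; }

int main() { return power(2, 3) == power3(2) ? 0 : 1; }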

That being said, it's difficult to imagine.  Rather, consider that the 
"self" of the AI, as in "itself", is an application in nodes and links, 
and that self-modification happens in terms of nodes and links rather 
than machine code (differentiating the terms).  The remainder follows: 
the application is the knowledge of the machine, etc.  Don't drink, and 
think about this one!
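
A rough C++ sketch of what I mean by self-modification at the 
node-and-link level (the types here are hypothetical, not Novamente's 
actual ones):

#include <string>
#include <vector>

struct Node { std::string name; };
struct Link { int from, to; std::string type; };

// The AI's "self" as a graph it can inspect and edit directly.
struct SelfGraph {
    std::vector<Node> nodes;
    std::vector<Link> links;

    int addNode(const std::string& name) {
        nodes.push_back({name});
        return static_cast<int>(nodes.size()) - 1;
    }
    void addLink(int from, int to, const std::string& type) {
        links.push_back({from, to, type});
    }
};

int main() {
    SelfGraph self;
    int a = self.addNode("perceive");
    int b = self.addNode("act");
    self.addLink(a, b, "triggers"); // self-modification: rewiring, not machine code
    return 0;
}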

Regards from NJ,
John
 
The idea is to maintain two versions of each Novamente-internal procedure:
 
-- a version that's amenable to learning (and generally highly 
compact), but not necessarily rapid to execute
-- a version that's rapid to execute (produced by supercompiling the 
former version)
 
As learning produces new procedures, they may be supercompiled. 
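 
A minimal sketch of the scheme in C++ (the names and the interpreter 
stand-in are assumptions for illustration, not the actual Novamente 
code):

#include <functional>
#include <optional>
#include <string>

struct DualProcedure {
    std::string learnableForm;                        // compact; what learning edits
    std::optional<std::function<int(int)>> fastForm;  // residual code, if supercompiled

    // Learning replaces the compact form and invalidates the fast one.
    void learn(std::string newForm) {
        learnableForm = std::move(newForm);
        fastForm.reset();
    }

    // Run the supercompiled version when available, else interpret.
    int run(int x, const std::function<int(const std::string&, int)>& interp) const {
        return fastForm ? (*fastForm)(x) : interp(learnableForm, x);
    }
};

int main() {
    DualProcedure p;
    p.learn("succ");                                   // opaque compact form
    auto interp = [](const std::string&, int x) { return x + 1; };
    int slow = p.run(5, interp);                       // interpreted path
    p.fastForm = [](int x) { return x + 1; };          // pretend supercompiler output
    int fast = p.run(5, interp);                       // fast path
    return slow == fast ? 0 : 1;
}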
 
At present, supercompiling a Novamente procedure is slow: it takes up 
to a few seconds, or even a few minutes for a large, complex 
procedure.  However, the supercompiler itself is still in a 
preliminary version and could probably be optimized by an order of 
magnitude.  In the long run there is also the concept of 
"supercompiling the supercompiler" ;-)
 
This is research that we're just now starting to play around with -- 
we got the first results supercompiling very simple Novamente 
procedures just last week.  If all goes well, more serious work in 
this direction will commence in the fall.
 
Of course, this aspect is "just computer science" -- albeit very 
difficult and advanced computer science.  It's the learning of the 
procedures that's the really subtle part... which is the topic that 
has taken far more of my attention.  Fortunately, the 
Java-Supercompilers team (Andrei and Arkady Klimov) seems to have the 
supercompilation aspect well in hand...
 
But though it's "just CS", it will be very useful for Novamente-learned 
procedures to be faster than, rather than a couple of orders of 
magnitude slower than, Novamente-embedded procedures coded directly in 
C++.  This is not the hardest part of having Novamente learn to 
self-improve, but it's a necessary component.
 
-- Ben G
 
 

-----Original Message-----
*From:* [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of [EMAIL PROTECTED]
*Sent:* Sunday, July 04, 2004 11:11 PM
*To:* [EMAIL PROTECTED]
*Subject:* Re: [agi] Teaching AI's to self-modify
Ben,
 
Aren't optimizations by supercompilation going to make future changes 
to that code more (maybe a lot more) difficult? 
 
Sincerely,
Jiri
 
In a message dated 7/4/2004 9:58:41 AM Eastern Standard Time,
[EMAIL PROTECTED] writes:

http://www.supercompilers.com/
