RE: [agi] Teaching AI's to self-modify

2004-08-16 Thread Ophir Shai



Hello Ben,

Is there a prototype of Novamente I can download and run on a PC, for NLP purposes?

Thanks,
Shai.

  
   -----Original Message-----
   From: Ben Goertzel [mailto:[EMAIL PROTECTED]]
   Sent: Saturday, July 31, 2004 7:05 PM
   To: [EMAIL PROTECTED]
   Subject: Re: [agi] Teaching AI's to self-modify
  
  Hi Dennis,
  
  Sorry for the long delay in reply, I've been on vacation and am now 
  plowing through a big pile of backed-up emails
  
  To answer your three questions...
  
   1) We have an NLP framework that uses a variation of the Carnegie Mellon
   "link parser" together with a bunch of special "semantic algorithms" for
   mapping syntactic links into semantic ones, and some probabilistic-inference-based
   algorithms for semantic disambiguation & reference resolution.
   Given a sentence, this framework makes a number of prioritized guesses
   regarding the correct interpretation of the sentence. The user gets to
   view these guesses and correct them via a UI; for instance, they can select
   from several possible parses, they can change the selected meaning of a word
   to a different one (choosing from a menu of known meanings or defining a new
   one), etc. The overall process is much slower than just freely typing in
   natural language, but much faster than entering knowledge using an
   expert-system, formal-logic-based approach.
  
   This will go into commercial use for one of our customers as of early
   September.
  
   2) Of course, I agree that ambiguity can never be fully eliminated
   from natural language. Novamente internally can deal with
   ambiguity.
  
   3) As for an example of a program generated by Sasha: in
   language processing, an example would be an algorithm for reference resolution
   --- normally one would code such a thing in C++ and stick it in a Novamente
   MindAgent, but if one codes it in Sasha then Novamente can not only execute it
   but also study and modify it, because it will be represented in the form of
   Novamente nodes and links. In a vision context, an example would be an
   algorithm for edge detection.
  
   This is less far along than INLINK. Right now we are using Sasha to
   generate programs doing simple math stuff and list manipulations
   etc. Over the next couple months we will hook it into the rest of
   Novamente and start using it for applications like the ones described
   above. The point is, it's a way of getting procedural knowledge into the
   system in a precise yet learnable/adaptable way, whereas INLINK is a way
   of getting declarative knowledge into the system.
  
  Not very much like the process of teaching a human infant, of 
  course. I think that kind of experiential interactive learning is going 
  to be critical to teaching Novamente, BUT, I think it makes sense to augment 
  it with these kinds of tricks like INLINK and Sasha, as a way of overcoming 
  the deficit Novamente has, compared to humans, in terms of its lack of an 
  evolved cognitive endowment.
  
  I plan to write a document on this stuff during the next few days (or 
  perhaps the next week if things go slowly), and I'll post it to the list when 
  I do...
  --Ben G
  
   Dennis Gorelik [EMAIL PROTECTED] wrote:

   Ben,

   1) Could you describe the architecture of your INLINK interactive
   framework? How is it going to handle natural language?

   2) I doubt that it's possible to communicate in natural language
   completely unambiguously. There always will be some uncertainty.
   The intelligent system itself will have to decide how to interpret
   the incoming message.

   3) Could you give an example of a program which will be generated by
   the Sasha programming framework?

   Sunday, July 4, 2004, 8:58:01 AM, you wrote:

   BG> We're developing two powerful methods for communicating with Novamente:

   BG> 1) the INLINK interactive NL framework, which allows natural
   BG> language to be communicated to Novamente correctly and
   BG> unambiguously

   BG> 2) the Sasha programming framework, which allows the easy
   BG> construction of software programs that manipulate Novamente nodes
   BG> and links [and, rapid executable versions of these programs will
   BG> be produced via supercompilation, www.supercompilers.com]. Right
   BG> now, Novamente MindAgents, the processes that guide Novamente
   BG> cognition, are coded as C++ objects which are opaque to Novamente
   BG> cognition; but with the advent of Sasha, we'll be able to code
   BG> MindAgents as Novamente nodes and links which can in principle be
   BG> modified/improved/replaced by Novamente cognition.
  
  
  To unsubscribe, change your address, or temporarily deactivate your 
  subscription, please go to 
  http://v2.listbox.com/member/[EMAIL PROTECTED]

RE: [agi] Teaching AI's to self-modify

2004-08-16 Thread Ben Goertzel




Shai,

I'm sorry, but we don't currently provide the Novamente system except to members of our
R&D team, or to firms that have licensed Novamente-based products to use for
some specific purpose (such as natural language processing). This may
change in the future, but probably not in the near future.

As an aside, Novamente doesn't normally run on a single PC; it runs on a network of
PCs, each of which must be running Linux (latest versions of SUSE or
RedHat). Our simple test configuration for language processing using
Novamente uses 3 machines, with a separate instance of the Novamente core
software process on each of them.

-- Ben

   -----Original Message-----
   From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ophir Shai
   Sent: Monday, August 16, 2004 7:31 AM
   To: [EMAIL PROTECTED]
   Subject: RE: [agi] Teaching AI's to self-modify
   Hello Ben,

   Is there a prototype of Novamente I can download and run on a PC, for NLP purposes?

   Thanks,
   Shai.
  


Re: [agi] Teaching AI's to self-modify

2004-07-31 Thread Ben Goertzel

Hi Dennis,

Sorry for the long delay in reply, I've been on vacation and am now plowing through a big pile of backed-up emails

To answer your three questions...

1) We have an NLP framework that uses a variation of the Carnegie Mellon "link parser" together with a bunch of special "semantic algorithms" for mapping syntactic links into semantic ones, and some probabilistic-inference-based algorithms for semantic disambiguation & reference resolution. Given a sentence, this framework makes a number of prioritized guesses regarding the correct interpretation of the sentence. The user gets to view these guesses and correct them via a UI; for instance, they can select from several possible parses, they can change the selected meaning of a word to a different one (choosing from a menu of known meanings or defining a new one), etc. The overall process is much slower than just freely typing in natural language, but much faster than entering knowledge using an expert-system, formal-logic-based approach.
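The prioritized-guess-plus-correction loop described above can be sketched roughly as follows. All names and the probability scheme here are illustrative assumptions, not the actual INLINK/Novamente API:

```python
# Hypothetical sketch of the interactive disambiguation loop: the parser
# proposes ranked interpretations, and a (simulated) user either accepts
# the top guess or selects a correction via the UI.

def rank_interpretations(candidates):
    """Sort candidate (interpretation, probability) pairs, best first."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)

def disambiguate(candidates, choose):
    """Present ranked guesses; `choose` stands in for the UI selection."""
    ranked = rank_interpretations(candidates)
    return choose(ranked)            # user may accept ranked[0] or override

# Simulated session: the user overrides the system's top guess.
candidates = [("bank = river edge", 0.3), ("bank = financial institution", 0.7)]
result = disambiguate(candidates, choose=lambda ranked: ranked[1])
```

The point of the sketch is the division of labor: the system does the probabilistic ranking, and the human does only the cheap final selection, which is why the process is faster than hand-entering formal logic.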

This will go into commercial use for one of our customers as of early September.

2) Of course, I agree that ambiguity can never be fully eliminated from natural language. Novamente internally can deal with ambiguity.

3) As for an example of a program generated by Sasha: in language processing, an example would be an algorithm for reference resolution --- normally one would code such a thing in C++ and stick it in a Novamente MindAgent, but if one codes it in Sasha then Novamente can not only execute it but also study and modify it, because it will be represented in the form of Novamente nodes and links. In a vision context, an example would be an algorithm for edge detection.
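The key point here -- that a procedure stored as nodes and links can be both executed and inspected/modified by the same system, unlike an opaque compiled C++ object -- can be shown with a toy interpreter. This is a hypothetical sketch; Sasha's actual representation is surely richer:

```python
# A procedure represented as a data structure (nested (op, arg, arg)
# tuples standing in for nodes and links) can be run AND rewritten.

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def run(node):
    """Interpret the procedure graph."""
    if not isinstance(node, tuple):
        return node                       # leaf: a literal value
    op, left, right = node
    return OPS[op](run(left), run(right))

def swap_op(node, old, new):
    """'Self-modification': rewrite every `old` op node into `new`."""
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    op = new if op == old else op
    return (op, swap_op(left, old, new), swap_op(right, old, new))

proc = ("add", 2, ("mul", 3, 4))          # represents 2 + 3*4
assert run(proc) == 14
mutated = swap_op(proc, "mul", "add")     # the system rewrites its own procedure
assert run(mutated) == 9                  # now 2 + (3+4)
```

A C++ MindAgent is a black box to the running system; a procedure held as inspectable data, as above, is something the system can study, mutate, and re-execute.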

This is less far along than INLINK. Right now we are using Sasha to generate programs doing simple math stuff and list manipulations etc. Over the next couple months we will hook it into the rest of Novamente and start using it for applications like the ones described above. The point is, it's a way of getting procedural knowledge into the system in a precise yet learnable/adaptable way, whereas INLINK is a way of getting declarative knowledge into the system.

Not very much like the process of teaching a human infant, of course. I think that kind of experiential interactive learning is going to be critical to teaching Novamente, BUT, I think it makes sense to augment it with these kinds of tricks like INLINK and Sasha, as a way of overcoming the deficit Novamente has, compared to humans, in terms of its lack of an evolved cognitive endowment.

I plan to write a document on this stuff during the next few days (or perhaps the next week if things go slowly), and I'll post it to the list when I do...
--Ben G

Dennis Gorelik [EMAIL PROTECTED] wrote:
Ben,

1) Could you describe the architecture of your INLINK interactive framework? How is it going to handle natural language?

2) I doubt that it's possible to communicate in natural language completely unambiguously. There always will be some uncertainty. The intelligent system itself will have to decide how to interpret the incoming message.

3) Could you give an example of a program which will be generated by the Sasha programming framework?

Sunday, July 4, 2004, 8:58:01 AM, you wrote:

BG> We're developing two powerful methods for communicating with Novamente:

BG> 1) the INLINK interactive NL framework, which allows natural
BG> language to be communicated to Novamente correctly and
BG> unambiguously

BG> 2) the Sasha programming framework, which allows the easy
BG> construction of software programs that manipulate Novamente nodes
BG> and links [and, rapid executable versions of these programs will
BG> be produced via supercompilation, www.supercompilers.com]. Right
BG> now, Novamente MindAgents, the processes that guide Novamente
BG> cognition, are coded as C++ objects which are opaque to Novamente
BG> cognition; but with the advent of Sasha, we'll be able to code
BG> MindAgents as Novamente nodes and links which can in principle be
BG> modified/improved/replaced by Novamente cognition.





RE: [agi] Teaching AI's to self-modify

2004-07-05 Thread Ben Goertzel




The idea is to maintain two versions of each Novamente-internal procedure:

-- a version that's amenable to learning (and generally highly compact), but not necessarily rapid to execute
-- a version that's rapid to execute (produced by supercompiling the former version)

As learning produces new procedures, they may be supercompiled.

At present supercompiling a Novamente procedure is slow and takes up to a few seconds or even a few minutes for a large, complex procedure. However, the supercompiler itself is still in a preliminary version and could probably be optimized by an order of magnitude. In the long run there is also the concept of "supercompiling the supercompiler" ;-)

This is research that we're just now starting to play around with -- we got the first results supercompiling very simple Novamente procedures just last week. If all goes well, more serious work in this direction will commence in the fall.

Of course, this aspect is "just computer science" -- albeit very difficult and advanced computer science. It's the learning of the procedures that's the really subtle part... which is the topic that has taken far more of my attention. Fortunately the Java-Supercompilers team (Andrei and Arkady Klimov) seem to have the supercompilation aspect well in hand...

But though it's "just CS", it will be very useful for Novamente-learned procedures to be faster than, rather than a couple orders of magnitude slower than, Novamente-embedded procedures coded directly in C++. This is not the hardest part of having Novamente learn to self-improve, but it's a necessary component.

-- Ben G



   -----Original Message-----
   From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of [EMAIL PROTECTED]
   Sent: Sunday, July 04, 2004 11:11 PM
   To: [EMAIL PROTECTED]
   Subject: Re: [agi] Teaching AI's to self-modify

   Ben,

   Aren't optimizations by supercompilation going to make the future changes
   of that code more (maybe a lot more) difficult?

   Sincerely,
   Jiri

   In a message dated 7/4/2004 9:58:41 AM Eastern Standard Time, [EMAIL PROTECTED] writes:
   http://www.supercompilers.com/
  
  
  




Re: [agi] Teaching AI's to self-modify

2004-07-05 Thread John Pritchard
Hi Ben,
If the AI knows the machine as its natural context (stacks, registers, 
etc., ie, world), then the supercompiled code should be the only code 
it can comprehend and self modify.  The code produced by the C++ 
compiler would be orders of magnitude more complex.  Imagine an article 
in your favorite magazine where every noun and verb were indirect 
references to paragraphs in other articles of the magazine, and you 
could only approach comprehension of the article by route of approaching 
the comprehension of the intersection of all of the articles --- after 
reading and rereading every article many times --- eg, Foucault's 
exemplary chapter from 'Les Mots et les choses', Las Meninas.  This is the situation 
faced by a self aware (as in the first sentence) AI written in C++.  
If the product of supercompilation is akin to the most minimal 
implementation in assembler (machine code), then that's the only thing 
the AI will be able to understand --- unless you're thinking of the AI 
as an expert system programmed with knowledge of the product of the C++ 
compiler.

That being said, it's difficult to imagine.  Rather, consider the self 
of the AI, as in itself, is an application in nodes and links and self 
modification in terms of nodes and links rather than machine code 
(differentiating terms).  The remainder follows.  If the application is 
the knowledge of the machine, etc..  Don't drink, and think about this one!

Regards from NJ,
John
 



RE: [agi] Teaching AI's to self-modify

2004-07-05 Thread Ben Goertzel

Hi John,

Initially Novamente will not know anything about its underlying hardware
architecture.

Rather, it will learn procedures that are represented in a fairly abstract
mathematical form (combinatory logic) and that manipulate Novamente nodes
and links as primitives alongside ints, floats and strings.

In a later phase of teaching, we will teach Novamente about computer
hardware and its impact on algorithms, etc., but we can get a long way
before this is necessary -- perhaps even to the level of superhuman
superintelligence ;-) ... math is pretty powerful!
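The "fairly abstract mathematical form (combinatory logic)" Ben mentions can be illustrated with the classic S and K combinators. This toy reducer is purely illustrative -- it is not Novamente's actual procedure engine -- but it shows how procedures become plain terms that a program can rewrite:

```python
# Combinatory logic in miniature. Atoms are strings ("S", "K", variables);
# application is a 2-tuple (function, argument).
# Rules: K x y -> x ;  S f g x -> (f x) (g x).

def reduce_step(t):
    """One leftmost reduction step; returns (new_term, changed?)."""
    if isinstance(t, tuple):
        f, a = t
        if isinstance(f, tuple) and f[0] == "K":
            return f[1], True                            # K x y -> x
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == "S":
            return ((f[0][1], a), (f[1], a)), True       # S f g x -> (f x)(g x)
        nf, changed = reduce_step(f)
        if changed:
            return (nf, a), True
        na, changed = reduce_step(a)
        return ((f, na) if changed else t), changed
    return t, False

def normalize(t, limit=1000):
    """Reduce until no rule applies (or the step limit is hit)."""
    for _ in range(limit):
        t, changed = reduce_step(t)
        if not changed:
            break
    return t

I_term = (("S", "K"), "K")        # I = S K K, the identity combinator
assert normalize((I_term, "v")) == "v"
```

Because a term like `I_term` is just nested tuples, a learning process can manipulate it with ordinary data operations -- which is exactly the property that makes such representations "learnable" where compiled C++ is not.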

-- Ben


Re: [agi] Teaching AI's to self-modify

2004-07-05 Thread deering



Ben, I hope you are going to keep a human in the loop.

Human in the loop scenario:

The alpha Novamente makes a suggestion about some change to its software.
The human implements the change on the beta Novamente running on a separate machine, and tests it.
If it seems to be an improvement, it is incorporated into the alpha Novamente.


Human not in the loop scenario:

The Novamente looks at its code.
The Novamente makes changes to its code, and reboots itself.

The Novamente looks at its code.
The Novamente makes changes to its code, and reboots itself.

The Novamente looks at its code.
The Novamente makes changes to its code, and reboots itself.

The humans wonder what the hell is going on.
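The human-in-the-loop scenario above amounts to a test-then-promote gate. A toy illustration with invented names, where a "version" is just a function and the benchmark stands in for the human's evaluation:

```python
# Candidate self-modifications are trialled on a separate "beta" copy and
# promoted to "alpha" only if the benchmark judges them an improvement.

def human_in_the_loop(alpha, candidates, benchmark):
    """Promote a candidate version only when it beats the current alpha."""
    best_score = benchmark(alpha)
    for beta in candidates:            # each tested on a separate machine
        score = benchmark(beta)        # the human runs/inspects this test
        if score > best_score:
            alpha, best_score = beta, score   # incorporate the improvement
    return alpha

# Toy benchmark: score a version by its output on a fixed input.
benchmark = lambda version: version(7)
alpha = lambda x: x                    # current system scores 7
candidates = [lambda x: x - 1, lambda x: x * 2]
best = human_in_the_loop(alpha, candidates, benchmark)
```

In the "not in the loop" scenario there is no `benchmark` gate before promotion, which is exactly the point of the warning.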


Mike Deering.






RE: [agi] Teaching AI's to self-modify

2004-07-05 Thread Ben Goertzel




Hi,

Because we use a lot of evolutionary learning methods, it will work more like:

A whole population of Novamentes (10 or so for starters, later perhaps many more) repeatedly try out new MindAgents (cognitive-control objects) on some test cognitive problems, and we see how well each one does. Another Novamente, the controller, studies which of the new MindAgents work well, and mines patterns among these, creating new MindAgents to try out...

So there is no human in the learning loop.

Furthermore, for a human to understand the intricate details of a learned procedure (e.g. an automatically learned MindAgent) may be very hard... Just as understanding the details of our own adaptively learned neural wiring is very hard.
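The evolutionary loop described here can be caricatured in a few lines. This is an illustrative sketch only: real MindAgents are programs rather than numbers, and the "controller" mines patterns rather than doing simple truncation selection:

```python
# Toy evolutionary loop: a population of candidate "MindAgents" is scored
# on a test problem, and new candidates are bred from the best ones.

import random

def evolve(fitness, population, generations=100, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(scored) // 2]     # controller keeps the best half
        children = [p + rng.gauss(0, 0.2) for p in parents]  # mutated variants
        population = parents + children          # parents survive unchanged
    return max(population, key=fitness)

# Candidates here are just numbers; fitness peaks at 3.0.
best = evolve(lambda x: -(x - 3.0) ** 2, population=[0.0, 1.0, 5.0, 8.0])
```

Because the best individual always survives into the next generation, the top fitness never decreases; over many generations the population typically drifts toward the optimum. Note that nothing in this loop produces human-readable explanations of why a surviving candidate works -- which is the opacity point made above.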

-- Ben

  






Re: [agi] Teaching AI's to self-modify

2004-07-04 Thread Dennis Gorelik
Ben,

1) Could you describe the architecture of your INLINK
interactive framework?
How is it going to handle natural language?

2) I doubt that it's possible to communicate in natural language
completely unambiguously. There always will be some uncertainty.
Intelligent system itself will have to decide how to interpret
incoming message.

3) Could you give an example of a program which will be generated by
the Sasha programming framework?

Sunday, July 4, 2004, 8:58:01 AM, you wrote:

BG> We're developing two powerful methods for communicating with Novamente:

BG> 1) the INLINK interactive NL framework, which allows natural
BG> language to be communicated to Novamente correctly and
BG> unambiguously

BG> 2) the Sasha programming framework, which allows the easy
BG> construction of software programs that manipulate Novamente nodes
BG> and links [and, rapid executable versions of these programs will
BG> be produced via supercompilation, www.supercompilers.com].  Right
BG> now, Novamente MindAgents, the processes that guide Novamente
BG> cognition, are coded as C++ objects which are opaque to Novamente
BG> cognition; but with the advent of Sasha, we'll be able to code
BG> MindAgents as Novamente nodes and links which can in principle be
BG> modified/improved/replaced by Novamente cognition.

