Hi,
 
As we're thinking about it now, Novamente Version 1 will not have feature 4.  It will involve Novamente learning a lot of small programs to use within its overall architecture, but not modifying its overall architecture.
 
Technically speaking: Novamente Version 1 will be C++ code, and within this C++ code, there will be small programs running in a language called Sasha.  Novamente will write its own Sasha code to run in its C++ "Mind OS", but will not modify its C++ source.
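In case it's unclear how that split works in practice, here's a minimal C++ sketch of the shape of it.  To be clear, SashaInterpreter, MindOS, installProgram, and runProgram are names I'm inventing for illustration -- this is not actual Novamente code, just the host/interpreter relationship under those assumptions.

#include <iostream>
#include <map>
#include <string>

// Stand-in for an embedded Sasha interpreter.  In the scheme above,
// only this layer ever executes code the system wrote for itself.
class SashaInterpreter {
public:
    // Parse and run one small Sasha program, returning its result.
    std::string run(const std::string& source) {
        // ... real interpretation would happen here ...
        return "result of: " + source;
    }
};

// The fixed C++ host (the "Mind OS").  Novamente can add, replace,
// and run Sasha programs at runtime, but cannot rewrite this C++ code.
class MindOS {
    SashaInterpreter interp_;
    std::map<std::string, std::string> programs_;  // name -> Sasha source
public:
    // Learned programs are installed as data, not compiled into the host,
    // so self-modification stops at this layer.
    void installProgram(const std::string& name, const std::string& source) {
        programs_[name] = source;
    }
    std::string runProgram(const std::string& name) {
        return interp_.run(programs_.at(name));
    }
};

int main() {
    MindOS os;
    // The system "writes" a new Sasha program for itself, then runs it.
    os.installProgram("greet", "(print \"hello\")");
    std::cout << os.runProgram("greet") << "\n";
}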
 
The plan for Novamente Version 2 is still sketchy, because we're focusing on Version 1, which still has a long way to go.  One possible path is to write a fast, scalable Sasha compiler and write the whole thing in Sasha.  Then the Sasha-programming skills of Novamente Version 1 will fairly easily translate into skills at deeper-level self-modification.  (Of course, the Sasha compiler will be in C++ ... so eventually you can't escape teaching Novamente C++ ;-).
 
How intelligent Novamente Version 1 will be -- well ... hmmm ... who knows!! 
 
Among the less sexy benefits of the Novamente Version 2 architecture, I really like the idea of having Novamente correct bugs in its own source code.  It is really hard to get a complex system like this truly bug-free ... an AGI should be a lot better at debugging very complex code than humans are! 
 
So the real answer to your question is, I'm not sure.  My hope, and my guess, is that Novamente Version 1 will -- with ample program learning and self-modification on the Sasha level -- be able to achieve levels of intelligence that seem huge by human standards. 
 
Of course, a lot of sci-fi scenarios suggest themselves: what happens when we have a super-smart Version 1 system, and it codes Version 2, finds a security hole in Linux, and installs Version 2 in place of half of itself, then all of itself ... etc. 
 
 
-- Ben G
 
 
 
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Philip Sutton
Sent: Sunday, February 16, 2003 10:55 AM
To: [EMAIL PROTECTED]
Subject: [agi] Novamente: how critical is self-improvement to getting human parity?

Hi Ben,

As far as I can work out, there are four things that could conceivably contribute to a Novamente reaching human intelligence parity:

1   the cleverness/power of the original architecture

2   the intensity, length and effectiveness of the Novamente learning
    after being booted up

3   the upgrading of the architecture/code base by humans as a result of
    learning by anyone (including Novamentes).

4   the self-improvement of the architecture/code base by the Novamente
    as a result of learning by anyone (humans and Novamentes).

To what extent is the learning system of the Novamente system (current, or planned for the first switched-on version) dependent on, or intertwined with, the capacity for a Novamente to alter its own fundamental architecture?

It seems to me that the risk of getting to the singularity (or even a dangerous earlier stage) without the human-plus-AGI community being adequately prepared and sufficiently ethically mature lies in the possibility of AGIs self-improving on an unhalted exponential trajectory.

If you could get Novamentes to human parity using strategies 1-3 only, then you might be able to control the process of moving beyond human parity sufficiently to make it safe.

If getting to human parity relies on strategy 4, then the safety strategy could well be very problematic - Eliezer's full Friendly AI program might need to be applied in full (i.e. developing the theory of friendliness first and then applying "Supersaturated Friendliness", as Eliezer calls it).

What do you reckon?

Cheers, Philip
