I totally concur with you here, Ben. It makes no sense to open source whatever you have of Novamente at this time, for the exact reason you give. Nor does the 'open is more secure' paradigm apply to this problem -- what we're facing is not a security vulnerability, but arguably the greatest problem-solving tool (to disturbingly shortchange a potentially sentient being) ever to be created. The danger is in someone redesigning and recreating the creature in a dangerous way, not in finding ways to break the first one. In this case, assuming you do accomplish AGI (and I'm sure we are all rooting for you), obscurity is the best choice for now. I only regret that I won't be able to look over your design personally, for the simple joy of seeing how it is being done.
-Greg Shipley


----- Original Message ----- From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Sunday, December 11, 2005 9:46 PM
Subject: Re: [agi] Forums and Commercial Open Source Project


I can't speak for others, but my goal is to create AGI as a tool, not as
something "sentient".  I believe creating a sentient AGI is possible, but that
possibility does not appeal to me.  Building AGI as a passive tool is much
more important, IMO.

My main interest is just the opposite of yours...

The problem with this (and with Ben's argument for withholding his Novamente design)
is that you can't tell how "safe" a given piece of information is.  A good textbook on
AI or machine learning may enable some people to build AGIs, and so may some
journal papers.  The state of AI research is always advancing, and we can't
just arbitrarily draw a line that says "this is making AGI too easy".  It's
just very improbable that the entire AI research community would agree on
such a standard.  In reality the information will become more and more
available over time.  It seems that public ownership of AGI (under some
regulation) is inevitable.

I am not proposing any kind of regulation or making any general
statement about what others should do with their AGI designs.

I am merely stating: The  Novamente design, even incompletely fleshed
out as it is, seems to me sufficiently likely to lead to a really
powerful AGI that the idea of releasing it publicly (to a public
including individuals possessing what I believe are dangerous goals
and beliefs) scares me.

I will publish various things related to the design; e.g., I have a
book on philosophy of mind and another on probabilistic inference
that will be sent out for publication before too long.  But my own
judgment is that, once I finally create a superhuman AGI in 2012 or
whenever, someone else might be able to go back and read my 2006
Novamente book, use it to figure out how to replicate my
achievement, and create an evil opposition AGI.  I would rather not let
that happen.  I'd rather create the first AGI and make it a good one
without having to worry about evil opponent AGIs.

I understand that any such decision is somewhat arbitrary, but in life
and work we must make all sorts of somewhat arbitrary decisions.  This
is just one of them.

Nor is this an irrevocable decision -- I could change my mind.   These
are issues I'm constantly thinking over and discussing with my
collaborators.

-- Ben G
