--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Maybe you can program it with a moral code, so it won't write
> > malicious code.  But the two sides of the security problem require
> > almost identical skills.  Suppose you ask the AGI to examine some
> > operating system or server software to look for security flaws.  Is
> > it supposed to guess whether you want to fix the flaws or write a
> > virus?
> 
> If it has a moral code (it does) then why on earth would it have to
> guess whether you want it to fix the flaws or write a virus?  By
> asking that question you are implicitly assuming that this "AGI" is
> not an AGI at all, but something so incredibly stupid that it cannot
> tell the difference between these two... so if you make that
> assumption we have nothing to worry about, because it would be too
> stupid to be a "general" intelligence and therefore not even
> potentially dangerous.

If I hired you as a security analyst to find flaws in a piece of software, and
I didn't tell you what I was going to do with the information, how would you
know?

> > Suppose you ask it to write a virus for the legitimate purpose of
> > testing the security of your system.  It downloads copies of popular
> > software from the internet and analyzes it for vulnerabilities,
> > finding several.  As instructed, it writes a virus, a modified copy
> > of itself running on the infected system.  Due to a bug, it
> > continues spreading.  Oops...  Hard takeoff.
> 
> Again, you implicitly assume that this "AGI" is so stupid that it makes 
> a copy of itself and inserts it into a virus when asked to make an 
> experimental virus.  Any system that stupid does not have a general 
> intelligence, and will never cause a hard takeoff because an absolute 
> prerequisite for hard takeoff is that the system have the wits to know 
> about these kinds of no-brainer [:-)] questions.

Mistakes happen. http://en.wikipedia.org/wiki/Morris_worm

If you perform 1000 security tests and 999 of them shut down when they are
supposed to, then you have still failed.
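
To put rough numbers on that (a hypothetical illustration with an
assumed failure rate, not data from any real system): suppose each test
escapes containment independently with probability 0.001.

  # Back-of-the-envelope sketch in Python; p and n are assumptions.
  p = 0.001             # assumed chance one test fails to shut down
  n = 1000              # number of security tests
  print((1 - p) ** n)   # ~0.368: chance that ALL 1000 tests behave

Under those assumptions, at least one escape is more likely than not.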

Software correctness is undecidable -- the halting problem reduces to it. 
Computer security isn't going to be magically solved by AGI.  The problem will
actually get worse, because complex systems are harder to get right.
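
For the record, the reduction is short enough to sketch.  The checker
is_correct() below is hypothetical -- Turing's theorem says it cannot
exist, which is exactly the point:

  # Python sketch.  Suppose, hypothetically, we had a perfect checker
  # is_correct(prog, spec) returning True iff prog meets its spec.
  def halts(P, x, is_correct):
      def candidate():
          P(x)        # runs forever iff P does not halt on input x
          return 0
      # candidate satisfies "always returns 0" exactly when P(x)
      # halts, so the checker would decide the halting problem.
      return is_correct(candidate, spec="always returns 0")

Since no program can decide halting, no program can perfectly verify
correctness either.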


-- Matt Mahoney, [EMAIL PROTECTED]
