Randall,
Your comments below are unfounded, and all the worse for being so
poisonously phrased. If you read the conversation from the beginning
you will discover why: Matt initially suggested the idea that an AGI
might be asked to develop a virus of maximum potential, for purposes of
testing a security system, and that it might respond by inserting an
entire AGI system into the virus, since this would give the virus its
maximum potential. The thrust of my reply was that this idea of
Matt's made no sense, since the AGI could not be a "general"
intelligence if it could not see the full implications of the request.
Please feel free to accuse me of gross breaches of rhetorical etiquette,
but if you do, please make sure first that I really have committed the
crimes. ;-)
Richard Loosemore
Randall Randall wrote:
I pulled in some extra context from earlier messages to
illustrate an interesting event here.
On Jan 27, 2008, at 12:24 PM, Richard Loosemore wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
Suppose you ask the AGI to examine some operating system or server
software to look for security flaws. Is it supposed to guess whether
you want to fix the flaws or write a virus?
If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?
If I hired you as a security analyst to find flaws in a piece of
software, and I didn't tell you what I was going to do with the
information, how would you know?
This is so silly it is actually getting quite amusing... :-)
So, you are positing a situation in which I am an AGI, and you want to
hire me as a security analyst, and you say to me: "Please build the
most potent virus in the world (one with a complete AGI inside it),
because I need it for security purposes, but I am not going to tell
you what I will do with the thing you build."
And we are assuming that I am an AGI with at least two neurons to rub
together?
How would I know what you were going to do with the information?
I would say "Sorry, pal, but you must think I was born yesterday. I
am not building such a virus for you or anyone else, because the
dangers of building it, even as a test, are so enormous that it would
be ridiculous. And even if I did think it was a valid request, I
wouldn't do such a thing for *anyone* who said 'I cannot tell you what
I will do with the thing that you build'!"
In the context of the actual quotes, above, the following statement
is priceless.
It seems to me that you have completely lost track of the original
issue in this conversation, so your other comments are meaningless
with respect to that original context.
Let's look at this again:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
Matt Mahoney wrote:
Suppose you ask the AGI to examine some operating system or server
software to look for security flaws. Is it supposed to guess whether
you want to fix the flaws or write a virus?
If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?
Notice that in Matt's "Is it supposed to guess whether you want to fix
the flaws or write a virus?" there's no suggestion that you're asking
the AGI to write a virus, only that you're asking it for security
information. Richard then quietly changes "to" to "it", thereby
changing the meaning of the sentence to the form he prefers to argue
against (however ungrammatical), and then he manages to finish up by
accusing *Matt* of forgetting what Matt originally said on the matter.
--
Randall Randall <[EMAIL PROTECTED]>
"Someone needs to invent a Bayesball bat that exists solely for
smacking people [...] upside the head." -- Psy-Kosh on reddit.com
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&