Re: [agi] This is not a good turn for the discussion [WAS Re: Singularity Outcomes ...]

2008-01-28 Thread Randall Randall


On Jan 28, 2008, at 12:03 PM, Richard Loosemore wrote:
Your comments below are unfounded, and all the worse for being so  
poisonously phrased.  If you read the conversation from the  
beginning you will discover why:  Matt initially suggested the idea  
that an AGI might be asked to develop a virus of maximum potential,  
for purposes of testing a security system, and that it might  
respond by inserting an entire AGI system into the virus, since  
this would give the virus its maximum potential.  The thrust of my  
reply was that this entire idea of Matt's made no sense, since the  
AGI could not be a "general" intelligence if it could not see the  
full implications of the request.


Please feel free to accuse me of gross breaches of rhetorical  
etiquette, but if you do, please make sure first that I really have  
committed the crimes.  ;-)


I notice everyone else has (probably wisely) ignored
my response anyway.

I thought I'd done well at removing the most "poisonously
phrased" parts of my email before sending, but I agree I
should have waited a few hours and revisited it before
sending, even so.  In any case, changes in meaning due to
sloppy copying of others' arguments are just SOP for most
internet arguments these days.  :(

To bring this slightly back to AGI:

The thrust of my reply was that this entire idea of Matt's made no  
sense, since the AGI could not be a "general" intelligence if it  
could not see the full implications of the request.


I'm sure you know that most humans fail to see the full
implications of *most* things.  Is it your opinion, then,
that a human is not a general intelligence?

--
Randall Randall <[EMAIL PROTECTED]>
"If I can do it in Alabama, then I'm fairly certain you
 can get away with it anywhere." -- Dresden Codak





Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Randall Randall


I pulled in some extra context from earlier messages to
illustrate an interesting event here.

On Jan 27, 2008, at 12:24 PM, Richard Loosemore wrote:

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:

Suppose you ask the AGI to examine some operating system or server
software to look for security flaws.  Is it supposed to guess whether
you want to fix the flaws or write a virus?


If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?


If I hired you as a security analyst to find flaws in a piece of
software, and I didn't tell you what I was going to do with the
information, how would you know?


This is so silly it is actually getting quite amusing... :-)

So, you are positing a situation in which I am an AGI, and you want  
to hire me as a security analyst, and you say to me:  "Please build  
the most potent virus in the world (one with a complete AGI inside  
it), because I need it for security purposes, but I am not going to  
tell you what I will do with the thing you build."


And we are assuming that I am an AGI with at least two neurons to  
rub together?


How would I know what you were going to do with the information?

I would say "Sorry, pal, but you must think I was born yesterday.   
I am not building such a virus for you or anyone else, because the  
dangers of building it, even as a test, are so enormous that it  
would be ridiculous.  And even if I did think it was a valid  
request, I wouldn't do such a thing for *anyone* who said 'I cannot  
tell you what I will do with the thing that you build'!"


In the context of the actual quotes, above, the following statement
is priceless.

It seems to me that you have completely lost track of the original  
issue in this conversation, so your other comments are meaningless  
with respect to that original context.


Let's look at this again:


--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:

Suppose you ask the AGI to examine some operating system or server
software to look for security flaws.  Is it supposed to guess whether
you want to fix the flaws or write a virus?


If it has a moral code (it does) then why on earth would it have to
guess whether you want it fix the flaws or fix the virus?


Notice that in Matt's "Is it supposed to guess whether you want to
fix the flaws or write a virus?" there's no suggestion that you're
asking the AGI to write a virus, only that you're asking it for
security information.  Richard then quietly changes "to" to "it",
thereby changing the meaning of the sentence to the form he prefers
to argue against (however ungrammatical), and then he manages to
finish up by accusing *Matt* of forgetting what Matt originally said
on the matter.

--
Randall Randall <[EMAIL PROTECTED]>
"Someone needs to invent a Bayesball bat that exists solely for
 smacking people [...] upside the head." -- Psy-Kosh on reddit.com




Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Randall Randall

On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
The problem with the scenarios that people imagine (many of which
are Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions.  One example is the idea that there
will be a situation in which there are many superintelligent AGIs in
the world, all competing with each other for power in a souped-up
version of today's arms race(s).  This is extraordinarily unlikely:
the speed of development would be such that one would have an
extremely large time advantage (head start) on the others, and during
that time it would merge the others with itself, to ensure that there
was no destructive competition.  Whichever way you try to think about
this situation, the same conclusion seems to emerge.
As a counterexample, I offer evolution.  There is good evidence that
every living thing evolved from a single organism: all DNA is twisted
in the same direction.


I don't understand how this relates to the above in any way, never  
mind how it amounts to a counterexample.


If you're actually arguing against the possibility of more than
one individual superintelligent AGI, then you need to either
explain how such an individual could maintain coherence over
indefinitely long delays (speed of light) or just say up front
that you expect magic physics.

If you're arguing that even though individuals will emerge,
there will be no evolution, then Matt's counterexample applies
directly.
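
(For scale on the light-speed point, a quick back-of-the-envelope on
one-way signal lag is enough.  The distances below are approximate
round numbers, and the sketch only illustrates the latency argument,
not any particular AGI design.)

# Rough one-way signal delay at lightspeed over various distances.
# Distances are approximate, purely for illustration.
C = 299_792_458.0  # speed of light, m/s

distances_m = {
    "across the Earth (antipodal)": 2.0e7,
    "Earth to Moon":                3.84e8,
    "Earth to Mars (closest)":      5.5e10,
    "Earth to Mars (farthest)":     4.0e11,
}

for name, d in distances_m.items():
    print(f"{name}: {d / C:8.1f} s one-way")

# Minutes of lag each way between planets: anything "single" spread
# over that distance is really several individuals syncing over a
# slow link.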

--
Randall Randall <[EMAIL PROTECTED]>
"If we have matter duplicators, will each of us be a sovereign
 and possess a hydrogen bomb?" -- Jerry Pournelle




Re: [agi] The AGI Test

2007-03-14 Thread Randall Randall


On Mar 14, 2007, at 4:34 AM, Kevin Peterson wrote:

I don't know, though. It might be interesting to reformulate things in
terms of interacting with a virtual world over the same channels
humans do. Text chatting is just too narrow a channel to tell much.
When an AI can reach max level in comparable playing time to a human
and lead a guild in a MMORPG, hooked up to the computer running the
game client only via a video cable, mouse and keyboard, I'll be very
impressed.


Leading a guild requires a lot of chat -- so much so, in fact,
as to be effectively a Turing test.  However, the "level in
comparable playing time to a human" is not at all hard, and at
least in some systems, doesn't require much intelligence at all.

http://www.wowglider.com/ has such a bot, which plays World of
Warcraft just as a person does with keyboard and mouse input.
This was so effective that Blizzard is suing them now, having
been unable to defeat the bot programmatically, since it just
plays WoW the same way that a human would.

(It might be that wowglider is an elaborate scam and doesn't
actually work as advertised, but the fact that the makers of
WoW are suing suggests it does work.)
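
(For anyone wondering what "plays WoW the same way that a human
would" amounts to mechanically, it's roughly the loop below.  This is
a toy sketch using the pyautogui library, with a made-up screenshot
file name; wowglider's actual internals aren't public, so treat this
as a guess at the general technique, not a description of their
product.)

# Toy "read the screen, send keyboard/mouse input" bot loop.
# "enemy_nameplate.png" is a hypothetical reference image you would
# have to capture yourself.
import time
import pyautogui

def find_target():
    """Return screen coordinates of a known enemy nameplate, or None."""
    try:
        return pyautogui.locateCenterOnScreen("enemy_nameplate.png")
    except pyautogui.ImageNotFoundException:
        return None

while True:
    target = find_target()
    if target is not None:
        pyautogui.click(target.x, target.y)  # select the mob with the mouse
        pyautogui.press("1")                 # hit an attack keybind
    else:
        pyautogui.press("w")                 # wander forward looking for mobs
    time.sleep(1.0)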


--
Randall Randall <[EMAIL PROTECTED]>
"Is it asking too much to be given time [...]
 I'll watch the stars go out." -- Dubstar, Stars





Re: [agi] SOTA

2007-01-11 Thread Randall Randall

On Jan 11, 2007, at 1:29 PM, Philip Goetz wrote:

On 06/01/07, Gary Miller <[EMAIL PROTECTED]> wrote:
This is the way it's going to go in my opinion.  In a house or office
the robots would really be dumb actuators - puppets - being
controlled from a central AI which integrates multiple systems
together.  That way you can keep the cost and maintenance
requirements of the robot to a bare minimum.  Such a system also
future-proofs the robot in a rapidly changing software world, and
allows intelligence to be provided as an internet based service.


If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning?  The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.


http://www.google.com/search?q=programmable+thermostat

They're extremely common; there's an entire aisle of such things at my
local Home Depot.
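
(The entire "smarts" Philip is asking for fits in a few lines of
logic.  The times and setpoints below are invented for illustration,
in degrees Fahrenheit.)

# Minimal setback schedule: the whole logic of a programmable
# thermostat.  Numbers are made up for illustration.
SCHEDULE = [   # (hour of day, target temperature in F)
    (6, 68),   # 6:00 am  - warm up for the morning
    (9, 62),   # 9:00 am  - house empty, back off
    (17, 68),  # 5:00 pm  - warm for the evening
    (22, 60),  # 10:00 pm - cool for sleeping
]

def setpoint(hour):
    """Return the target temperature for a given hour of the day (0-23)."""
    target = SCHEDULE[-1][1]  # before 6 am we're still on last night's setting
    for start_hour, temp in SCHEDULE:
        if hour >= start_hour:
            target = temp
    return target

assert setpoint(7) == 68 and setpoint(14) == 62 and setpoint(2) == 60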

--
Randall Randall <[EMAIL PROTECTED]>
"If you are trying to produce a commercial product in
 a timely and cost efficient way, it is not good to have
 somebody's PhD research on your critical path." -- Chip Morningstar




Re: [agi] Heuristics and biases in uncertain inference systems

2006-06-07 Thread Randall Randall


On Jun 7, 2006, at 5:52 PM, Mike Ross wrote:

I think it's actually correct to say that (b) is more likely than (a).
Humans don't get this "wrong" because they are bad at reasoning.  They
get this "wrong" because of the ambiguities of natural language.
Unlike mathematical language, human speech has many statements which
are implied.  I think it's fair to say that in most conversational
contexts, when (b) is stated, it creates an implied second clause to
option (a):

a. Linda is a bank teller and NOT active in the feminist movement.

In this case, it is correct to say that (b) is more likely than (a).

When annoyingly logical people insist that this is wrong, they are
actually stating that the implied second argument is:

a. Linda is a bank teller and EITHER active OR not active in the
feminist movement.

This may be true on logic tests, but it ain't the case in the real
world.  Just something to keep in mind when talking about human
reasoning...


When Richard Loosemore made essentially this same argument on SL4,
Eliezer said that there were lots of experiments designed to show
whether this was the reason for people getting this "wrong", and
that they had shown it wasn't.

That seems odd to me, too. :)
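
For anyone who doesn't have the Linda problem memorized: (a) is
"Linda is a bank teller" and (b) is "Linda is a bank teller and is
active in the feminist movement."  Mike's point is easy to make
concrete with invented numbers (the probabilities below are purely
illustrative, not from any experiment):

# Invented, purely illustrative probabilities.
p_teller = 0.05                # P(Linda is a bank teller)
p_feminist_given_teller = 0.9  # P(active feminist | teller), high given her description

p_b = p_teller * p_feminist_given_teller                  # (b): teller AND feminist
p_a_literal = p_teller                                    # (a) read literally: teller
p_a_pragmatic = p_teller * (1 - p_feminist_given_teller)  # (a) read as: teller and NOT feminist

assert p_b <= p_a_literal   # a conjunction can never beat the literal (a)
assert p_b > p_a_pragmatic  # but it can easily beat the "and NOT" reading of (a)
print(p_a_literal, p_a_pragmatic, p_b)  # 0.05, 0.005 (approx), 0.045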

--
Randall Randall <[EMAIL PROTECTED]>
"This is a fascinating question, right up there with whether rocks
fall because of gravity or being dropped, and whether 3+5=5+3
because addition is commutative or because they both equal 8."
  - Scott Aaronson

