Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-31 Thread Mark Waser

Mark, my point is that while in the past evolution did the choosing,
now it's *we* who decide,


But the *we* who is deciding was formed by evolution.  Why do you do 
*anything*?  I've heard that there are four basic goals that drive every 
decision:  safety, feeling good, looking good, and being right.  Do you make 
any decisions that aren't decided by one or more of those four?



Another question is that we might like to
change ourselves, to get rid of most of this baggage, but it doesn't
follow that in the limit we will become pure survival maximizers.


Actually, what must follow is that, in the limit, the survival and 
reproduction maximizers will predominate.



By the way, if we want to survive, but we change ourselves to this
end, *what* is it that we want to keep alive?


Exactly!  What are our goals?  I don't think that you're going to get (or 
even want) anything close to a consensus about specific goals -- so 
what you want is the maximization of individual goals (freedom) without 
going contrary to the survival of society (the destruction of which would 
lead to reduced freedom).





Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Vladimir Nesov
On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Ethics only becomes snarled when one is unwilling to decide/declare what the
 goal of life is.

 Extrapolated Volition comes down to a homunculus depending upon the
 definition of wiser or saner.

 Evolution has decided what the goal of life is . . . . but most are
 unwilling to accept it (in part because most do not see it as anything other
 than nature, red in tooth and claw).

 The goal in life is simply continuation and continuity.  Evolution goes
 for continuation of species -- which has an immediate subgoal of
 continuation of individuals (and sex and protection of offspring).
 Continuation of individuals is best served by the construction of and
 continuation of society.

 If we're smart, we should decide that the goal of ethics is the continuation
 of society with an immediate subgoal of the will of individuals (for a large
 variety of reasons -- but the most obvious and easily justified is to
 prevent the defection of said individuals).

 If an AGI is considered a willed individual and a member of society and has
 the same ethics, life will be much easier and there will be a lot less
 chance of the Eliezer-scenario.  There is no enslavement of Jupiter-brains
 and no elimination/suppression of lesser individuals in favor of greater
 individuals -- just a realization that society must promote individuals and
 individuals must promote society.

 Oh, and contrary to popular belief -- ethics has absolutely nothing to do
 with pleasure or pain and *any* ethics based on such are doomed to failure.
 Pleasure is evolution's reward to us when we do something that promotes
 evolution's goals.  Pain is evolution's punishment when we do something
 (or have something done) that is contrary to survival, etc.  And while both
 can be subverted so that they don't properly indicate guidance -- in
 reality, that is all that they are -- guideposts towards other goals.
 Pleasure is a BAD goal because it can interfere with other goals.  Avoidance
 of pain (or infliction of pain) is only a good goal in that it furthers
 other goals.

Mark,

Nature doesn't even have survival as its 'goal'; what matters is only
survival in the past, not in the future, yet you start to describe
strategies for future survival. Yes, survival in the future is one
likely accidental property of structures that survived in the past,
but so are other properties of specific living organisms. Nature is
stupid, so design choices left to it are biased towards keeping much
of the historical baggage and resorting to unsystematic hacks, and as
a result its products are not simply optimal survivors.

When we are talking about choice of conditions for humans to live in
(rules of society, morality), we are trying to understand what *we*
would like to choose. We are doing it for ourselves. Better
understanding of *human* nature can help us to estimate how we will
appreciate various conditions. And humans are very complicated things,
with a large burden of reinforcers that push us in different
directions based on idiosyncratic criteria. These reinforcers used to
line up to support survival in the past, but so what?

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Mark Waser

Nature doesn't even have survival as its 'goal', what matters is only
survival in the past, not in the future, yet you start to describe
strategies for future survival.


'Goal' was in quotes for a reason.  In the future, the same tautological 
forces will apply.  Evolution will favor those things that are adapted to 
survive/thrive.



Nature is
stupid, so design choices left to it are biased towards keeping much
of the historical baggage and resorting to unsystematic hacks, and as
a result its products are not simply optimal survivors.


Yes, everything is co-evolving so quickly that evolution cannot produce 
optimal solutions.  But are you stupid enough to try to fight 
nature and the laws of probability and physics?  We can improve on nature --  
but you're never going to successfully go in a totally opposite direction.



When we are talking about choice of conditions for humans to live in
(rules of society, morality), we are trying to understand what *we*
would like to choose.


What we like (including what we like to choose) was formed by evolution. 
Some of what we like has been overtaken by events and is no longer 
pro-survival, but *everything* that we like has served a pro-survival purpose 
in the past (survival meaning survival of offspring and the species -- so 
altruism *IS* an evolutionarily-created like as well).



Better
understanding of *human* nature can help us to estimate how we will
appreciate various conditions.


Not if we can program our own appreciations.  And what do we want our AGI to 
appreciate?



humans are very complicated things,
with a large burden of reinforcers that push us in different
directions based on idiosyncratic criteria.


Very true.  So don't you want a simpler, clearer, non-contradictory set of 
reinforcers for your AGI (one that will lead to both it and you being happy)?



These reinforcers used to
line up to support survival in the past, but so what?


So . . . I'd like to create reinforcers to support my survival and freedom 
and that of the descendants of the human race.  Don't you?




Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread aiguy
Richard Hollerith said:

 If I am found dead with a bag over my head attached to helium or 
 natural gas, please investigate the possibility that it was a 
 murder made to look like a suicide. 
 
 -- 
 Richard Hollerith 
 http://dl4.jottit.com 
 

Same here, Richard.   Nitrous Oxide would definitely be my first choice.  Not 
that I'm planning anything, mind you -- quite the contrary.
But if the Men in Black are listening and have me in their sights, Nitrous is 
the way to go.  Helium or Natural Gas are bad form.
Who wants to wake up in the afterlife talking like a chipmunk or with a bad 
headache?

Gary Miller


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Joshua Fox
When transhumanists talk about indefinite life extension, they often take
care to say it's optional, to forestall one common objection.

Yet I feel that most suicides we see should have been prevented -- that the
person should have been taken into custody and treated if possible, even
against their will.

How to reconcile a strong belief in free choice with the belief that suicide
is most often the result of insanity, not the victim's true free will?

Eliezer's Extrapolated Volition suggests that we take into account what
the suicidal person would have wanted if they were wiser or saner. That is
one solution, though it does not quite satisfy me.

This is a basic ethical question, which takes on more relevance in the
context of transhumanism, life extension, and F/AGI theory.

Joshua


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Mark Waser
Ethics only becomes snarled when one is unwilling to decide/declare what the 
goal of life is.

Extrapolated Volition comes down to a homunculus depending upon the definition 
of wiser or saner.

Evolution has decided what the goal of life is . . . . but most are unwilling 
to accept it (in part because most do not see it as anything other than 
nature, red in tooth and claw).

The goal in life is simply continuation and continuity.  Evolution goes for 
continuation of species -- which has an immediate subgoal of continuation of 
individuals (and sex and protection of offspring).  Continuation of individuals 
is best served by the construction of and continuation of society.

If we're smart, we should decide that the goal of ethics is the continuation of 
society with an immediate subgoal of the will of individuals (for a large 
variety of reasons -- but the most obvious and easily justified is to prevent 
the defection of said individuals).

If an AGI is considered a willed individual and a member of society and has the 
same ethics, life will be much easier and there will be a lot less chance of 
the Eliezer-scenario.  There is no enslavement of Jupiter-brains and no 
elimination/suppression of lesser individuals in favor of greater 
individuals -- just a realization that society must promote individuals and 
individuals must promote society.

Oh, and contrary to popular belief -- ethics has absolutely nothing to do with 
pleasure or pain and *any* ethics based on such are doomed to failure.  
Pleasure is evolution's reward to us when we do something that promotes 
evolution's goals.  Pain is evolution's punishment when we do something (or 
have something done) that is contrary to survival, etc.  And while both can be 
subverted so that they don't properly indicate guidance -- in reality, that is 
all that they are -- guideposts towards other goals.  Pleasure is a BAD goal 
because it can interfere with other goals.  Avoidance of pain (or infliction of 
pain) is only a good goal in that it furthers other goals.

Suicide is contrary to continuation.  Euthanasia is recognition that, in some 
cases, there is no meaningful continuation.

Life extension should be optional at least as long as there are resource 
constraints.


Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Consider the following subset of possible requirements: the program is 
  correct
  if and only if it halts.

 It's a perfectly valid requirement, and I can write all sorts of
 software that satisfies it. I can't take a piece of software that I
 didn't write and tell you if it satisfies it, but I can write a piece of
 software that satisfies it, that also does all sorts of useful stuff.


This would seem to imply that you've solved the halting problem.



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
  When your computer can write and debug
  software faster and more accurately than you can, then you should worry.

 A tool that could generate computer code from formal specifications
 would be a wonderful thing, but not an autonomous intelligence.

 A program that creates its own questions based on its own goals, or
 creates its own program specifications based on its own goals, is
 a quite different thing from a tool.


Having written a lot of computer programs, as I suspect many on this
list have, I suspect that fully automatic programming is going to
require the same kind of commonsense reasoning as humans have.  When
I'm writing a program I may draw upon diverse ideas derived from what
might be called common knowledge - something which computers
presently don't have.  The alternative is genetic programming, which is
more of a sampled search through the space of all programs, but I
rather doubt that this is what's going on in my mind for the most
part.



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 11:22 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
   Consider the following subset of possible requirements: the program is 
   correct
   if and only if it halts.
 
  It's a perfectly valid requirement, and I can write all sorts of
  software that satisfies it. I can't take a piece of software that I
  didn't write and tell you if it satisfies it, but I can write a piece of
  software that satisfies it, that also does all sorts of useful stuff.


 This would seem to imply that you've solved the halting problem.


No, it doesn't. The halting problem is only problematic when we are given an
arbitrary program from outside. On the other hand, there are very
powerful languages that are decidable and also do useful stuff. As one
trivial example, I can take even an arbitrary external program (say, a
Turing machine that I can't check in the general case), place it on a
dedicated tape in a UTM, and add a control for termination, so that if it
doesn't terminate in 10^6 steps, it will be terminated by the UTM that
runs it. The resulting machine will be able to do all the things the original
machine could in 10^6 steps, and will also be guaranteed to terminate.
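
A minimal sketch of that fuel-bounded construction in Python, for
concreteness (the step-function protocol, the names, and the bound are
illustrative assumptions, not anything specified in the thread):

MAX_STEPS = 10**6

def run_bounded(step, state, max_steps=MAX_STEPS):
    """Apply `step` repeatedly, at most `max_steps` times.

    `step(state)` must return ('halt', result) or ('continue', new_state).
    The loop counter strictly decreases, so this wrapper terminates no
    matter what the wrapped computation does.
    """
    for _ in range(max_steps):
        kind, value = step(state)
        if kind == 'halt':
            return ('halted', value)
        state = value
    return ('timeout', state)      # forcibly stopped by the wrapper

# A computation that would never halt on its own...
def looping(s):
    return ('continue', s + 1)

print(run_bounded(looping, 0, max_steps=1000))    # ('timeout', 1000)

# ...while one that halts within the budget behaves exactly as before.
def counting(s):
    return ('halt', s) if s >= 10 else ('continue', s + 1)

print(run_bounded(counting, 0))                   # ('halted', 10)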

You can try checking out for example this paper (link from LtU
discussion), which presents a rather powerful language for describing
terminating programs:
http://lambda-the-ultimate.org/node/2003

Also see http://en.wikipedia.org/wiki/Total_functional_programming

It's not very helpful in itself, but using a sufficiently powerful type
system it should also be possible to construct programs that have the
required computational complexity and other properties.

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 2:08 PM, Bob Mottram [EMAIL PROTECTED] wrote:
 On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
  You can try checking out for example this paper (link from LtU
  discussion), which presents a rather powerful language for describing
  terminating programs:
  http://lambda-the-ultimate.org/node/2003
  Also see http://en.wikipedia.org/wiki/Total_functional_programming

 This seems to address the halting problem by ignoring it (the same
 approach researchers often take to difficult problems in computer
 vision).

Well, what's wrong with these solutions? You don't really need to
write bad programs, so the problem of checking whether a program is bad is moot
if you have a method for writing programs that are guaranteed to be
good.

 For practical purposes timeouts or watchdogs are ok, but
 they're just engineering workarounds rather than solutions.  In
 practice biological intelligence also uses the same hacks, and I think
 Turing himself pointed this out.

A timeout is a trivial answer to a theoretical question, whereas type
systems allow writing normal code, without 'hacks', that also has these
properties. But it's not practically feasible to use them currently:
you'll spend too much time proving that the program is correct and too
little time actually writing it. Maybe in time the tools will catch up...

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I can take even external arbitrary program (say, a
 Turing machine that I can't check in general case), place it on a
 dedicated tape in UTM, and add control for termination, so that if it
 doesn't terminate in 10^6 tacts, it will be terminated by UTM that
 runs it.

Yes, you can just add a timeout.


 You can try checking out for example this paper (link from LtU
 discussion), which presents a rather powerful language for describing
 terminating programs:
 http://lambda-the-ultimate.org/node/2003
 Also see http://en.wikipedia.org/wiki/Total_functional_programming

This seems to address the halting problem by ignoring it (the same
approach researchers often take to difficult problems in computer
vision).  For practical purposes timeouts or watchdogs are ok, but
they're just engineering workarounds rather than solutions.  In
practice biological intelligence also uses the same hacks, and I think
Turing himself pointed this out.



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Jan 28, 2008 4:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
   correct
if and only if it halts.
   
  
   It's a perfectly valid requirement, and I can write all sorts of
   software that satisfies it. I can't take a piece of software that I
   didn't write and tell you if it satisfies it, but I can write a piece of
   software that satisfies it, that also does all sorts of useful stuff.
 
  That is not the hard problem.  Going from a formal specification (actually
 a
  program) to code is just a matter of compilation.  But verifying that the
  result is correct is undecidable.
 
 What do you mean by that? What does the word 'result' in your last sentence
 refer to? Do you mean the result of compilation? There are verified
 stacks, from the ground up. Given enough effort, it should be possible
 to be arbitrarily sure of their reliability.
 
 And anyway, what is undecidable here?

It is undecidable whether a program satisfies the requirements of a formal
specification, which is the same as saying that it is undecidable whether two
programs are equivalent.  The halting problem reduces to it.


  Maybe AGI will solve some of these problems that seem to be beyond the
  capabilities of humans.  But again it is a double edged sword.  There is a
  disturbing trend in attacks.  Attackers used to be motivated by ego, so
 you
  had viruses that played jokes or wiped your files.  Now they are motivated
 by
  greed, so attacks remain hidden while stealing personal information and
  computing resources.  Acquiring resources is the fitness function for
  competing, recursively self improving AGI, so it is sure to play a role.
 
 Now THAT you can't oppose, competition for resources by deception that
 relies on human gullibility. But it's a completely different problem,
 it's not about computer security at all. It's about human psychology,
 and one can't do anything about it, as long as they remain human. It
 probably can be kind of solved by placing generally intelligent
 'personal firewalls' on all input that a human receives.

The problem is not human gullibility but human cognitive limits in dealing
with computer complexity.  Twenty years ago ID theft, phishing, botnets, and
spyware were barely a problem.  This problem will only get worse as software
gets more complex.  What you are suggesting is to abdicate responsibility to
the software, pitting ever smarter security against ever smarter intruders. 
This only guarantees that when your computer is hacked, you will never know. 
But I fear this result is inevitable.

Here is an example of cognitive load.  Firefox will pop up a warning if you
visit a known phishing site, but this doesn't work every time.  Firefox also makes
such sites easier to detect: when you hover the mouse over a link, it
shows the true URL, because by default Firefox disables the Javascript code that
hackers add to write a fake URL to the status bar (which is enabled in IE and
can be enabled in Firefox).  This is not foolproof against creative attacks
such as registering www.paypaI.com (with a capital I), attacking routers or
DNS servers to redirect traffic to bogus sites, sniffing traffic to
legitimate sites, keyboard loggers capturing your passwords, or taking
advantage of users who use the same password on more than one site to reduce
their cognitive load (something you would never do, right?).
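
As a concrete illustration of how easily such lookalikes pass a human
glance, a toy check along these lines flags the capital-I substitution (the
confusables table and the known-good list are invented for the example, not
a real anti-phishing database):

CONFUSABLES = {'I': 'l', '1': 'l', '0': 'o'}   # characters that render alike
KNOWN_GOOD = {'paypal.com', 'mozilla.org'}

def skeleton(domain):
    """Map visually confusable characters to a canonical form."""
    for fake, real in CONFUSABLES.items():
        domain = domain.replace(fake, real)
    return domain.lower()

def looks_like_spoof(domain):
    """True if the domain is not known-good but renders like one."""
    return domain.lower() not in KNOWN_GOOD and skeleton(domain) in KNOWN_GOOD

print(looks_like_spoof('paypaI.com'))   # True  -- capital I, not lowercase l
print(looks_like_spoof('paypal.com'))   # False -- the genuine domain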

I use Firefox because I think it is more secure than IE, even though there
seems to be a new attack discovered about once a week. 
http://www.mozilla.org/projects/security/known-vulnerabilities.html
Do you really expect users to keep up with this, plus all their other
software?  No.  You will rely on AGI to do it for you, and when it fails you
will never know.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  It is undecidable whether a program satisfies the requirements of a formal
  specification, which is the same as saying that it is undecidable whether
 two
  programs are equivalent.  The halting problem reduces to it.
 
 Yes it is, if it's an arbitrary program. But you can construct a
 program that doesn't have this problem and also prove that it doesn't.
 You can check whether a program satisfies a specification if it's written in a
 special way (for example, it's annotated with types that guarantee the
 required conditions).

It is easy to construct programs that you can prove halt or don't halt.

There is no procedure to verify that a program is equivalent to a formal
specification (another program).  Suppose there was.  Then I can take any
program P and tell if it halts.  I construct a specification S from P by
replacing the halting states with states that transition to themselves in an
infinite loop.  I know that S does not halt.  I ask if S and P are equivalent.
 If they are, then P does not halt, otherwise it does.
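
For concreteness, that construction can be sketched on a toy
transition-table representation of a program; the programs_equivalent
oracle below is exactly the procedure the argument shows cannot exist, and
appears only as a hypothetical parameter:

LOOP = 'LOOP'   # a fresh state that transitions to itself forever

def never_halting_version(program):
    """Build S from P: redirect every transition into 'HALT' to a self-loop.

    A program here is a toy transition table: state -> (next_state, output).
    """
    s = {state: (LOOP if nxt == 'HALT' else nxt, out)
         for state, (nxt, out) in program.items()}
    s[LOOP] = (LOOP, None)          # S can never reach 'HALT'
    return s

def halts(program, programs_equivalent):
    """Decide halting for P *given* an equivalence oracle (hypothetical).

    S never halts, so P is equivalent to S exactly when P never halts.
    """
    return not programs_equivalent(program, never_halting_version(program))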

 If a computer cannot be hacked, it won't be.

If I turn off my computer, it can't be hacked.  Otherwise there is no
guarantee.  AGI is not a magic bullet.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Kaj Sotala
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Theoretically yes, but behind my comment was a deeper analysis (which I
 have posted before, I think) according to which it will actually be very
 difficult for a negative-outcome singularity to occur.

 I was really trying to make the point that a statement like "The
 singularity WILL end the human race" is completely ridiculous.  There is
 no WILL about it.

Richard,

I'd be curious to hear your opinion of Omohundro's "The Basic AI
Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently, a longer and more technical version of the same can be
found at 
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
, but I haven't read it yet). I found the arguments made relatively
convincing, and to me, they implied that we do indeed have to be
/very/ careful not to build an AI which might end up destroying
humanity. (I'd thought that was the case before, but reading the paper
only reinforced my view...)




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 It is easy to construct programs that you can prove halt or don't halt.

 There is no procedure to verify that a program is equivalent to a formal
 specification (another program).  Suppose there was.  Then I can take any
 program P and tell if it halts.  I construct a specification S from P by
 replacing the halting states with states that transition to themselves in an
 infinite loop.  I know that S does not halt.  I ask if S and P are equivalent.
  If they are, then P does not halt, otherwise it does.

Yes, that's what I have been saying all along.


  If computer cannot be hacked, it won't be.

 If I turn off my computer, it can't be hacked.  Otherwise there is no
 guarantee.  AGI is not a magic bullet.

Exactly. That's why it can't hack provably correct programs. This race
isn't symmetric. Let's stop there (unless you have something new to
say); everything has been repeated at least three times.

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 It is undecidable whether a program satisfies the requirements of a formal
 specification, which is the same as saying that it is undecidable whether two
 programs are equivalent.  The halting problem reduces to it.

Yes it is, if it's an arbitrary program. But you can construct a
program that doesn't have this problem and also prove that it doesn't.
You can check whether a program satisfies a specification if it's written in a
special way (for example, it's annotated with types that guarantee the
required conditions).


  Now THAT you can't oppose, competition for resources by deception that
  relies on human gullibility. But it's a completely different problem,
  it's not about computer security at all. It's about human psychology,
  and one can't do anything about it, as long as they remain human. It
  probably can be kind of solved by placing generally intelligent
  'personal firewalls' on all input that a human receives.

 The problem is not human gullibility but human cognitive limits in dealing
 with computer complexity.

It's the same thing, but gullibility is there too, and it is a problem.


 Twenty years ago ID theft, phishing, botnets, and
 spyware were barely a problem.  This problem will only get worse as software
 gets more complex.  What you are suggesting is to abdicate responsibility to
 the software, pitting ever smarter security against ever smarter intruders.
 This only guarantees that when your computer is hacked, you will never know.
 But I fear this result is inevitable.

If a computer cannot be hacked, it won't be.

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
 Exactly. That's why it can't hack provably correct programs.

Which is useless because you can't write provably correct programs that aren't
extremely simple.  *All* nontrivial properties of programs are undecidable.
http://en.wikipedia.org/wiki/Rice%27s_theorem

And good luck translating human goals expressed in ambiguous and incomplete
natural language into provably correct formal specifications.

 This race isn't symmetric.

Yes it is, because every security tool can be used by both sides.  Here is one
more example: http://www.virustotal.com/
This would be handy if I wanted to write a virus and make sure it isn't
detected.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Lukasz Stafiniak
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:
  Exactly. That's why it can't hack provably correct programs.

 Which is useless because you can't write provably correct programs that aren't
 extremely simple.  *All* nontrivial properties of programs are undecidable.
 http://en.wikipedia.org/wiki/Rice%27s_theorem

This is false. You can write nontrivial programs for which you can
prove nontrivial properties. Rice's theorem says that you cannot
prove nontrivial properties for programs written in Turing-complete
languages, given unbounded resources, and handed to you by an
adversary.
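
A toy illustration of that restriction (the mini-language below is invented
for the example): if the only loop construct is a repeat with a constant
count, every program terminates, and a nontrivial property -- an exact
upper bound on the number of primitive steps -- is computable, which Rice's
theorem does not forbid because the language is not Turing-complete.

# Programs in a tiny non-Turing-complete language:
#   ('op',)               one primitive step
#   ('seq', p1, p2, ...)  run the parts in order
#   ('repeat', n, body)   run body exactly n times, n a constant

def step_bound(program):
    """Statically compute an exact upper bound on primitive steps."""
    kind = program[0]
    if kind == 'op':
        return 1
    if kind == 'seq':
        return sum(step_bound(p) for p in program[1:])
    if kind == 'repeat':
        _, n, body = program
        return n * step_bound(body)
    raise ValueError('unknown construct: %r' % (kind,))

example = ('seq',
           ('op',),
           ('repeat', 10, ('seq', ('op',), ('op',))),
           ('repeat', 3, ('repeat', 4, ('op',))))
print(step_bound(example))   # 1 + 10*2 + 3*4 = 33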

 And good luck translating human goals expressed in ambiguous and incomplete
 natural language into provably correct formal specifications.

This is true.



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Vladimir Nesov
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Software correctness is undecidable -- the halting problem reduces to it.
 Computer security isn't going to be magically solved by AGI.  The problem will
 actually get worse, because complex systems are harder to get right.


Computer security can be solved by more robust rights management and
by avoiding bugs that lead to security vulnerabilities. AGI can help
with both.

Software correctness IS decidable: you just don't write general
algorithms, you write algorithms that satisfy your requirements.
The fundamental problem with software correctness is that you can forget
about many requirements or get requirements wrong. The practical problem
with software correctness is that it's very costly to actually check
correctness, and it gets worse as the requirements and the software in
question get more complex. These problems can be dealt with if we have
fast (= cheap) and competent general intelligence.
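
A minimal sketch of that "write to the requirement" discipline (the
requirement and the sorting routine here are invented for the example; an
executable check is of course far weaker than a proof, but it keeps the
requirement and the implementation side by side):

def requirement(xs, result):
    """The requirement: result is xs sorted (same elements, ascending order)."""
    return result == sorted(xs)

def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

data = [3, 1, 2, 1]
result = insertion_sort(data)
assert requirement(data, result), "implementation violates its requirement"
print(result)   # [1, 1, 2, 3]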

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  Software correctness is undecidable -- the halting problem reduces to it.
  Computer security isn't going to be magically solved by AGI.  The problem
 will
  actually get worse, because complex systems are harder to get right.
 
 
 Computer security can be solved by more robust rights management and
 by avoiding bugs that lead to security vulnerabilities. AGI can help
 with both.

Security tools are double-edged swords.  The knowledge required to protect
against attacks is the same as the knowledge required to launch attacks.  AGI
just continues the arms race.  We will have smarter intrusion detection
systems and smarter intruders.  If you look at the number of attacks per year, it
is clear we are going in the wrong direction.

 Software correctness IS decidable: you just don't write general
 algorithms, you write algorithms that satisfy your requirements.
 Fundamental problem with software correctness is that you can forget
 about many requirements or get requirements wrong. Practical problem
 with software correctness is that it's very costly to actually check
 correctness, and it gets worse as requirements and software in
 question get more complex. These problems can be dealt with if we have
 fast (=cheap) and competent general intelligence.

Consider the following subset of possible requirements: the program is correct
if and only if it halts.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Vladimir Nesov
On Jan 28, 2008 1:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:

  On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
  
   Software correctness is undecidable -- the halting problem reduces to it.
   Computer security isn't going to be magically solved by AGI.  The problem
  will
   actually get worse, because complex systems are harder to get right.
  
 
  Computer security can be solved by more robust rights management and
  by avoiding bugs that lead to security vulnerabilities. AGI can help
  with both.

 Security tools are double edged swords.  The knowledge required to protect
 against attacks is the same as the knowledge required to launch attacks.  AGI
 just continues the arms race.  We will have smarter intrusion detection
 systems and smarter intruders.  If you look at number of attacks per year, it
 is clear we are going in the wrong direction.

You don't NEED intrusion detection if intrusion cannot be done. If
your software doesn't read anything from outside, it's not possible to
attack it. If it reads outside data and correctly does nothing with it,
it's not possible to attack it. If it reads that data and correctly
processes it, it's not possible to attack it.

It's not currently practically feasible to write ordinary software
without bugs, but it's theoretically possible (more on that below). So
this race is not symmetrical: you can't attack perfect software even
if you are an omniscient oracle.

  Software correctness IS decidable: you just don't write general
  algorithms, you write algorithms that satisfy your requirements.
  Fundamental problem with software correctness is that you can forget
  about many requirements or get requirements wrong. Practical problem
  with software correctness is that it's very costly to actually check
  correctness, and it gets worse as requirements and software in
  question get more complex. These problems can be dealt with if we have
  fast (=cheap) and competent general intelligence.

 Consider the following subset of possible requirements: the program is correct
 if and only if it halts.


It's a perfectly valid requirement, and I can write all sorts of
software that satisfies it. I can't take a piece of software that I
didn't write and tell you if it satisfies it, but I can write a piece of
software that satisfies it, that also does all sorts of useful stuff.
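
One trivial illustration of that claim (the example is added here for
concreteness, not taken from the message): a program can do something useful
and still carry an obvious termination argument, such as a non-negative
measure that strictly decreases on every iteration.

def gcd(a, b):
    """Euclid's algorithm: useful, and obviously halting.

    Termination argument: b is a non-negative integer and strictly
    decreases on every pass (b -> a % b < b), so the loop cannot run forever.
    """
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 198))   # 18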

-- 
Vladimir Nesov    mailto:[EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread William Pearson
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Vladimir Nesov [EMAIL PROTECTED] wrote:

  On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
  
   Software correctness is undecidable -- the halting problem reduces to it.
   Computer security isn't going to be magically solved by AGI.  The problem
  will
   actually get worse, because complex systems are harder to get right.
  
 
  Computer security can be solved by more robust rights management and
  by avoiding bugs that lead to security vulnerabilities. AGI can help
  with both.

 Security tools are double edged swords.  The knowledge required to protect
 against attacks is the same as the knowledge required to launch attacks.  AGI
 just continues the arms race.  We will have smarter intrusion detection
 systems and smarter intruders.  If you look at number of attacks per year, it
 is clear we are going in the wrong direction.


What I am working on is a type of
programmable computer hardware (much like modern computers) that has
a sense of goal or purpose built in. It is designed so that it will
self-moderate the programs within it, giving control only to those
that fulfil that goal/purpose.
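
A rough toy sketch of what such self-moderation might look like in software
(the scoring rule, the credit scheme, and every name below are invented for
illustration and are not a description of the actual design):

import random

def goal_score(output):
    """The built-in purpose; here, trivially, 'produce large numbers'."""
    return output

programs = {                              # candidate programs competing for control
    'modest':    lambda: random.randint(0, 10),
    'ambitious': lambda: random.randint(5, 20),
}
credit = {name: 1.0 for name in programs}  # how much control each has earned

for _ in range(500):
    # Hand the next time slice to a program chosen in proportion to its credit.
    names = list(programs)
    chosen = random.choices(names, weights=[credit[n] for n in names])[0]
    output = programs[chosen]()
    # Programs whose output serves the goal accumulate more control.
    credit[chosen] += 0.01 * goal_score(output)

print(credit)   # 'ambitious' typically ends up with far more credit than 'modest'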

I personally believe that this is a necessary step for human-level
AGI, as self-control and allocation of resources to the problems
important to the system are important facets of an intelligence. But
I also suspect it will be used in a lot of less smart systems as well
before we crack AGI. As such I see the computer systems of the future
moving away from the mono-culture we have currently, as they will be
tailored to the users' goals, making cracking them less trivial and
repeatable and more done on a case-by-case basis.

Don't expect computer science to stand still. It is really still very young.

  Will Pearson



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Ben Goertzel
  Google
 already knows more than any human,

This is only true, of course, for specific interpretations of the word
"know" ... and NOT for the standard ones...

and can retrieve the information faster,
 but it can't launch a singularity.

Because, among other reasons, it is not an intelligence, but only
a very powerful tool for intelligences to use...

 When your computer can write and debug
 software faster and more accurately than you can, then you should worry.

A tool that could generate computer code from formal specifications
would be a wonderful thing, but not an autonomous intelligence.

A program that creates its own questions based on its own goals, or
creates its own program specifications based on its own goals, is
a quite different thing from a tool.

-- Ben G



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
No computer is going to start writing and debugging software faster and 
more accurately than we can UNLESS we design it to do so, and during the 
design process we will have ample opportunity to ensure that the machine 
will never be able to pose a danger of any kind.


Perhaps, but the problem is like trying to design a safe gun.


It is 100% NOT like trying to design a safe gun.  There is no 
resemblance whatsoever to that problem.



Maybe you can
program it with a moral code, so it won't write malicious code.  But the two
sides of the security problem require almost identical skills.  Suppose you
ask the AGI to examine some operating system or server software to look for
security flaws.  Is it supposed to guess whether you want to fix the flaws or
write a virus?


If it has a moral code (it does) then why on earth would it have to 
guess whether you want it to fix the flaws or write the virus?  By asking 
that question you are implicitly assuming that this AGI is not an AGI 
at all, but something so incredibly stupid that it cannot tell the 
difference between these two -- so if you make that assumption we have 
nothing to worry about, because it would be too stupid to be a general 
intelligence and therefore not even potentially dangerous.





Suppose you ask it to write a virus for the legitimate purpose of testing the
security of your system.  It downloads copies of popular software from the
internet and analyzes it for vulnerabilities, finding several.  As instructed,
it writes a virus, a modified copy of itself running on the infected system. 
Due to a bug, it continues spreading.  Oops...  Hard takeoff.


Again, you implicitly assume that this AGI is so stupid that it makes 
a copy of itself and inserts it into a virus when asked to make an 
experimental virus.  Any system that stupid does not have a general 
intelligence, and will never cause a hard takeoff because an absolute 
prerequisite for hard takeoff is that the system have the wits to know 
about these kinds of no-brainer [:-)] questions.


This kind of Stupid-AGI scenario comes up all the time - the SL4 list 
was absolutely full of them, when last I was wasting my time over there, and 
when I last encountered anyone from SIAI they were still spouting them 
all the time without the slightest understanding of the incoherence of 
what they were saying.






Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  Maybe you can
  program it with a moral code, so it won't write malicious code.  But the
 two
  sides of the security problem require almost identical skills.  Suppose
 you
  ask the AGI to examine some operating system or server software to look
 for
  security flaws.  Is it supposed to guess whether you want to fix the flaws
 or
  write a virus?
 
 If it has a moral code (it does) then why on earth would it have to 
 guess whether you want it to fix the flaws or write the virus?  By asking 
 that question you are implicitly assuming that this AGI is not an AGI 
 at all, but something so incredibly stupid that it cannot tell the 
 difference between these two -- so if you make that assumption we have 
 nothing to worry about, because it would be too stupid to be a general 
 intelligence and therefore not even potentially dangerous.

If I hired you as a security analyst to find flaws in a piece of software, and
I didn't tell you what I was going to do with the information, how would you
know?

  Suppose you ask it to write a virus for the legitimate purpose of testing
 the
  security of your system.  It downloads copies of popular software from the
  internet and analyzes it for vulnerabilities, finding several.  As
 instructed,
  it writes a virus, a modified copy of itself running on the infected
 system. 
  Due to a bug, it continues spreading.  Oops...  Hard takeoff.
 
 Again, you implicitly assume that this AGI is so stupid that it makes 
 a copy of itself and inserts it into a virus when asked to make an 
 experimental virus.  Any system that stupid does not have a general 
 intelligence, and will never cause a hard takeoff because an absolute 
 prerequisite for hard takeoff is that the system have the wits to know 
 about these kinds of no-brainer [:-)] questions.

Mistakes happen. http://en.wikipedia.org/wiki/Morris_worm

If you perform 1000 security tests and 999 of them shut down when they are
supposed to, then you have still failed.

Software correctness is undecidable -- the halting problem reduces to it. 
Computer security isn't going to be magically solved by AGI.  The problem will
actually get worse, because complex systems are harder to get right.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

This whole scenario is filled with unjustified, unexamined assumptions.

For example, you suddenly say "I foresee a problem when the collective 
computing power of the network exceeds the collective computing power of 
the humans that administer it.  Humans will no longer be able to keep up 
with the complexity of the system..."


Do you mean collective intelligence?  Because if you mean collective 
computing power I cannot see what measure you are using (my laptop has 
greater computing power than me already, because it can do more 
arithmetic sums in one second than I have done in my life so far).  And 
either way, this comes right after a great big AND THEN A MIRACLE 
HAPPENS step ...!  You were talking about lots of dumb, specialized 
agents distributed around the world, and then all of a sudden you start 
talking as if they could be intelligent.  Why should anyone believe they 
would spontaneously do that?  First they are agents, then all of a 
sudden they are AGIs and they leave us behind:  I see no reason to allow 
that step in the argument.


In short, it looks like an even bigger non sequitur than before.


Yes, I mean collective intelligence.  The miracle is that any interface to
the large network of simple machines will appear intelligent, in the same way
that Google can make a person appear to know a lot more than they do.  It is
hard to predict what this collective intelligence will do, in the same way as
it is hard to predict human behavior by studying individual neurons.

I don't know if my outline for an infrastructure for AGI will be built as I
designed it, but I believe something like it WILL be built, probably ad-hoc
and very complex, because it has economic value.


This argument is *exactly* the same as an old, old argument that 
appeared in science fiction stories back in the early 20th century: 
some people believed that the telephone network might get one connection 
too many and suddenly wake up and be intelligent.


I do not believe you have any more justification for assuming that a set 
of dumb computers will suddenly become more than the sum of their 
collective dumbness.


The brain consists of many dumb neurons that, collectively, make 
something intelligent.  But it is not the mere fact of them being all in 
the same place at the same time that makes the collective intelligent, 
it is their organization.  Organization is everything.  You must 
demonstrate some reason why the collective net of dumb computers will be 
intelligent:  it is not enough to simply assert that they will, or 
might, become intelligent.


If you had some specific line of reasoning to show that the right 
organization could be given to them, then I will show you that the same 
organization will be put into some other set of computers, deliberately, 
under the control of the factors that I previously described, and that 
this will happen long before the general network gets that organization.




Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 This whole scenario is filled with unjustified, unexamined assumptions.
 
 For example, you suddenly say I foresee a problem when the collective 
 computing power of the network exceeds the collective computing power of 
 the humans that administer it.  Humans will no longer be able to keep up 
 with the complexity of the system...
 
 Do you mean collective intelligence?  Because if you mean collective 
 computing power I cannot see what measure you are using (my laptop has 
 greater computing power than me already, because it can do more 
 arithmetic sums in one second than I have done in my life so far).  And 
 either way, this comes right after a great big AND THEN A MIRACLE 
 HAPPENS step ...!  You were talking about lots of dumb, specialized 
 agents distributed around the world, and then all of a sudden you start 
 talking as if they could be intelligent.  Why should anyone believe they 
 would spontaneously do that?  First they are agents, then all of a 
 sudden they are AGIs and they leave us behind:  I see no reason to allow 
 that step in the argument.
 
 In short, it looks like an even bigger non sequitur than before.

Yes, I mean collective intelligence.  The miracle is that any interface to
the large network of simple machines will appear intelligent, in the same way
that Google can make a person appear to know a lot more than they do.  It is
hard to predict what this collective intelligence will do, in the same way as
it is hard to predict human behavior by studying individual neurons.

I don't know if my outline for an infrastructure for AGI will be built as I
designed it, but I believe something like it WILL be built, probably ad-hoc
and very complex, because it has economic value.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
You must 
demonstrate some reason why the collective net of dumb computers will be 
intelligent:  it is not enough to simply assert that they will, or 
might, become intelligent.


The intelligence comes from an infrastructure that routes messages to the
right experts.  I know it is hard to imagine because distributed search
engines haven't been built yet, but it is similar to the way that Google makes
people appear smarter.  In my thesis I investigated whether distributed search
scales to large networks, and it does. http://cs.fit.edu/~mmahoney/thesis.html


Your analogy to people appearing smarter because they can use Google 
simply does not apply to the case you propose.


You suggest that a collection of *sub-intelligent* (this is crucial) 
computer programs can add up to full intelligence just in virtue of their 
existence.


This is not the same as a collection of *already-intelligent* humans 
appearing more intelligent because they have access to a lot more 
information than they did before.


[dumb machine] + Google = dumb machine.

[smart human] + Google = smarter human.

1) There is every reason to believe that a human intelligence could 
become smarter as a result of having quick access to an internet 
knowledgebase.


2) There is absolutely no reason to believe that a bunch of 
sub-intelligent computers will get up over the threshold and become 
intelligent, just because they have access to an internet knowledgebase.


You have work to do (a lot of work!) to persuade us to accept the idea 
contained in (2).


This is similar to the machine-translation fiasco in the 1960s:  they 
believed that the only thing standing in the way of a full-up 
translation system was lots of good dictionary lookup.  It simply was 
not true:  a dictionary maketh not a mind.


As for you last comment that The intelligence comes from an 
infrastructure that routes messages to the right experts  this 
simply begs the question. If the infrastructure were smart enough to 
always know how to find the right expert, the infrastructure would BE 
the intelligence, and the experts that it finds would just be a bunch 
of dictionaries or subcomponents.  You are implicitly assuming 
intelligence in that infrastructure, without showing where the 
intelligence comes from.  Certainly you give no reason why we should 
believe that the infrastructure would spontaneously become intelligent 
without us doing a lot of work.


If we knew how to put the intelligence into that infrastructure we 
would know how to put it into other places, and then (once again) we are 
back to the scenario that I discussed, where someone has explicitly 
figured out how to build an intelligence, and then deliberately chooses 
what to do with it (i.e., there is no accidental emergence, beyond human 
control).



Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 You suggest that a collection of *sub-intelligent* (this is crucial) 
 computer programs can add up to full intelligence just in virtue of their 
 existence.
 
 This is not the same as a collection of *already-intelligent* humans 
 appearing more intelligent because they have access to a lot more 
 information than they did before.
 
 [dumb machine] + Google = dumb machine.
 
 [smart human] + Google = smarter human.

My point of concern is when individual machines (not the whole network) exceed
individual brains in intelligence.  They can't yet, but they will.  Google
already knows more than any human, and can retrieve the information faster,
but it can't launch a singularity.  When your computer can write and debug
software faster and more accurately than you can, then you should worry.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
You suggest that a collection of *sub-intelligent* (this is crucial) 
computer programs can add up to full intelligence just in virtue of their 
existence.


This is not the same as a collection of *already-intelligent* humans 
appearing more intelligent because they have access to a lot more 
information than they did before.


[dumb machine] + Google = dumb machine.

[smart human] + Google = smarter human.


My point of concern is when individual machines (not the whole network) exceed
individual brains in intelligence.  They can't yet, but they will.  Google
already knows more than any human, and can retrieve the information faster,
but it can't launch a singularity.  When your computer can write and debug
software faster and more accurately than you can, then you should worry.


I think this conversation is going nowhere:  your above paragraph once 
again ignores everything I have said up to now.


No computer is going to start writing and debugging software faster and 
more accurately than we can UNLESS we design it to do so, and during the 
design process we will have ample opportunity to ensure that the machine 
will never be able to pose a danger of any kind.




Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are 
Nightmare Scenarios) is that the vast majority of them involve 
completely untenable assumptions.  One example is the idea that there 
will be a situation in the world in which there are many 
superintelligent AGIs in the world, all competing with each other for 
power in a souped up version of today's arms race(s).  This is 
extraordinarily unlikely:  the speed of development would be such that 
one would have an extremely large time advantage (head start) on the 
others, and during that time it would merge the others with itself, to 
ensure that there was no destructive competition.  Whichever way you try 
to think about this situation, the same conclusion seems to emerge.


As a counterexample, I offer evolution.  There is good evidence that every
living thing evolved from a single organism: all DNA is twisted in the same
direction.


I don't understand how this relates to the above in any way, never mind 
how it amounts to a counterexample.




Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Randall Randall

On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which  
are Nightmare Scenarios) is that the vast majority of them  
involve completely untenable assumptions.  One example is the  
idea that there will be a situation in the world in which there  
are many superintelligent AGIs in the world, all competing with  
each other for power in a souped up version of today's arms race 
(s).  This is extraordinarily unlikely:  the speed of development  
would be such that one would have an extremely large time  
advantage (head start) on the others, and during that time it  
would merge the others with itself, to ensure that there was no  
destructive competition.  Whichever way you try to think about  
this situation, the same conclusion seems to emerge.
As a counterexample, I offer evolution.  There is good evidence  
that every
living thing evolved from a single organism: all DNA is twisted in  
the same

direction.


I don't understand how this relates to the above in any way, never  
mind how it amounts to a counterexample.


If you're actually arguing against the possibility of more than
one individual superintelligent AGI, then you need to either
explain how such an individual could maintain coherence over
indefinitely long delays (speed of light) or just say up front
that you expect magic physics.

If you're arguing that even though individuals will emerge,
there will be no evolution, then Matt's counterexample applies
directly.

--
Randall Randall [EMAIL PROTECTED]
If we have matter duplicators, will each of us be a sovereign
 and possess a hydrogen bomb? -- Jerry Pournelle




Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  The problem with the scenarios that people imagine (many of which are 
  Nightmare Scenarios) is that the vast majority of them involve 
  completely untenable assumptions.  One example is the idea that there 
  will be a situation in the world in which there are many 
  superintelligent AGIs in the world, all competing with each other for 
  power in a souped up version of today's arms race(s).  This is 
  extraordinarily unlikely:  the speed of development would be such that 
  one would have an extremely large time advantage (head start) on the 
  others, and during that time it would merge the others with itself, to 
  ensure that there was no destructive competition.  Whichever way you try 
  to think about this situation, the same conclusion seems to emerge.
  
  As a counterexample, I offer evolution.  There is good evidence that every
  living thing evolved from a single organism: all DNA is twisted in the
 same
  direction.
 
 I don't understand how this relates to the above in any way, never mind 
 how it amounts to a counterexample.

Because recursive self improvement is a competitive evolutionary process even
if all agents have a common ancestor.  An agent making modified copies of
itself cannot be sure that the copies will be better adapted to future
environments, because the parent cannot perfectly predict those environments. 
The process must therefore be experimental.  Evolution will favor agents that
are better at acquiring computational resources, regardless of what initial
goals we give them.  Maybe the first million generations will be friendly, but
that might only be a few hours.
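
A purely illustrative sketch of the selection dynamic described above (all names and numbers below are invented, not part of any actual proposal): agents make slightly modified copies of themselves, selection keeps whichever copies are best at acquiring resources, and a "friendliness" parameter that selection never looks at simply drifts while the resource-acquisition trait ratchets upward.

    import random

    # Toy model only: each agent has a resource-acquisition trait ("grab")
    # and an initial goal parameter ("friendly") that selection ignores.
    def simulate(generations=200, population=100, mutation=0.05):
        agents = [{"grab": 1.0, "friendly": 1.0} for _ in range(population)]
        for _ in range(generations):
            children = []
            for parent in agents:
                for _ in range(2):  # modified copies; the parent cannot predict the future
                    children.append({k: v + random.gauss(0, mutation)
                                     for k, v in parent.items()})
            # Selection sees only resource acquisition, never "friendly".
            children.sort(key=lambda a: a["grab"], reverse=True)
            agents = children[:population]
        mean = lambda key: sum(a[key] for a in agents) / len(agents)
        return mean("grab"), mean("friendly")

    if __name__ == "__main__":
        grab, friendly = simulate()
        print("mean resource-acquisition trait:", round(grab, 2))
        print("mean unselected 'friendliness' trait:", round(friendly, 2))

The only point of the toy model is that whatever trait the fitness test measures is the one that grows; the initial goals are free to wander.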



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore

Randall Randall wrote:

On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which 
are Nightmare Scenarios) is that the vast majority of them involve 
completely untenable assumptions.  One example is the idea that 
there will be a situation in the world in which there are many 
superintelligent AGIs in the world, all competing with each other 
for power in a souped up version of today's arms race(s).  This is 
extraordinarily unlikely:  the speed of development would be such 
that one would have an extremely large time advantage (head start) 
on the others, and during that time it would merge the others with 
itself, to ensure that there was no destructive competition.  
Whichever way you try to think about this situation, the same 
conclusion seems to emerge.
As a counterexample, I offer evolution.  There is good evidence that 
every
living thing evolved from a single organism: all DNA is twisted in 
the same

direction.


I don't understand how this relates to the above in any way, never 
mind how it amounts to a counterexample.


If you're actually arguing against the possibility of more than
one individual superintelligent AGI, then you need to either
explain how such an individual could maintain coherence over
indefinitely long delays (speed of light) or just say up front
that you expect magic physics.

If you're arguing that even though individuals will emerge,
there will be no evolution, then Matt's counterexample applies
directly.


I was talking about early development of AGI on this planet, and I was 
specifically addressing the idea (frequently repeated) that there will 
be a phase when all kinds of groups will separately develop AGIs that 
make it to full human+ intelligence.  The assumption attached to this 
idea is that these AGIs will each obey their own imperatives, working in 
a competitive way for themselves or their owners, thereby landing us in 
a situation where these things would be duking it out with one another.


That is not the same as the situation you raise, which is the question 
of what comes much later when the AGI(s) on this planet (if there are 
any) start moving outward to other bodies.


1)  Considering my scenario first, the argument rests on (A) how fast 
the AGIs will develop, and (B) whether they will be driven by the same 
forces that lead to evolutionary pressure.


A) Development curve.  In the case of all human-driven arms races, the 
curve of development is driven by intelligences that are all 
approximately the same level (i.e. humans), and feeding on roughly the 
same pool of knowledge.  Because of this the development curves are very 
close to one another and have roughly the same slope:  nobody ever gets 
a killer advantage that lets them overcome all competition in one 
sudden coup.


However, in the case of AGI development the situation is completely 
different because the intelligences that are the drivers of 
technological progress are not all at the same level.  If country or 
organization A gets an AGI program operating five years before B gets 
theirs started, and if the program yields an AGI that starts to go 
superintelligent over the course of a few months in (say) 2010, then the 
rival B program is rendered completely invalid if its own peak is not 
due to occur until a few years later:  by the time A has gone through 
its peak, the drivers of its technology will be 1000 times faster than 
B's, so long before B can catch up, A's AGI system will quietly take 
over B's programme.  This is all to do with the shape of the development 
curve:  a sudden spike to 1000x intelligence is something that has NEVER 
occurred in the history of human arms races.
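
A quick numerical sketch of that point (the growth rate and the five-year figure below are invented purely for illustration): if each project's rate of progress is proportional to the intelligence it already commands, a fixed head start turns into a roughly constant multiplicative gap of exp(rate x lead), rather than the small additive lead familiar from human arms races.

    import math

    # Illustration only: capability grows as dI/dt = r * I once a project starts,
    # so I(t) = exp(r * t).  Project A starts "lead" years before project B.
    def capability(years_running, rate=1.5):
        return math.exp(rate * max(years_running, 0.0))

    lead = 5.0  # invented head start, echoing the example above
    for year in (5, 6, 7, 8):
        ratio = capability(year) / capability(year - lead)
        print(f"year {year}: A is {ratio:,.0f}x further along than B")

With these made-up numbers the leader stays roughly 1,800 times further along at every point after B starts, which is the sense in which B's later peak never gets the chance to matter.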


There are many other arguments that bear on the question of what the 
first AGI will look like, so for the sake of brevity I will just sketch what 
I believe to be the conclusion from all those other arguments:  the 
simplest AGI design will be the one that gets there first, and the 
simplest design that is actually capable of understanding the design of 
intelligent systems (an absolute prerequisite for an AGI to recursively 
self-improve) is one that has the most balanced, human-empathic 
motivational system.  The conclusion is that the first AGI will almost 
certainly be one that is in tune with human motivations in a broad-based 
way ... willingly locked into a state in which its morals and desires 
(and everything else that matters for the friendliness question) are in 
sync with those of the human species as a whole.  As a result, this 
first AGI will naturally move to ensure that other AGI projects do not 
yield dangerous AGI systems.


B) Evolution.  For evolutionary pressure to manifest itself, there are 
some prerequisites.  The individuals must compete for resources 
according to some criterion that captures their degree of success in 
this competition.  The 

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are 
Nightmare Scenarios) is that the vast majority of them involve 
completely untenable assumptions.  One example is the idea that there 
will be a situation in the world in which there are many 
superintelligent AGIs in the world, all competing with each other for 
power in a souped up version of today's arms race(s).  This is 
extraordinarily unlikely:  the speed of development would be such that 
one would have an extremely large time advantage (head start) on the 
others, and during that time it would merge the others with itself, to 
ensure that there was no destructive competition.  Whichever way you try 
to think about this situation, the same conclusion seems to emerge.

As a counterexample, I offer evolution.  There is good evidence that every
living thing evolved from a single organism: all DNA is twisted in the

same

direction.
I don't understand how this relates to the above in any way, never mind 
how it amounts to a counterexample.


Because recursive self improvement is a competitive evolutionary process even
if all agents have a common ancestor.


As explained in parallel post:  this is a non sequitur.


An agent making modified copies of
itself cannot be sure that the copies will be better adapted to future
environments


Adaptation?  What adaptation?  See parallel post.

because the parent cannot perfectly predict those environments. 
The process must therefore be experimental.  Evolution will favor 


Evolution will not apply.  See parallel post.


agents that
are better at acquiring computational resources


Nonsense.  Only if 'acquiring more computational resources' conveys 
advantage in a competitive environment.  Even if there were some 
competition (which there would not be), there is no reason to believe 
that acquiring more computational resources would be the success measure.



regardless of what initial
goals we give them.  Maybe the first million generations will be friendly, but
that might only be a few hours.


Everything you say is built on wild and completely unexamined 
assumptions, all of which (on examination) turn out to be deeply 
implausible.




Richard Loosemore



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Matt Mahoney wrote:
  Because recursive self improvement is a competitive evolutionary process
 even
  if all agents have a common ancestor.
 
  As explained in parallel post:  this is a non sequitur.

OK, consider a network of agents, such as my proposal,
http://www.mattmahoney.net/agi.html
The design is an internet-wide system of narrow, specialized agents and an
infrastructure that routes (natural language) messages to the right experts. 
Cooperation with humans and other agents is motivated by an economy that
places negative value on information.  Agents that provide useful services and
useful information (in the opinion of other agents) gain storage space and
network bandwidth by having their messages stored and forwarded.  Although
agents compete for resources, the network is cooperative in the sense of
sharing knowledge.

Security is a problem in any open network.  I addressed some of these issues
in my proposal.  To prevent DoS attacks and vandalism, the protocol does not
provide a means to delete or modify messages once they are posted.  Agents
will be administered by humans who independently establish policies on which
messages to accept or ignore.  A likely policy is to ignore messages from
agents whose return address can't be verified, or messages unrelated to the
interests of the owner (as determined by keyword matching).  There is an
economic incentive to not send spam, viruses, false information, etc., because
malicious agents will tend to be blocked and isolated.  Agents will share
knowledge about other agents and gain a reputation by consensus.
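
The routing and reputation policies sketched above can be made concrete; the following is only a hypothetical illustration of that kind of policy (the class names, thresholds, and keyword matching are invented here, not taken from the proposal at http://www.mattmahoney.net/agi.html): an agent drops messages from unverified or ill-reputed senders, keeps those matching its owner's interests, and forwards queries to whichever known specialist matches best.

    # Hypothetical sketch; all names and thresholds are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Message:
        sender: str
        text: str
        verified: bool          # could the return address be verified?

    @dataclass
    class Agent:
        name: str
        interests: set
        reputation: dict = field(default_factory=dict)  # consensus scores of other agents
        peers: list = field(default_factory=list)       # known specialist agents

        def accept(self, msg: Message) -> bool:
            # Policy: ignore unverifiable senders, senders with a bad consensus
            # reputation, and messages unrelated to the owner's interests.
            if not msg.verified:
                return False
            if self.reputation.get(msg.sender, 0.0) < 0.0:
                return False
            return bool(self.interests & set(msg.text.lower().split()))

        def route(self, msg: Message):
            # Forward to the peer whose declared interests best match the keywords.
            words = set(msg.text.lower().split())
            best = max(self.peers, key=lambda p: len(p.interests & words), default=None)
            return best.name if best and (best.interests & words) else None

    if __name__ == "__main__":
        chess = Agent("chess-expert", {"chess", "openings"})
        weather = Agent("weather-expert", {"weather", "forecast", "rain"})
        hub = Agent("hub", {"chess", "weather", "forecast", "rain"},
                    reputation={"alice": 1.0}, peers=[chess, weather])
        query = Message("alice", "what is the forecast for rain tomorrow", verified=True)
        if hub.accept(query):
            print("routed to:", hub.route(query))   # -> weather-expert

Run as-is, the hub accepts the verified query and hands it to the weather specialist; the acceptance and routing policies themselves are cheap to state mechanically.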

I foresee a problem when the collective computing power of the network exceeds
the collective computing power of the humans that administer it.  Humans will
no longer be able to keep up with the complexity of the system.  When your
computer says please run this program to protect your computer from the
Singularity worm, how do you know you aren't actually installing the worm?

I would be interested in alternative AGI proposals that solve this problem of
humans being left behind, but I am not hopeful that there is a solution.  When
machines achieve superhuman intelligence, humans will lack the cognitive power
to communicate with them effectively.  An AGI talking to you would be like you
talking to your dog.  I suppose that uploading and brain augmentation would be
solutions, but then we wouldn't really be human anymore.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

Matt Mahoney wrote:

Because recursive self improvement is a competitive evolutionary process

even

if all agents have a common ancestor.

As explained in parallel post:  this is a non sequitur.


OK, consider a network of agents, such as my proposal,
http://www.mattmahoney.net/agi.html
The design is an internet-wide system of narrow, specialized agents and an
infrastructure that routes (natural language) messages to the right experts. 
Cooperation with humans and other agents is motivated by an economy that

places negative value on information.  Agents that provide useful services and
useful information (in the opinion of other agents) gain storage space and
network bandwidth by having their messages stored and forwarded.  Although
agents compete for resources, the network is cooperative in the sense of
sharing knowledge.

Security is a problem in any open network.  I addressed some of these issues
in my proposal.  To prevent DoS attacks and vandalism, the protocol does not
provide a means to delete or modify messages once they are posted.  Agents
will be administered by humans who independently establish policies on which
messages to accept or ignore.  A likely policy is to ignore messages from
agents whose return address can't be verified, or messages unrelated to the
interests of the owner (as determined by keyword matching).  There is an
economic incentive to not send spam, viruses, false information, etc., because
malicious agents will tend to be blocked and isolated.  Agents will share
knowledge about other agents and gain a reputation by consensus.

I foresee a problem when the collective computing power of the network exceeds
the collective computing power of the humans that administer it.  Humans will
no longer be able to keep up with the complexity of the system.  When your
computer says please run this program to protect your computer from the
Singularity worm, how do you know you aren't actually installing the worm?

I would be interested in alternative AGI proposals that solve this problem of
humans being left behind, but I am not hopeful that there is a solution.  When
machines achieve superhuman intelligence, humans will lack the cognitive power
to communicate with them effectively.  An AGI talking to you would be like you
talking to your dog.  I suppose that uploading and brain augmentation would be
solutions, but then we wouldn't really be human anymore.


This whole scenario is filled with unjustified, unexamined assumptions.

For example, you suddenly say I foresee a problem when the collective 
computing power of the network exceeds the collective computing power of 
the humans that administer it.  Humans will no longer be able to keep up 
with the complexity of the system...


Do you mean collective intelligence?  Because if you mean collective 
computing power I cannot see what measure you are using (my laptop has 
greater computing power than me already, because it can do more 
arithmetic sums in one second than I have done in my life so far).  And 
either way, this comes right after a great big AND THEN A MIRACLE 
HAPPENS step ...!  You were talking about lots of dumb, specialized 
agents distributed around the world, and then all of a sudden you start 
talking as if they could be intelligent.  Why should anyone believe they 
would spontaneously do that?  First they are agents, then all of a 
sudden they are AGIs and they leave us behind:  I see no reason to allow 
that step in the argument.


In short, it looks like an even bigger non sequitur than before.




Richard Loosemore



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Charles D Hixson

Richard Loosemore wrote:

Matt Mahoney wrote:

...


Matt,

...
As for your larger point, I continue to vehemently disagree with your 
assertion that a singularity will end the human race.


As far as I can see, the most likely outcome of a singularity would be 
exactly the opposite.  Rather than the end of the human race, just 
some changes to the human race that most people would be deliriously 
happy about.



Richard Loosemore


*Some* forms of the singularity would definitely end the human race.  
Others definitely would not, though many of them would dramatically 
change it.  Which one will appear is not certain.  Even among those 
forms of the singularity that are caused by an AGI, this remains true.


It's also true that just which forms fall into which category depends 
partially on what you are willing to acknowledge as human, but even 
taking the most conservative normal meaning of the term the above 
statements remain true.


OTOH, there are many events that we would not consider singularity, such 
as a strike by a giant meteor, that would also end the human race.  So 
that is not a distinction of either the technological singularity or of AGI.


To me it appears that the best hope for the future is to work towards a 
positive singularity outcome.  There are certain to be many working on 
projects that may result in a singularity without seriously considering 
whether it will or will not be positive.  And others working towards a 
destructive singularity, but planning to control it.  I may not think I 
have much chance of success, but I can at least be *trying* to yield a 
positive outcome.
(Objectively, I rate my chances of success as minimal.  I'm hoping to 
come up with an intelligent assistant that will have a mode of 
operation similar to Eliza [but with *much* deeper understanding, that's 
not asking for much] in the sense of being a conversationalist...someone 
that one can talk things over with.  Totally loyal to the employer...but 
with a moral code.  So far I haven't done very well, but if I am 
successful, perhaps I can decrease the percentage of sociopaths.)




Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Richard Loosemore

Charles D Hixson wrote:

Richard Loosemore wrote:

Matt Mahoney wrote:

...


Matt,

...
As for your larger point, I continue to vehemently disagree with your 
assertion that a singularity will end the human race.


As far as I can see, the most likely outcome of a singularity would be 
exactly the opposite.  Rather than the end of the human race, just 
some changes to the human race that most people would be deliriously 
happy about.



Richard Loosemore


*Some* forms of the singularity would definitely end the human race.  
Others definitely would not, though many of them would dramatically 
change it.  Which one will appear is not certain.  Even among those 
forms of the singularity that are caused by an AGI, this remains true.


Theoretically yes, but behind my comment was a deeper analysis (which I 
have posted before, I think) according to which it will actually be very 
difficult for a negative-outcome singularity to occur.


I was really trying to make the point that a statement like The 
singularity WILL end the human race is completely ridiculous.  There is 
no WILL about it.


The problem with the scenarios that people imagine (many of which are 
Nightmare Scenarios) is that the vast majority of them involve 
completely untenable assumptions.  One example is the idea that there 
will be a situation in the world in which there are many 
superintelligent AGIs in the world, all competing with each other for 
power in a souped up version of today's arms race(s).  This is 
extraordinarily unlikely:  the speed of development would be such that 
one would have an extremely large time advantage (head start) on the 
others, and during that time it would merge the others with itself, to 
ensure that there was no destructive competition.  Whichever way you try 
to think about this situation, the same conclusion seems to emerge.


This argument needs more detail, but the important point is that there 
*is* an argument.




Richard Loosemore.






It's also true that just which forms fall into which category depends 
partially on what you are willing to acknowledge as human, but even 
taking the most conservative normal meaning of the term the above 
statements remain true.


OTOH, there are many events that we would not consider singularity, such 
as a strike by a giant meteor, that would also end the human race.  So 
that is not a distinction of either the technological singularity or of 
AGI.


To me it appears that the best hope for the future is to work towards a 
positive singularity outcome.  There are certain to be many working on 
projects that may result in a singularity without seriously considering 
whether it will or will not be positive.  And others working towards a 
destructive singularity, but planning to control it.  I may not think I 
have much chance of success, but I can at least be *trying* to yield a 
positive outcome.
(Objectively, I rate my chances of success as minimal.  I'm hoping to 
come up with an intelligent assistant that will have a mode of 
operation similar to Eliza [but with *much* deeper understanding, that's 
not asking for much] in the sense of being a conversationalist...someone 
that one can talk things over with.  Totally loyal to the employer...but 
with a moral code.  So far I haven't done very well, but if I am 
successful, perhaps I can decrease the percentage of sociopaths.)




Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 The problem with the scenarios that people imagine (many of which are 
 Nightmare Scenarios) is that the vast majority of them involve 
 completely untenable assumptions.  One example is the idea that there 
 will be a situation in the world in which there are many 
 superintelligent AGIs in the world, all competing with each other for 
 power in a souped up version of today's arms race(s).  This is 
 extraordinarily unlikely:  the speed of development would be such that 
 one would have an extremely large time advantage (head start) on the 
 others, and during that time it would merge the others with itself, to 
 ensure that there was no destructive competition.  Whichever way you try 
 to think about this situation, the same conclusion seems to emerge.

As a counterexample, I offer evolution.  There is good evidence that every
living thing evolved from a single organism: all DNA is twisted in the same
direction.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Matt Mahoney
--- Samantha Atkins [EMAIL PROTECTED] wrote:
  In http://www.mattmahoney.net/singularity.html I discuss how a  
  singularity
  will end the human race, but without judgment whether this is good  
  or bad.
  Any such judgment is based on emotion.
 
 Really?  I can think of arguments why this would be a bad thing  
 without even referencing the fact that I am human and do not wish to  
 die.   That wish is not equivalent to an emotion if you consider it,  
 as you appear to have done above, as one of your deepest goals.  Goals  
 per se do not equate to emotion.

I was equating emotion to those goals which are programmed into your brain, as
opposed to learned subgoals.  For example, hunger is an emotion, but the
desire for money to buy food is not.  In that context, you cannot distinguish
between good and bad without reference to hardcoded goals, such as fear of
death.
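
That distinction between built-in drives and learned subgoals is easy to caricature in code; the sketch below is only an illustration of the distinction itself (the names and numbers are invented, and it is not a claim about how brains or any particular AGI work): the reward attached to eating is fixed, while the value attached to money is learned because money keeps being followed by food.

    # Illustration only: a hardcoded drive versus a learned instrumental subgoal.
    HARDCODED_REWARD = {"eat": 1.0}   # built in, analogous to hunger relief
    learned_value = {"money": 0.0}    # instrumental value, acquired from experience
    ALPHA = 0.1                       # learning rate

    def experience(state, next_state):
        """Nudge the learned value of 'state' toward the reward that follows it."""
        reward = HARDCODED_REWARD.get(next_state, 0.0)
        if state in learned_value:
            learned_value[state] += ALPHA * (reward - learned_value[state])

    # Money is repeatedly followed by eating, so its learned value rises,
    # even though nothing about money is rewarded directly.
    for _ in range(50):
        experience("money", "eat")

    print("hardcoded reward for eating:", HARDCODED_REWARD["eat"])
    print("learned value of money:     ", round(learned_value["money"], 2))

In these terms, the "emotions" of the paragraph above are the entries in the hardcoded table, and the desire for money is an entry that only exists because experience wrote it there.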


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Mark Waser

For example, hunger is an emotion, but the
desire for money to buy food is not


Hunger is a sensation, not an emotion.

The sensation is unpleasant and you have a hard-coded goal to get rid of it.

Further, desires tread pretty close to the line of emotions if not actually 
crossing over . . . .





Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Richard Loosemore

Matt Mahoney wrote:

--- Samantha Atkins [EMAIL PROTECTED] wrote:
In http://www.mattmahoney.net/singularity.html I discuss how a  
singularity
will end the human race, but without judgment whether this is good  
or bad.

Any such judgment is based on emotion.
Really?  I can think of arguments why this would be a bad thing  
without even referencing the fact that I am human and do not wish to  
die.   That wish is not equivalent to an emotion if you consider it,  
as you appear to have done above, as one of your deepest goals.  Goals  
per se do not equate to emotion.


I was equating emotion to those goals which are programmed into your brain, as
opposed to learned subgoals.  For example, hunger is an emotion, but the
desire for money to buy food is not.  In that context, you cannot distinguish
between good and bad without reference to hardcoded goals, such as fear of
death.


Matt,

This usage of emotion is idiosyncratic and causes endless confusion.

Hunger is not an emotion but a motivation.  It is certainly true that 
there is a grey area between the two, but in the case that you are 
discussing here, it is clear that you are talking about motivations or 
drives.


As for your larger point, I continue to vehemently disagree with your 
assertion that a singularity will end the human race.


As far as I can see, the most likely outcome of a singularity would be 
exactly the opposite.  Rather than the end of the human race, just some 
changes to the human race that most people would be deliriously happy about.



Richard Loosemore



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt,
 
 This usage of emotion is idiosyncratic and causes endless confusion.

You're right.  I didn't mean for the discussion to devolve into a disagreement
over definitions.

 As for your larger point, I continue to vehemently disagree with your 
 assertion that a singularity will end the human race.
 
 As far as I can see, the most likely outcome of a singularity would be 
 exactly the opposite.  Rather than the end of the human race, just some 
 changes to the human race that most people would be deleriously happy about.

These are the same thing.  Happiness is just a matter of reprogramming the
brain.

Or maybe we disagree on what is human?

A singularity is an optimization process whose utility function is the
acquisition of computing resources.  It could be a Dyson sphere with atomic
level computing elements.  It may or may not have a copy of your memories.  It
won't always be happy, because happiness is not fitness.




-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Eliezer S. Yudkowsky

Joshua Fox wrote:

  Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.


Ramanujan, like many other great mathematicians and achievers, died 
young. There are on the other hand many great mathematicians and 
achievers that lived to old age. I dare not say whether it is 
dangerous to be a genius without access to more complete statistics.

-- Kai-Mikael Jää-Aro
- http://www.nada.kth.se/~kai/lectures/geb.html

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Daniel Allen
Regarding the suicide rates of geniuses or those with high intelligence, I
wouldn't be concerned:

  Berman says that the intelligence study is less useful than those that
 point to *risk factors like divorce or unemployment*. ''It's not as if I'm
 going to get more worried about my less intelligent patients versus my more
 intelligent patients.''

 After all, the ''Comprehensive Textbook of Suicidology,'' published in
 2000 and coedited by Berman, lists at least *62 independent risk factors
 for suicide*, including mental disorders, alcoholism, substance abuse,
 social isolation, poor problem-solving, problems with aggression and rage, a
 sense of worthlessness, and a sense of hopelessness.

 *Most of these factors stem from beliefs people hold about their lives and
 the world but--crucially--not from intelligence.* ''IQ can't be changed
 significantly,'' said Thomas Ellis, a psychology professor at Marshall
 University. ''But with therapy, many of these other risk factors can.


 http://www.boston.com/news/globe/ideas/articles/2005/03/20/suicidal_tendencies/?page=2


In the case of Turing, I think it's safe to say the bigger issue was the
chemical castration and its horrible side effects.


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Matt Mahoney

--- Mike Dougherty [EMAIL PROTECTED] wrote:

 On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 

http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
 
  Turing also committed suicide.
 
 That's a personal solution to the Halting problem I do not plan to exercise.
 
  Building a copy of your mind raises deeply troubling issues.  Logically,
 there
 
 Agreed.  If that mind is within acceptable tolerance for human life at
 peak load of 30%(?) of capacity, can it survive hard takeoff?  I
 consider myself reasonably intelligent and perhaps somewhat wise - but
 I would not expect the stresses of thousand-fold improvement in
 throughput would scale out/up.  Even the simplest human foible can
 become an obsessive compulsion that could destabilize the integrity of
 an expanding mind.  I understand this to be related to the issue of
 Friendliness (am I wrong?)

That is not the issue.  There is a philosophical barrier to AGI, not just a
technical one.  The developers kill themselves.  Understanding the mind as a
program is deeply disturbing.  It leads to logical conclusions that conflict
with our most basic instincts.  But how else can you build AGI?

The problem is only indirectly related to friendliness.  Evolution has solved
the NGI (natural general intelligence) problem by giving you the means to make
slightly modified copies of yourself but with no need to understand or control
the process.  This process is not friendly because it satisfies the evolved
supergoal of propagating your DNA, not the subgoals programmed into your brain
like hunger, pain avoidance, sex drive, etc.  NGI is not supposed to make YOU
happy.

Humans are driven by their subgoals to build AGI to (1) serve us and (2)
upload to achieve immortality.  Maybe you can see an ethical dilemma already. 
Does one type of machine have a consciousness and the other not?  If you think
about the problem, you will encounter other difficult questions.  There is a
logical answer, but you won't like it.

 Given a directive to maintain life, hopefully the AI-controlled life
 support system keeps perspective on such logical conclusions.  An AI
 in a nuclear power facility should have the same directive.  I don't
 expect that it shouldn't be allowed to self-terminate (that gives rise
 to issues like slavery) but that it gives notice and transfers
 responsibilities before doing so.

Again, I am referring to the threat to the human builder, not the machine.  If
AGI is developed through recursive self improvement in a competitive,
evolutionary environment, then it will evolve a stable survival instinct. 
Humans have this instinct, but most humans don't think of their brains as
computers, so they never encounter the fundamental conflicts between logic and
emotion.

  In http://www.mattmahoney.net/singularity.html I discuss how a singularity
  will end the human race, but without judgment whether this is good or bad.
  Any such judgment is based on emotion.  Posthuman emotions will be
  programmable.
 
 ... and arbitrary?  Aren't we currently able to program emotions
 (albeit in a primitive pharmaceutical way)?
 
 Who do you expect will have control of that programming?  Certainly
 not the individual.

Correct, because they are weeded out by evolution.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Tyson
I believe that humans have the emotions that we do because of the
environment we evolved in. The more selfish/fearful/emotional you are, the
more likely you are to survive and reproduce. For humans, I think logic is a
sort of tool used to help us achieve happiness. Happiness is the
top-priority goal.

If an AGI emerged from an evolutionary environment similar to the one we
came from, I can understand how these anti-human type ethical problems might
arise.

However, if an AGI were to arise from a different environment, such as one
where AI's who accomplish certain goals are the most successful, then I
believe that emotions, if they will be there at all in the sense that we
think of them, will be used as a sort of tool to assist logic. Accomplishing
those certain goals would be the top-priority goal.

Humans commit suicide when continuing with life becomes too painful for them.
This is because emotions are top priority. They feel as if continuing with
their life would just cause more and more pain, forever. So they kill
themselves. Death gets rid of the pain.

An AGI that does not have emotions as a top priority might see this as
foolish. Sure there is no reason to live, but there is also no reason to
die. If an AGI were to die, it would not be able to work towards
accomplishing its goals. Thus, dying would be a stupid thing to do.

On Jan 20, 2008 3:59 PM, Matt Mahoney  [EMAIL PROTECTED] wrote:


 --- Mike Dougherty [EMAIL PROTECTED] wrote:

  On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
   --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
  
 

 http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
  
   Turing also committed suicide.
 
  That's a personal solution to the Halting problem I do not plan to
 exercise.
 
   Building a copy of your mind raises deeply troubling issues.
  Logically,
  there
 
  Agreed.  If that mind is within acceptable tolerance for human life at
  peak load of 30%(?) of capacity, can it survive hard takeoff?  I
  consider myself reasonably intelligent and perhaps somewhat wise - but
  I would not expect the stresses of thousand-fold improvement in
  throughput would scale out/up.  Even the simplest human foible can
  become an obsessive compulsion that could destabilize the integrity of
  an expanding mind.  I understand this to be related to the issue of
  Friendliness (am I wrong?)

 That is not the issue.  There is a philosophical barrier to AGI, not just
 a
 technical one.  The developers kill themselves.  Understanding the mind as
 a
 program is deeply disturbing.  It leads to logical conclusions that
 conflict
 with our most basic instincts.  But how else can you build AGI?

 The problem is only indirectly related to friendliness.  Evolution has
 solved
 the NGI (natural general intelligence) problem by giving you the means to
 make
 slightly modified copies of yourself but with no need to understand or
 control
 the process.  This process is not friendly because it satisfies the
 evolved
 supergoal of propagating your DNA, not the subgoals programmed into your
 brain
 like hunger, pain avoidance, sex drive, etc.  NGI is not supposed to make
 YOU
 happy.

 Humans are driven by their subgoals to build AGI to (1) serve us and (2)
 upload to achieve immortality.  Maybe you can see an ethical dilemma
 already.
 Does one type of machine have a consciousness and the other not?  If you
 think
 about the problem, you will encounter other difficult questions.  There is
 a
 logical answer, but you won't like it.

  Given a directive to maintain life, hopefully the AI-controlled life
  support system keeps perspective on such logical conclusions.  An AI
  in a nuclear power facility should have the same directive.  I don't
  expect that it shouldn't be allowed to self-terminate (that gives rise
  to issues like slavery) but that it gives notice and transfers
  responsibilities before doing so.

 Again, I am referring to the threat to the human builder, not the machine.
  If
 AGI is developed through recursive self improvement in a competitive,
 evolutionary environment, then it will evolve a stable survival instinct.
 Humans have this instinct, but most humans don't think of their brains as
 computers, so they never encounter the fundamental conflicts between logic
 and
 emotion.

   In http://www.mattmahoney.net/singularity.html I discuss how a
 singularity
   will end the human race, but without judgment whether this is good or
 bad.
   Any such judgment is based on emotion.  Posthuman emotions will be
   programmable.
 
  ... and arbitrary?  Aren't we currently able to program emotions
  (albeit in a primitive pharmaceutical way)?
 
  Who do you expect will have control of that programming?  Certainly
  not the individual.

 Correct, because they are weeded out by evolution.


 -- Matt Mahoney, [EMAIL PROTECTED]


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Daniel Allen
Regarding AGI research as potentially psychologically disturbing, there are
so many other ways to be psychologically disturbed in a postmodern world
that it may not matter :)

It's already hard for a lot of people to have a healthy level of self-esteem
or self-identity, and nihilism is not in short supply in our society.

More positively, the Buddhists have been working on these issues for over
5,000 years:

*The paradox is that what we take to be so real, our selves, is constructed
out of a reaction against just what we do not wish to acknowledge. We tense
up around that which we are denying, and we experience ourselves through our
tensions...*

*Thoughts Without a Thinker* (http://www.webheights.net/lovethyself/mepstein/methink.htm), page 19


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Samantha Atkins


On Jan 19, 2008, at 5:24 PM, Matt Mahoney wrote:


--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:




http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

Turing also committed suicide.


In his case I understand that the British government saw fit to  
sentence him to heavy hormonal medication because they couldn't deal  
with the fact that he was gay.  Arguably that unhinged his libido and  
other aspects of his psychology, was very upsetting and set up his  
suicide.   In his case I think he was slowly murdered by intolerance  
backed by force of law and primitive medicine.





Building a copy of your mind raises deeply troubling issues.   
Logically, there
is no need for it to be conscious; it only needs to appear to others  
to be
conscious.  Also, it need not have the same goals that you do; it is  
easier to
make it happy (or appear to be happy) by changing its goals.   
Happiness does
not depend on its memories; you could change them arbitrarily or  
just delete
them.  It follows logically that there is no reason to live, that  
death is

nothing to fear.



Those of us who have meditated a bit (and/or experimented with  
conscious in other ways in our youth) are aware of how much of our  
vaunted self can be seen as construct and phantasm.   Rarely does  
seeing that alone drive someone over the edge.


Of course your behavior is not governed by this logic.  If you were  
building
an autonomous robot, you would not program it to be happy.  You  
would program
it to satisfy goals that you specify, and you would not allow it to  
change its

own goals, or even to want to change them.


That would depend greatly on how deeply autonomous I wanted it to be.


 One goal would be a self
preservation instinct.  It would fear death, and it would experience  
pain when
injured.  To make it intelligent, you would balance this utility  
against a
desire to explore or experiment by assigning positive utility to  
knowledge.
The resulting behavior would be indistinguishable from free will,  
what we call

consciousness.



I don't think simply avoiding death or injury as counterposed with  
exploring and experimenting is sufficient to arrive at what we  
generally term free will.



This is how evolution programmed your brain.  Your assigned  
supergoal is to

propagate your DNA, then die.  Understanding AI means subverting this
supergoal.



That is a bit blunt and very inaccurate when seen as analogous to giving  
goals to an AI.  Besides, this is not an assigned supergoal.  It is  
just the fitness function applied to a naturally occurring wild GA.   
There is no reason to read more into it than that.


In http://www.mattmahoney.net/singularity.html I discuss how a  
singularity
will end the human race, but without judgment whether this is good  
or bad.

Any such judgment is based on emotion.


Really?  I can think of arguments why this would be a bad thing  
without even referencing the fact that I am human and do not wish to  
die.   That wish is not equivalent to an emotion if you consider it,  
as you appear to have done above, as one of your deepest goals.  Goal  
per se do not equate to emotion.


- samantha



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
Well, Lenat survives...

But he paid people to build his database (Cyc)

What's depressing is trying to get folks to build a commonsense KB for
free ... then you
get confronted with the absolute stupidity of what they enter, and the
poverty and
repetitiveness of their senses of humor... ;-p

ben

On Jan 19, 2008 4:42 PM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

 I guess the moral here is Stay away from attempts to hand-program a
 database of common-sense assertions.

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Bob Mottram
Some thoughts of mine on the article.

   http://streebgreebling.blogspot.com/2008/01/singh-and-mckinstry.html



On 19/01/2008, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

 I guess the moral here is Stay away from attempts to hand-program a
 database of common-sense assertions.

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Bob Mottram
Quality is an issue, but it's really all about volume.  Provided that
you have enough volume the signal stands out from the noise.
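
As a toy illustration of the volume point (the probabilities below are invented): even if each individual contribution is only modestly better than a coin flip, a plain majority vote over enough redundant entries recovers the underlying assertion almost every time.

    import random

    # Illustration only: each contributor labels a true/false assertion
    # correctly with probability p; the knowledge base keeps every redundant vote.
    def majority_accuracy(p=0.6, votes_per_item=51, items=10_000):
        correct = 0
        for _ in range(items):
            yes = sum(1 for _ in range(votes_per_item) if random.random() < p)
            if yes > votes_per_item / 2:
                correct += 1
        return correct / items

    if __name__ == "__main__":
        for n in (1, 11, 51, 201):
            print(f"{n:>3} noisy votes per assertion -> "
                  f"accuracy {majority_accuracy(votes_per_item=n):.3f}")

With p = 0.6 a single vote is right about 60% of the time, while a couple of hundred redundant votes push accuracy past 0.99, which is the sense in which volume lets the signal stand out from the noise.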

The solution is probably to make the knowledge capture into a game or
something that people will do as entertainment.  Possibly the Second
Life approach will provide a new avenue for acquiring commonsense.


On 19/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
 What's depressing is trying to get folks to build a commonsense KB for
 free ... then you
 get confronted with the absolute stupidity of what they enter, and the
 poverty and
 repetitiveness of their senses of humor... ;-p



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Lukasz Kaiser
 This thread has nothing to do with artificial general intelligence -
 please close this thread. Thanks

Sorry, but I have to say that I strongly disagree. There are
many aspects of AGI that are non-technical, and organizing
one's own life while doing AI is certainly one of them. That's
why I think this article is very much on topic here.

- lk



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
On Jan 19, 2008 5:53 PM, a [EMAIL PROTECTED] wrote:
 This thread has nothing to do with artificial general intelligence -
 please close this thread. Thanks

IMO, this thread is close enough to AGI to be list-worthy.

It is certainly true that knowledge-entry is not my preferred
approach to AGI ... I think that it is at best peripheral to any
really serious AGI approach.

However, some serious AGI thinkers, such as Doug Lenat,
believe otherwise.

And, this list is about AGI in general, not about any specific
approaches to AGI.

So, the thread can stay...

-- Ben Goertzel, list owner




 Bob Mottram wrote:
  Quality is an issue, but it's really all about volume.  Provided that
  you have enough volume the signal stands out from the noise.
 
  The solution is probably to make the knowledge capture into a game or
  something that people will do as entertainment.  Possibly the Second
  Life approach will provide a new avenue for acquiring commonsense.
 
 
  On 19/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  What's depressing is trying to get folks to build a commonsense KB for
  free ... then you
  get confronted with the absolute stupidity of what they enter, and the
  poverty and
  repetitiveness of their senses of humor... ;-p
 
 
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
 
 

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87842518-105d7f


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread a
This thread has nothing to do with artificial general intelligence - 
please close this thread. Thanks


Bob Mottram wrote:

Quality is an issue, but it's really all about volume.  Provided that
you have enough volume the signal stands out from the noise.

The solution is probably to make the knowledge capture into a game or
something that people will do as entertainment.  Possibly the Second
Life approach will provide a new avenue for acquiring commonsense.


On 19/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
  

What's depressing is trying to get folks to build a commonsense KB for
free ... then you
get confronted with the absolute stupidity of what they enter, and the
poverty and
repetitiveness of their senses of humor... ;-p



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

  


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87841259-88017e


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread J Storrs Hall, PhD
"Breeds There a Man...?" by Isaac Asimov

On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote:
 
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
 
 I guess the moral here is "Stay away from attempts to hand-program a
 database of common-sense assertions."
 
 -- 
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 
 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87842867-40e15f


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Stephen Reed
The article on the fate of the two AI researchers was interesting.  Perhaps 
many here share their belief that AGI will vastly change the world.  It is, 
however, unfortunate that they did not seek medical help for their symptoms of 
depression - no one needs to suffer that kind of pain.  They were so young.

Regarding the striking similarity between their approaches to AI: MindPixel was 
commercial, so I never looked at it, but I did look at the OpenMind/ConceptNet 
content while at Cycorp for possible import into Cyc.  The chief error that 
OpenMind made was that the web forms did not perform a semantic analysis of the 
input, and therefore it was not possible to filter out the ill-formed, 
sarcastic, or false statements.  In my own work, I hope to motivate a multitude 
of volunteers to interact with a compelling, intelligent English dialog system. 
 My work will acquire knowledge and skills as logical statements based upon the 
ontology of OpenCyc.  Meta assertions can attach an optional belief probability 
when appropriate.
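
As a rough illustration of the intake pipeline described above, here is a
hedged Python sketch: a stub semantic check rejects ill-formed input, and a
surviving statement is stored with an optional belief probability attached as
a meta-assertion. The parse() stub and the data structures are assumptions
for the example, not Texai or Cyc code.

# Hedged sketch: filter ill-formed input before it reaches the knowledge
# base, and optionally attach a belief probability to what gets through.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Assertion:
    logical_form: str                # e.g. "(isa Dog Mammal)" in some ontology
    source: str                      # which volunteer / dialog turn it came from
    belief: Optional[float] = None   # optional belief-probability meta-assertion

def parse(utterance: str) -> Optional[str]:
    """Stub semantic analysis: return a logical form, or None if ill-formed."""
    utterance = utterance.strip()
    if not utterance or utterance.endswith("?"):   # toy rejection rule
        return None
    return f"(assertion \"{utterance}\")"

def ingest(utterance: str, source: str, belief: Optional[float] = None):
    logical_form = parse(utterance)
    if logical_form is None:
        return None   # filtered out instead of polluting the KB
    return Assertion(logical_form, source, belief)

# ingest("Dogs are mammals", source="volunteer-42", belief=0.95)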

The positive, confirming result that I take away from both MindPixel and 
OpenMind is that volunteers performed several million interactions with their 
rudimentary interfaces.  I will be following that path too.

I'll make a further announcement about my dialog system in a separate post to 
keep this thread on topic.

-Steve
 
Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, January 19, 2008 3:49:55 PM
Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide

 Well, Lenat survives...

But he paid people to build his database (Cyc)

What's depressing is trying to get folks to build a commonsense KB for
free ... then you
get confronted with the absolute stupidity of what they enter, and the
poverty and
repetitiveness of their senses of humor... ;-p

ben

On Jan 19, 2008 4:42 PM, Eliezer S. Yudkowsky [EMAIL PROTECTED]  wrote:
  
 http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

 I guess the moral here is "Stay away from attempts to hand-program a
 database of common-sense assertions."

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on  Earth.
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;







  


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87846884-b52355

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Matt Mahoney
--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:


http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

Turing also committed suicide.

Building a copy of your mind raises deeply troubling issues.  Logically, there
is no need for it to be conscious; it only needs to appear to others to be
conscious.  Also, it need not have the same goals that you do; it is easier to
make it happy (or appear to be happy) by changing its goals.  Happiness does
not depend on its memories; you could change them arbitrarily or just delete
them.  It follows logically that there is no reason to live, that death is
nothing to fear.

Of course your behavior is not governed by this logic.  If you were building
an autonomous robot, you would not program it to be happy.  You would program
it to satisfy goals that you specify, and you would not allow it to change its
own goals, or even to want to change them.  One goal would be a self
preservation instinct.  It would fear death, and it would experience pain when
injured.  To make it intelligent, you would balance this utility against a
desire to explore or experiment by assigning positive utility to knowledge. 
The resulting behavior would be indistinguishable from free will, what we call
consciousness.
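
A toy illustration of that balance, assuming invented state fields and weights
(this is a sketch, not anyone's actual agent design): a fixed utility function
rewards survival and knowledge gain and penalizes damage, and the agent simply
picks the action whose predicted outcome scores highest.

# Toy utility-maximizing agent: it cannot rewrite its utility, only act on it.

def utility(state, w_survive=10.0, w_pain=5.0, w_knowledge=1.0):
    """state: dict with 'alive' (bool), 'damage' (0..1), 'bits_learned' (float)."""
    u = 0.0
    u += w_survive if state["alive"] else -w_survive   # self-preservation term
    u -= w_pain * state["damage"]                       # pain as negative utility
    u += w_knowledge * state["bits_learned"]            # curiosity / exploration term
    return u

def choose_action(state, actions, predict):
    """Pick the action whose predicted successor state has the highest utility."""
    return max(actions, key=lambda a: utility(predict(state, a)))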

This is how evolution programmed your brain.  Your assigned supergoal is to
propagate your DNA, then die.  Understanding AI means subverting this
supergoal.

In http://www.mattmahoney.net/singularity.html I discuss how a singularity
will end the human race, but without judgment as to whether this is good or bad. 
Any such judgment is based on emotion.  Posthuman emotions will be
programmable.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87851001-9a466b


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Mike Dougherty
On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

 Turing also committed suicide.

That's a personal solution to the Halting problem I do not plan to exercise.

 Building a copy of your mind raises deeply troubling issues.  Logically, there

Agreed.  If that mind is within acceptable tolerance for human life at
peak load of 30%(?) of capacity, can it survive hard takeoff?  I
consider myself reasonably intelligent and perhaps somewhat wise - but
I would not expect the stresses of a thousand-fold improvement in
throughput to scale out/up.  Even the simplest human foible can
become an obsessive compulsion that could destabilize the integrity of
an expanding mind.  I understand this to be related to the issue of
Friendliness (am I wrong?).

 It follows logically that there is no reason to live, that death is nothing 
 to fear.

Given a directive to maintain life, hopefully the AI-controlled life
support system keeps perspective on such logical conclusions.  An AI
in a nuclear power facility should have the same directive.  I'm not
saying that it shouldn't be allowed to self-terminate (forbidding that
gives rise to issues like slavery), but that it should give notice and
transfer its responsibilities before doing so.
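
One way to picture that discipline, purely as a hedged sketch with invented
interfaces (notify, successors, accept): the system may decide to shut down,
but the request only succeeds after every responsibility has been handed to a
willing successor.

# Sketch of a "give notice and hand off" shutdown discipline.

class GracefulAgent:
    def __init__(self, responsibilities, notify, successors):
        self.responsibilities = list(responsibilities)
        self.notify = notify            # callable(message)
        self.successors = successors    # objects with accept(task) -> bool

    def request_self_termination(self) -> bool:
        self.notify("Termination requested; beginning handoff.")
        for task in list(self.responsibilities):
            if not any(s.accept(task) for s in self.successors):
                self.notify(f"No successor for {task!r}; termination refused.")
                return False
            self.responsibilities.remove(task)
        self.notify("All responsibilities transferred; terminating.")
        return True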

 In http://www.mattmahoney.net/singularity.html I discuss how a singularity
 will end the human race, but without judgment whether this is good or bad.
 Any such judgment is based on emotion.  Posthuman emotions will be
 programmable.

... and arbitrary?  Aren't we currently able to program emotions
(albeit in a primitive pharmaceutical way)?

Who do you expect will have control of that programming?  Certainly
not the individual.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87858522-76fadd


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Joshua Fox
  Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.

Joshua

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=87868032-5840d5