Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Joshua Fox
 > Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.

Joshua



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Mike Dougherty
On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
>
> Turing also committed suicide.

That's a personal solution to the Halting problem I do not plan to exercise.

> Building a copy of your mind raises deeply troubling issues.  Logically, there

Agreed.  If that mind is within acceptable tolerance for human life at
a peak load of 30%(?) of capacity, can it survive a hard takeoff?  I
consider myself reasonably intelligent and perhaps somewhat wise, but
I would not expect to scale out/up under the stresses of a
thousand-fold "improvement" in throughput.  Even the simplest human
foible could become an obsessive compulsion that destabilizes the
integrity of an expanding mind.  I understand this to be related to
the issue of Friendliness (am I wrong?).

> It follows logically that there is no reason to live, that death is nothing 
> to fear.

Given a directive to maintain life, hopefully the AI-controlled life
support system keeps perspective on such logical conclusions.  An AI
in a nuclear power facility should have the same directive.  I don't
mean that it shouldn't be allowed to self-terminate (forbidding that
raises issues like slavery), only that it should give notice and
transfer its responsibilities before doing so.

> In http://www.mattmahoney.net/singularity.html I discuss how a singularity
> will end the human race, but without judgment whether this is good or bad.
> Any such judgment is based on emotion.  Posthuman emotions will be
> programmable.

... and arbitrary?  Aren't we currently able to program emotions
(albeit in a primitive pharmaceutical way)?

Who do you expect will have control of that programming?  Certainly
not the individual.



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Matt Mahoney
--- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:

> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

Turing also committed suicide.

Building a copy of your mind raises deeply troubling issues.  Logically, there
is no need for it to be conscious; it only needs to appear to others to be
conscious.  Also, it need not have the same goals that you do; it is easier to
make it happy (or appear to be happy) by changing its goals.  Happiness does
not depend on its memories; you could change them arbitrarily or just delete
them.  It follows logically that there is no reason to live, that death is
nothing to fear.

Of course your behavior is not governed by this logic.  If you were building
an autonomous robot, you would not program it to be happy.  You would program
it to satisfy goals that you specify, and you would not allow it to change its
own goals, or even to want to change them.  One goal would be a
self-preservation instinct.  It would fear death, and it would experience pain when
injured.  To make it intelligent, you would balance this utility against a
desire to explore or experiment by assigning positive utility to knowledge. 
The resulting behavior would be indistinguishable from free will, what we call
consciousness.
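
A toy sketch of that kind of utility balance might look like the
following (the weights, the action set, and the risk numbers are
made-up illustrations of the idea, not anyone's actual agent design):

import random

ACTIONS = ["stay_safe", "explore"]

def utility(action, health, knowledge):
    # Staying safe simply preserves health; exploring risks some damage
    # but is expected to yield new knowledge, with diminishing returns.
    if action == "stay_safe":
        return health
    expected_damage = 0.2
    expected_learning = 1.0 / (1.0 + knowledge)
    return (health - expected_damage) + expected_learning

def step(health, knowledge):
    # Pick the highest-utility action and apply its (toy) effects.
    action = max(ACTIONS, key=lambda a: utility(a, health, knowledge))
    if action == "explore":
        health = max(0.0, health - random.uniform(0.0, 0.4))  # injury "pain"
        knowledge += 1.0
    return action, health, knowledge

health, knowledge = 1.0, 0.0
for t in range(8):
    action, health, knowledge = step(health, knowledge)
    print(t, action, round(health, 2), knowledge)

With these numbers the curiosity bonus outweighs the expected damage for
the first few steps; as knowledge accumulates, its diminishing return
drops below the risk and the agent switches to staying safe.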

This is how evolution programmed your brain.  Your assigned supergoal is to
propagate your DNA, then die.  Understanding AI means subverting this
supergoal.

In http://www.mattmahoney.net/singularity.html I discuss how a singularity
will end the human race, but without judgment whether this is good or bad. 
Any such judgment is based on emotion.  Posthuman emotions will be
programmable.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Stephen Reed
The article on the fate of the two AI researchers was interesting.  Perhaps 
many here share their belief that AGI will vastly change the world.  It is 
however unfortunate that they did not seek medical help for their symptoms of 
depression - no one needs to suffer that kind of pain.  They were so young.

Regarding the striking similarity between their approaches to AI: MindPixel was
commercial, so I never looked at it, but I did look at the OpenMind/ConceptNet
content while at Cycorp for possible import into Cyc.  The chief error OpenMind
made was that its web forms did not perform a semantic analysis of the input,
and therefore it was not possible to filter out ill-formed, sarcastic, or false
statements.  In my own work, I hope to motivate a multitude of volunteers to
interact with a compelling, intelligent English dialog system.  My system will
acquire knowledge and skills as logical statements based upon the OpenCyc
ontology.  Meta-assertions can attach an optional belief probability when
appropriate.
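
As a rough illustration of what such a statement plus meta-assertion might
look like (the class names, term strings, and probability value here are
assumptions for the example, not the actual Texai or Cyc representation):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Assertion:
    subject: str          # e.g. an OpenCyc-style concept identifier
    predicate: str
    obj: str
    source: str           # which volunteer/dialog turn contributed it

@dataclass
class MetaAssertion:
    target: Assertion
    belief: Optional[float] = None   # attach a probability only when appropriate

fact = Assertion(subject="Dog", predicate="genls", obj="DomesticatedAnimal",
                 source="volunteer-dialog-42")
meta = MetaAssertion(target=fact, belief=0.98)

# Downstream reasoning could then filter or weight statements by belief.
if meta.belief is None or meta.belief > 0.9:
    print("accept:", fact)

The filtering, source tracking, and probability handling would of course
live inside the dialog system itself; the shape of the data is the point here.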

The positive, confirming result that I take away from both MindPixel and
OpenMind is that volunteers performed several million interactions with their
rudimentary interfaces.  I will be following that path too.

I'll make a further announcement about my dialog system in a separate post to 
keep this thread on topic.

-Steve
 
Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, January 19, 2008 3:49:55 PM
Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide

 Well, Lenat survives...

But he paid people to build his database (Cyc)

What's depressing is trying to get folks to build a commonsense KB for
free ... then you
get confronted with the absolute stupidity of what they enter, and the
poverty and
repetitiveness of their senses of humor... ;-p

ben

On Jan 19, 2008 4:42 PM, Eliezer S. Yudkowsky <[EMAIL PROTECTED]>  wrote:
>  
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
>
> I guess the moral here is "Stay away from attempts to hand-program a
> database of common-sense assertions."
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on  Earth."
-- Vernor Vinge








  


[agi] Texai bootstrap dialog system design

2008-01-19 Thread Stephen Reed
I've posted a brief design document for the Texai bootstrap dialog system on my 
blog.
http://texai.org/blog/2008/01/20/bootstrap-dialog-system-design
 
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





  


Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread J Storrs Hall, PhD
"Breeds There a Man...?" by Isaac Asimov

On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote:
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
> 
> I guess the moral here is "Stay away from attempts to hand-program a 
> database of common-sense assertions."
> 
> -- 
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
> 




Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
On Jan 19, 2008 5:53 PM, a <[EMAIL PROTECTED]> wrote:
> This thread has nothing to do with artificial general intelligence -
> please close this thread. Thanks

IMO, this thread is close enough to AGI to be list-worthy.

It is certainly true that knowledge-entry is not my preferred
approach to AGI ... I think that it is at best peripheral to any
really serious AGI approach.

However, some serious AGI thinkers, such as Doug Lenat,
believe otherwise.

And, this list is about AGI in general, not about any specific
approaches to AGI.

So, the thread can stay...

-- Ben Goertzel, list owner


>
>
> Bob Mottram wrote:
> > Quality is an issue, but it's really all about volume.  Provided that
> > you have enough volume the signal stands out from the noise.
> >
> > The solution is probably to make the knowledge capture into a game or
> > something that people will do as entertainment.  Possibly the Second
> > Life approach will provide a new avenue for acquiring commonsense.
> >
> >
> > On 19/01/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >> What's depressing is trying to get folks to build a commonsense KB for
> >> free ... then you
> >> get confronted with the absolute stupidity of what they enter, and the
> >> poverty and
> >> repetitiveness of their senses of humor... ;-p
> >>
> >
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on Earth."
-- Vernor Vinge



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Lukasz Kaiser
> This thread has nothing to do with artificial general intelligence -
> please close this thread. Thanks

Sorry, but I have to say that I strongly disagree.  There are
many aspects of AGI that are non-technical, and organizing
one's own life while doing AI is certainly one of them.  That's
why I think this article is very much on topic here.

- lk



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread a
This thread has nothing to do with artificial general intelligence - 
please close this thread. Thanks


Bob Mottram wrote:

Quality is an issue, but it's really all about volume.  Provided that
you have enough volume the signal stands out from the noise.

The solution is probably to make the knowledge capture into a game or
something that people will do as entertainment.  Possibly the Second
Life approach will provide a new avenue for acquiring commonsense.


On 19/01/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
  

What's depressing is trying to get folks to build a commonsense KB for
free ... then you
get confronted with the absolute stupidity of what they enter, and the
poverty and
repetitiveness of their senses of humor... ;-p





Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Bob Mottram
Quality is an issue, but it's really all about volume.  Provided that
you have enough volume, the signal stands out from the noise.
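
As a toy illustration of that filtering-by-volume idea (the threshold
and the sample statements are made up for the example):

from collections import Counter

contributions = [
    "the sky is blue", "the sky is blue", "the sky is blue",
    "fish can swim", "fish can swim",
    "my cat is the president",            # sarcastic one-off noise
]

counts = Counter(s.lower().strip() for s in contributions)
MIN_SUPPORT = 2   # assumed cutoff; in practice it would scale with volume

accepted = [s for s, n in counts.items() if n >= MIN_SUPPORT]
print(accepted)   # ['the sky is blue', 'fish can swim']

A real system would need something smarter than exact string matching
(paraphrases, negations, sarcasm), but the principle is the same:
redundancy across many contributors is what lets the signal stand out.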

The solution is probably to make the knowledge capture into a game or
something that people will do as entertainment.  Possibly the Second
Life approach will provide a new avenue for acquiring commonsense.


On 19/01/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> What's depressing is trying to get folks to build a commonsense KB for
> free ... then you
> get confronted with the absolute stupidity of what they enter, and the
> poverty and
> repetitiveness of their senses of humor... ;-p



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Bob Mottram
Some thoughts of mine on the article.

   http://streebgreebling.blogspot.com/2008/01/singh-and-mckinstry.html



On 19/01/2008, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
>
> I guess the moral here is "Stay away from attempts to hand-program a
> database of common-sense assertions."
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
Well, Lenat survives...

But he paid people to build his database (Cyc)

What's depressing is trying to get folks to build a commonsense KB for
free ... then you
get confronted with the absolute stupidity of what they enter, and the
poverty and
repetitiveness of their senses of humor... ;-p

ben

On Jan 19, 2008 4:42 PM, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
> http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
>
> I guess the moral here is "Stay away from attempts to hand-program a
> database of common-sense assertions."
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on Earth."
-- Vernor Vinge



[agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Eliezer S. Yudkowsky

http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

I guess the moral here is "Stay away from attempts to hand-program a 
database of common-sense assertions."


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



[agi] OpenCog's business model

2008-01-19 Thread YKY (Yan King Yin)
How much of OpenCog will ultimately be open source?  100%?  Or will it be
partially open?

IMO the partially-open model still has the problems of both the open- and
closed-source models: the open parts cannot make much money, and the closed
parts cannot receive public input.  That said, I appreciate that Ben is trying
to explore these directions.

IMO, we need an environment where people can exchange ideas as well
as play mix-and-match with various AGI modules (real code).  My virtual
credit system, which is about to be launched, aims to provide that.

YKY
