rob levy wrote:
>> This is wishful thinking.
> I definitely agree; however, we lack a convincing model or plan of any sort 
> for the construction of systems demonstrating subjectivity, 

Define subjectivity. An objective decision might appear subjective to you only 
because you aren't intelligent enough to understand the decision process.

> Therefore it is reasonable to consider symbiosis

How does that follow?

> as both a safe design 

How do you know that a self-replicating organism that we create won't evolve to 
kill us instead? Do we control evolution?

> and potentially the only possible design 

It is not the only possible design. It is possible to create systems that are 
more intelligent than a single human but less intelligent than all of humanity, 
without the capability to modify themselves or reproduce without the collective 
permission of the billions of humans who own and maintain control over them. An 
example would be the internet.

 -- Matt Mahoney, matmaho...@yahoo.com




________________________________
From: rob levy <r.p.l...@gmail.com>
To: agi <agi@v2.listbox.com>
Sent: Sun, June 27, 2010 2:37:15 PM
Subject: Re: [agi] Questions for an AGI

I definitely agree; however, we lack a convincing model or plan of any sort for 
the construction of systems demonstrating subjectivity, and it seems plausible 
that subjectivity is functionally necessary for general intelligence. Therefore 
it is reasonable to consider symbiosis as both a safe design and potentially 
the only possible design (at least at first), depending on how creative and 
resourceful we get in cog sci/AGI in coming years.


On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney <matmaho...@yahoo.com> wrote:

This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design?

-- Matt Mahoney, matmaho...@yahoo.com

________________________________
From: rob levy <r.p.l...@gmail.com>
To: agi <agi@v2.listbox.com>
Sent: Sat, June 26, 2010 1:14:22 PM
Subject: Re: [agi] Questions for an AGI

>> why should AGIs give a damn about us?
>
> I like to think that they will give a damn because humans have a unique way of 
> experiencing reality and there is no reason not to take advantage of that 
> precious opportunity to create astonishment or bliss. If anything is important 
> in the universe, it's ensuring positive experiences for all areas in which it 
> is conscious, and I think it will realize that. And with the resources available 
> in the solar system alone, I don't think we will be much of a burden. 
>
> I like that idea. Another reason might be that we won't crack the problem of 
> autonomous general intelligence, but the singularity will proceed regardless 
> as a symbiotic relationship between life and AI. That would be beneficial to 
> us as a form of intelligence expansion, and beneficial to the artificial 
> entity as a way of being alive and having an experience of the world. 

