> Everything has to happen before the singularity because there is no after.

I meant when machines take over technological evolution.

> That is easy. Eliminate all laws.

I would prefer a surveillance state. I should have said impossible to get
away with if conducted in public.

> Is there a difference between enhancing our intelligence by uploading and
> creating killer robots? Think about it.

Well, yes, we're not all bad, but I think you read me wrong, because that's
basically my worry.

> Assume we succeed. People want to be happy. Depending on how our minds are
> implemented, it's either a matter of rewiring our neurons or rewriting our
> software. Is that better than a gray goo accident?

Are you asking whether changing your hardware or software ends your true
existence the way a grey goo accident would? Assuming the goo is unconscious,
the goo would be worse, because rewiring at least leaves the potential for a
peaceful experience free from the power struggle over limited resources,
whether or not humans truly exist afterward.

Does anyone else worry about how we're going to keep this machine's
unprecedented resourcefulness from being abused by an elite few to further
protect and advance their social superiority? To me it seems that if we can't
create a democratic society where people have real choices on the issues that
affect them most, and it just ends up being a continuation of the class war we
have today, then maybe grey goo would be the better option before we start
"promoting democracy" throughout the universe.

On Sun, Jun 27, 2010 at 2:43 PM, Matt Mahoney <matmaho...@yahoo.com> wrote:

> Travis Lenting wrote:
> > I don't like the idea of enhancing human intelligence before the
> singularity.
>
> The singularity is a point of infinite collective knowledge, and therefore
> infinite unpredictability. Everything has to happen before the singularity
> because there is no after.
>
> > I think crime has to be made impossible even for enhanced humans
> first.
>
> That is easy. Eliminate all laws.
>
> > I would like to see the singularity-enabling AI be as little like a
> reproduction machine as possible.
>
> Is there a difference between enhancing our intelligence by uploading and
> creating killer robots? Think about it.
>
> > Does it really need to be a general AI to cause a singularity? Can it not
> just stick to scientific data and quantify human uncertainty? It seems like
> it would be less likely to ever care about killing all humans so it can rule
> the galaxy, or to be an omnipotent servant.
>
> Assume we succeed. People want to be happy. Depending on how our minds are
> implemented, it's either a matter of rewiring our neurons or rewriting our
> software. Is that better than a gray goo accident?
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> ------------------------------
> *From:* Travis Lenting <travlent...@gmail.com>
>
> *To:* agi <agi@v2.listbox.com>
> *Sent:* Sun, June 27, 2010 5:21:24 PM
>
> *Subject:* Re: [agi] Questions for an AGI
>
> I don't like the idea of enhancing human intelligence before the
> singularity. I think crime has to be made impossible even for enhanced
> humans first. I think life is too apt to abuse opportunities when possible.
> I would like to see the singularity-enabling AI be as little like a
> reproduction machine as possible. Does it really need to be a general AI to
> cause a singularity? Can it not just stick to scientific data and quantify
> human uncertainty? It seems like it would be less likely to ever care about
> killing all humans so it can rule the galaxy, or to be an omnipotent
> servant.
>
> On Sun, Jun 27, 2010 at 11:39 AM, The Wizard <key.unive...@gmail.com> wrote:
>
>> This is wishful thinking. Wishful thinking is dangerous. How about instead
>> of hoping that AGI won't destroy the world, you study the problem and come
>> up with a safe design.
>>
>>
>> Agreed on this dangerous thought!
>>
>> On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney <matmaho...@yahoo.com> wrote:
>>
>>> This is wishful thinking. Wishful thinking is dangerous. How about
>>> instead of hoping that AGI won't destroy the world, you study the problem
>>> and come up with a safe design.
>>>
>>>
>>> -- Matt Mahoney, matmaho...@yahoo.com
>>>
>>>
>>> ------------------------------
>>> *From:* rob levy <r.p.l...@gmail.com>
>>> *To:* agi <agi@v2.listbox.com>
>>> *Sent:* Sat, June 26, 2010 1:14:22 PM
>>> *Subject:* Re: [agi] Questions for an AGI
>>>
>>>  why should AGIs give a damn about us?
>>>
>>>
>>> I like to think that they will give a damn because humans have a unique
>>> way of experiencing reality, and there is no reason not to take advantage
>>> of that precious opportunity to create astonishment or bliss. If anything
>>> is important in the universe, it's ensuring positive experiences wherever
>>> there is consciousness, and I think it will realize that. And with the
>>> resources available in the solar system alone, I don't think we will be
>>> much of a burden.
>>>
>>>
>>> I like that idea. Another reason might be that we won't crack the
>>> problem of autonomous general intelligence, but the singularity will
>>> proceed regardless as a symbiotic relationship between life and AI. That
>>> would be beneficial to us as a form of intelligence expansion, and
>>> beneficial to the artificial entity as a way of being alive and having an
>>> experience of the world.
>>>
>>
>>
>>
>> --
>> Carlos A Mejia
>>
>> Taking life one singularity at a time.
>> www.Transalchemy.com
>>
>
>


