On Wed, Feb 12, 2020, 8:06 PM James Bowery <[email protected]> wrote:

> On Wed, Feb 12, 2020 at 2:44 PM Matt Mahoney <[email protected]>
> wrote:
>
>>
>>
>> Here is version 1.2.0.87b of my recursively self-improving AGI. This
>> hopefully fixes a bug in the module that detects when it is about to launch
>> an unfriendly singularity. Just to be safe, be sure to run it in a virtual
>> sandbox on a machine not connected to the internet. (Link to source code).
>>
>
> You left out:
>
> And don't do anything it asks you to do, even if it's your dearly departed
> grandmother, authenticated by telling you a secret only you and she know, but
> now she's burning in quantum Hell and can only get out to Repent and go to
> Heaven if you do it. Especially in that case, Just Don't.
>

Corwin's AI boxing experiments posted on SL4 in 2002 are telling.
http://sl4.org/archive/0207/4935.html

Fortunately, it is all hypothetical, because intelligence depends on
knowledge and computing power, and a self-improving program gains neither.

The most intelligent system today, which is gaining both, is the internet.
But it is already out of the box.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T11f5dc3052b454b3-M4d872f8bcfffc131ba98c2b1
Delivery options: https://agi.topicbox.com/groups/agi/subscription
