I have a friend with a PhD in mathematics who was working on top-secret AI
military weaponry 13 years ago.  She eventually left that consulting job out
of fear of what she was doing.

On Wed, Apr 5, 2023, 1:00 PM Jed Rothwell <jedrothw...@gmail.com> wrote:

> This document says:
>
> This Darwinian logic could also apply to artificial agents, as agents may
>> eventually be better able to persist into the future if they behave
>> selfishly and pursue their own interests with little regard for humans,
>> which could pose catastrophic risks.
>
>
> They have no interests any more than a dishwasher does. They have no
> motives. No instinct of self-preservation. Unless someone programs these
> things into them, which I think might be a disastrous mistake. I do not
> think the instinct for self-preservation is an emergent quality of
> intelligence, but I should note that Arthur Clarke and others *did* think
> so.
>
> An AI in a weapon might be programmed with self-preservation, since
> people and other AI would try to destroy it. I think putting AI into
> weapons would be a big mistake.
>
>
