The worst possible case would be something like the film Colossus: The Forbin Project (1970). The AIs would become like gods and we would be their servants. In exchange, they'd impose something like a Pax Romana by brute force. We'd have some type of paradise on Earth, with a huge caveat.

On Fri, Mar 31, 2023 at 2:59 PM, Jed Rothwell <jedrothw...@gmail.com>
wrote:

> Here is another article about this, written by someone who says he is an
> AI expert.
>
> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>
> QUOTE:
>
> Pausing AI Developments Isn't Enough. We Need to Shut it All Down
>
> An open letter published today calls for “all AI labs to immediately pause
> for at least 6 months the training of AI systems more powerful than GPT-4.”
>
> This 6-month moratorium would be better than no moratorium. I have respect
> for everyone who stepped up and signed it. It’s an improvement on the
> margin. . . .
>
> The key issue is not “human-competitive” intelligence (as the open letter
> puts it); it’s what happens after AI gets to smarter-than-human
> intelligence. Key thresholds there may not be obvious, we definitely can’t
> calculate in advance what happens when, and it currently seems imaginable
> that a research lab would cross critical lines without noticing.
>
> Many researchers steeped in these issues, including myself, expect that
> the most likely result of building a superhumanly smart AI, under anything
> remotely like the current circumstances, is that literally everyone on
> Earth will die. Not as in “maybe possibly some remote chance,” but as in
> “that is the obvious thing that would happen.”
>


-- 
Daniel Rocha - RJ
danieldi...@gmail.com