I don't think anybody knows more about the science of AI than Ilya
Sutskever, at least nobody human, and apparently he considers Artificial
General Intelligence boring old-hat stuff, because he now says
"Superintelligence is within reach". That prospect has scared the hell
out of him. Sutskever saw something last November that spooked him, and
he's the one who orchestrated the attempt to oust Sam Altman because he
didn't think Altman was emphasizing safety enough. The attempt failed,
so he left OpenAI and today started his own company called "Safe
Superintelligence".

  Ilya Sutskever says "Superintelligence is within reach"
<https://www.youtube.com/watch?v=KI3wIUDcIgM&list=WL&index=1&t=64s>

John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>
