[FRIAM] Singularians, Less Wrong, and Roko's Basilisk

2013-12-23 Thread glen
http://rationalwiki.org/wiki/LessWrong#Roko.27s_Basilisk In July of 2010, Roko (a top contributor at the time) wondered if a future Friendly AI would punish people who didn't do everything in their power to further the AI research from which this AI originated, by at the very least

Re: [FRIAM] Singularians, Less Wrong, and Roko's Basilisk

2013-12-23 Thread Roger Critchlow
Which reminds me of the research I posted about growing socially intelligent agents by embedding them in an environment where they're forced to play the prisoner's dilemma with each other over and over. I wondered how they would feel about having been subjected to thousands of generations of this
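
[A minimal sketch of the kind of setup Roger describes: memory-one agents evolved by playing round-robin iterated prisoner's dilemma for many generations. The agent representation, payoff values, and mutation scheme here are illustrative assumptions, not details from the original post.]

import random

# Assumed setup: memory-one agents evolved by round-robin iterated
# prisoner's dilemma over many generations.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def random_agent():
    # Probability of cooperating on the opening move and after each
    # (my last move, opponent's last move) outcome.
    keys = ['open', ('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]
    return {k: random.random() for k in keys}

def move(agent, my_last, their_last):
    p = agent['open'] if my_last is None else agent[(my_last, their_last)]
    return 'C' if random.random() < p else 'D'

def play(a, b, rounds=50):
    sa = sb = 0
    ma = mb = None
    for _ in range(rounds):
        na, nb = move(a, ma, mb), move(b, mb, ma)
        pa, pb = PAYOFF[(na, nb)]
        sa, sb, ma, mb = sa + pa, sb + pb, na, nb
    return sa, sb

def next_generation(pop):
    scores = [0.0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            si, sj = play(pop[i], pop[j])
            scores[i] += si
            scores[j] += sj
    def offspring():
        # Fitness-proportional selection plus a small mutation.
        child = dict(random.choices(pop, weights=scores, k=1)[0])
        k = random.choice(list(child))
        child[k] = min(1.0, max(0.0, child[k] + random.gauss(0, 0.05)))
        return child
    return [offspring() for _ in range(len(pop))]

population = [random_agent() for _ in range(20)]
for _ in range(200):  # raise this for "thousands of generations"
    population = next_generation(population)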

Re: [FRIAM] Singularians, Less Wrong, and Roko's Basilisk

2013-12-23 Thread Owen Densmore
Funny, this got some air on KCRW radio: http://www.kcrw.com/news/programs/in/in131218our_final_invention A book started the conversation: Our Final Invention: Artificial Intelligence and the End of the Human Era http://www.amazon.com/dp/0312622376/ The KCRW page mentions other

Re: [FRIAM] Singularians, Less Wrong, and Roko's Basilisk

2013-12-23 Thread glen
On 12/23/2013 04:23 PM, Roger Critchlow wrote: I wondered how they would feel about having been subjected to thousands of generations of this torture when they realized how we had grown them. There are two questions, of course: whether it's moral to torture pre-sentients to bring them to