Which reminds me of the research I posted about growing socially intelligent agents by embedding them in an environment where they're forced to play the prisoner's dilemma with each other over and over. I wondered how they would feel about having been subjected to thousands of generations of this torture once they realized how we had grown them. There are two questions, of course: whether it's moral to torture pre-sentients to bring them to sentience, and whether the resulting super-sentient will forgive you when it becomes the master.
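For concreteness, here's a minimal sketch of the kind of setup I mean -- every detail (memory-one strategies, the payoff values, population size, mutation rate) is my own illustrative assumption, not taken from the paper I posted:

# Minimal sketch: evolving iterated-prisoner's-dilemma agents over many
# generations. All parameters here are illustrative assumptions.
import random

COOPERATE, DEFECT = 0, 1
# Standard PD payoffs: (my score, their score) indexed by (my move, their move).
PAYOFF = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

class Agent:
    """A memory-one strategy: probability of cooperating on the first
    round, after the opponent cooperated, and after it defected."""
    def __init__(self, p_first, p_after_c, p_after_d):
        self.genes = [p_first, p_after_c, p_after_d]

    def move(self, opponent_last):
        if opponent_last is None:
            p = self.genes[0]
        else:
            p = self.genes[1] if opponent_last == COOPERATE else self.genes[2]
        return COOPERATE if random.random() < p else DEFECT

def play(a, b, rounds=50):
    """Iterated PD between two agents; returns their total scores."""
    sa = sb = 0
    last_a = last_b = None
    for _ in range(rounds):
        ma, mb = a.move(last_b), b.move(last_a)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        last_a, last_b = ma, mb
    return sa, sb

def mutate(agent, rate=0.05):
    """Copy an agent, jittering each gene slightly, clamped to [0, 1]."""
    genes = [min(1.0, max(0.0, g + random.gauss(0, rate))) for g in agent.genes]
    return Agent(*genes)

def generation(pop):
    """Round-robin tournament, then replace the bottom half with mutated
    copies of the top half -- thousands of these is the 'torture'."""
    scores = [0.0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            si, sj = play(pop[i], pop[j])
            scores[i] += si
            scores[j] += sj
    ranked = [a for _, a in sorted(zip(scores, pop), key=lambda t: -t[0])]
    survivors = ranked[: len(pop) // 2]
    return survivors + [mutate(random.choice(survivors)) for _ in survivors]

if __name__ == "__main__":
    pop = [Agent(random.random(), random.random(), random.random())
           for _ in range(20)]
    for _ in range(200):  # the runs I described would go far longer
        pop = generation(pop)
    # How cooperative did selection leave the survivors, on average?
    print("mean P(cooperate | opponent cooperated):",
          sum(a.genes[1] for a in pop) / len(pop))

Note that the discarded bottom half of every generation is exactly where the moral question bites: the selection pressure that produces the eventual sentient is built out of culling its ancestors.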
-- rec

On Mon, Dec 23, 2013 at 5:08 PM, glen <g...@ropella.name> wrote:
>
> http://rationalwiki.org/wiki/LessWrong#Roko.27s_Basilisk
>
> In July of 2010, Roko (a top contributor at the time) wondered if a
> future Friendly AI would punish people who didn't do everything in their
> power to further the AI research from which this AI originated, by at the
> very least donating all they have to it.
>
> Sorry if y'all have seen this. I just stumbled on it and thought it was
> funny enough to pass on.
>
> --
> =><= glen
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com