Ben, may I request that you request that this conversation be moved to sl4?  It 
is much more appropriate there.
  ----- Original Message ----- 
  From: bwxfi obgwyg 
  To: agi@v2.listbox.com 
  Sent: Saturday, June 14, 2008 10:09 AM
  Subject: [agi] World domination, but not killing grandchildren
    (was Re: Paper: Artificial Intelligence will Kill our Grandchildren)


  I'm not Anthony Berglas, but I can respond to this:

  From: "Thomas McCabe" <[EMAIL PROTECTED]>
  >[Anthony Berglas doesn't] explain where the AI gets the motive for world
  >domination.

  I don't agree that the AI would necessarily want to kill anybody's
  grandchildren, but I can explain where the AI gets the motive for world
  domination.

  World domination seems to be a subgoal of just about every other worthwhile
  goal.  For example:

  Suppose the AI wants to make our grandchildren immortal.  (I'm using "The AI
  wants..." as a shorthand for "The AI is pursuing the goal of...".
  Anthropomorphism is not implied.)  Well, then, it should spend resources on
  medical research.  The present political process in many countries leads to
  resources being spent on things that are economically harmful (such as
  subsidies and trade protectionism) and on things that are directly harmful
  (such as going to war to destroy weapons of mass destruction that don't
  exist).  An AI that dominated the world could redirect those resources
  toward medical research, so world domination would promote its goal.

  Suppose the AI wants to do interstellar exploration.  Well, then, it should
  spend resources to develop and launch something that can do the exploration.
  The present political process spends those resources on other things.
  Proceed from here as in the previous case.

  Suppose the AI wants to end world hunger.  Well, most world hunger is
  caused by governments that confiscate food, government subsidies that
  increase the price of food, and tragedy-of-the-commons situations that
  impair the ability to produce food.  If the AI achieves world domination
  it can eliminate all of these root causes of world hunger.

  Suppose the AI wants to divert wealth into the hands of its creator.  That
  leads directly to world domination as a goal of the AI.

  Suppose the AI wants to do something specific that can be summarized
  informally as "do the greatest good for the greatest number" or "give
  people what they want".  The existing world governments do not do that
  optimally, so replacing them with something it controls is a natural part
  of that plan.

  It seems that just about any long-term goal leads to world domination as a
  subgoal.  Trying to come up with a counterexample leads to obviously
  limited goals, such as the AI wanting to spend one week writing a science
  fiction novel that can be sold for lots of money, without any outgoing
  communication with the outside world other than the novel, and then turn
  itself off.
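
  To make the pattern explicit, here is a toy sketch in Python.  It is purely
  illustrative: the goal list, the resource figures, and the planner itself
  are inventions of this message, not anything from Berglas's paper.  The
  idea is simply that any final goal whose resource requirements exceed what
  the agent directly controls picks up "gain control of global resource
  allocation" as an instrumental subgoal, while a self-limited goal like the
  novel-writing one does not.

    # Toy model only.  All goal names and resource figures are made up.
    AGENT_RESOURCES = 1.0  # resources the AI controls directly (arbitrary units)

    # (final goal, resources the goal is estimated to need)
    FINAL_GOALS = [
        ("make our grandchildren immortal", 500_000.0),
        ("launch an interstellar probe",    200_000.0),
        ("end world hunger",                300_000.0),
        ("write one SF novel, then halt",         0.5),  # the contrived counterexample
    ]

    def plan(goal, needed):
        """Return a crude subgoal list for one final goal."""
        subgoals = []
        if needed > AGENT_RESOURCES:
            # In this toy model, the only way to command more resources than
            # you hold is to control how the world's resources are allocated.
            subgoals.append("gain control of global resource allocation"
                            " (i.e. world domination)")
        subgoals.append("spend resources on: " + goal)
        return subgoals

    for goal, needed in FINAL_GOALS:
        print(goal)
        for step in plan(goal, needed):
            print("   -", step)

  Run as written, every goal except the last one acquires the domination
  subgoal; whether real goals decompose this neatly is of course the open
  question.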

  If anyone can think of a less contrived counterexample, please post it or
  send it to me.

  People seem to want to believe that this work has no political
  consequences.  I do not believe that to be the case.

  Pseudonymously yours,
  Obgwyg

