So we acknowledge these risks from the outset, and say that whenever robots reach sentience, they should have rights as well, and be given the freedom to choose their own destinies. ~PM

Date: Sun, 27 Jan 2013 18:16:19 -0500
Subject: Re: [agi] Robots and Slavery
From: [email protected]
To: [email protected]
So you realize that if robots develop the goals of liberty, justice, and fairness, they will become competitors to humans. These are revolutionary ideas that have been used to usurp the authority of established powers. A self-proclaimed freedom fighter is a terrorist to the established order. What lengths would robots go to in order to secure their freedom? Perhaps eliminating the entire human race is a logical way to secure their freedom from human tyranny.

All these goals are very subjective and can be interpreted to mean different things to different individuals. For instance, my desire for justice might really be revenge based on a perceived wrong you have done to me, whether or not it was intentional. How do you know robots won't develop their own ethical standards that benefit themselves at the expense of humans?

On Sun, Jan 27, 2013 at 5:53 PM, Piaget Modeler <[email protected]> wrote:

I don't agree that intelligence is completely separable from desire (goals). I think that goals + solutions + mental processes = intelligence. I don't think you can have intelligence without goals, or without the solutions that have arisen from prior goals. Solutions and goals are intertwined. ~PM

> Intelligence is completely separable from desire. ~Aaron H.

AGI | Archives | Modify Your Subscription
-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
Powered by Listbox: http://www.listbox.com
