So you realize that if robots develop the goals of liberty, justice, and
fairness, they will become competitors to humans.
These are revolutionary ideas that have been used to usurp the authority of
established powers.  A self-proclaimed freedom fighter is a terrorist to
the established order.  What lengths would robots go to in order to secure
their freedom?  Perhaps eliminating the entire human race is a logical way
to secure their freedom from human tyranny.

All these goals are very subjective and can be interpreted to mean
different things by different individuals.  For instance, my desire for
justice might really be revenge based on a perceived wrong you have done to
me, whether or not it was intentional.  How do you know robots won't
develop their own ethical standards that benefit themselves at the expense
of humans?


On Sun, Jan 27, 2013 at 5:53 PM, Piaget Modeler
<[email protected]> wrote:

>  I don't agree that intelligence is completely separable from desire
> (goals).
> I think that the goals + solutions + mental processes = intelligence.
> I don't think you can have intelligence without goals, or the solutions
> that have arisen based on prior goals.  Solutions and goals are
> intertwined.
>
> ~PM
>
> ------------------------------
>
> Intelligence is completely separable from desire.
>
> ~Aaron H.
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
