On Sun, Jan 27, 2013 at 9:54 PM, Derek Zahn <[email protected]> wrote:

> PM,
>
> What is "sentience"?  How can you tell if a robot is sentient and thus
> deserving of rights?
>

It's much the same as with humans. I think that rather than "rights", the
better term is "abilities". "Rights" makes it sound like a top-down act of
generosity, and indeed there are plenty of places today where people live
without any rights at all, such as slaves.
Whether someone, or some "thing", deserves rights is purely subjective.
For instance, as a Ukrainian, I may think that sex slaves from my country
deserve rights, but people in some other countries think otherwise.

However, all beings have abilities, which are readily observable: the
ability to be aware, to have desires, to make choices, to have goals, to
express themselves, to learn, and to move.

I made a price-barter calculation system, which allows for calculating the
relative cost of various items and services:

    price = (time + mass)^complexity / abundance * utility
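The formula can be sketched in code. Everything here is illustrative: the
inputs are unitless relative quantities, since no units or scales are
defined for the formula, and the sample numbers are made up.

```python
def price(time: float, mass: float, complexity: float,
          abundance: float, utility: float) -> float:
    """Relative cost per the barter formula:
    (time + mass)^complexity / abundance * utility."""
    if abundance <= 0:
        raise ValueError("abundance must be positive")
    return (time + mass) ** complexity / abundance * utility

# A scarce, complex item prices higher than an abundant, simple one.
rare = price(time=10, mass=2, complexity=2, abundance=1, utility=1)    # 144.0
common = price(time=10, mass=2, complexity=1, abundance=10, utility=1) # 1.2
```

Note how abundance divides the result, so scarcity alone can dominate the
price even when time, mass, and utility are held fixed.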

Similarly, it allows for evaluating the cost of suppressing an ability,
such as the freedom of movement that would allow a slave or prisoner to
escape. To make up for that suppression, they must be given goods such as
food, water, and entertainment to balance the cost of their confinement.

This works for all beings, regardless of whether they are human, animal,
or robot. Currently it is difficult to apply such a complex formula to
humans, but AGIs should be able to manage it quite easily. Instead we use
concepts like mana, sin, or karma and good/bad deeds for a more analog
feel of things, like the auras and vibes of people.

For instance, if someone does a bad deed that detracts from society, they
are sometimes made to do community service to give back to society, which
balances out their mana, or ethical balance.
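A minimal sketch of that balancing idea, with entirely made-up deed
values, since no actual scale is proposed here:

```python
class EthicalLedger:
    """Toy running balance of an agent's deeds: negative values detract
    from society, positive values (e.g. community service) give back."""

    def __init__(self) -> None:
        self.balance = 0.0

    def record(self, deed_value: float) -> float:
        """Add a deed's value and return the updated balance."""
        self.balance += deed_value
        return self.balance

ledger = EthicalLedger()
ledger.record(-40.0)  # a harmful deed
ledger.record(40.0)   # community service of equal cost restores balance to 0
```

In principle, the price formula above could be used to assign the deed
values themselves, making the ledger the bookkeeping layer on top of it.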

In such a way, AGI robot societies could be very ethical.
Also, any sufficiently intelligent AGI would know that conserving and
maintaining the biological flora and fauna of Earth, including humans, is
extremely important, as they are the source of the vast majority of
knowledge: billions of years of tried-and-tested systems, so there is a
lot to learn from that diversity.

As someone else mentioned, it's more likely that AGIs will find their
niche in space, as well as in environments normally too hostile for
biological life forms, such as deserts.

For instance, a place like Venus, with its sulfuric acid rain,
lead-melting surface temperatures, high pressure, and strong winds, may
seem dreadful to biological life, but it is great for making lead-acid
batteries, wind power, and furnaces.

>
> ------------------------------
> From: [email protected]
> To: [email protected]
>
> Subject: RE: [agi] Robots and Slavery
> Date: Sun, 27 Jan 2013 16:15:24 -0800
>
>
> So we acknowledge these risks from the outset, and say that whenever
> robots reach sentience,
> they should have rights as well, and be given the decision to choose for
> themselves, their own
> destinies. (sp?)
>
> ~PM
> ------------------------------
> Date: Sun, 27 Jan 2013 18:16:19 -0500
> Subject: Re: [agi] Robots and Slavery
> From: [email protected]
> To: [email protected]
>
> So you realize if robots develop the goals of liberty, justice, and
> fairness, they will become a competitor to humans.
> These are revolutionary ideas that have been used to usurp the authority
> of established powers.  A self-proclaimed freedom fighter is a terrorist to
> the established order.  What lengths would robots go to secure their
> freedom?  Perhaps eliminating the entire human race is a logical way to
> secure their
> freedom from human tyranny.
>
> All these goals are very subjective and can be interpreted to mean
> different things to different individuals.  For instance, my desire for
> justice might really be
> revenge based on a perceived wrong you have done to me, whether or not it
> was intentional.  How do you know robots won't develop their own ethical
> standards
> that benefit themselves at the expense of humans?
>
>
> On Sun, Jan 27, 2013 at 5:53 PM, Piaget Modeler <[email protected]
> > wrote:
>
>  I don't agree that intelligence is completely separable from desire
> (goals).
> I think that the goals + solutions + mental processes = intelligence.
> I don't think you can have intelligence without goals, or the solutions
> that
> have arisen based on prior goals.  Solutions and goals are intertwined.
>
> ~PM
>
> ------------------------------
>
> Intelligence is completely separable from desire.
>
> ~Aaron H.
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
