When people discuss the ethics of how we treat artificially intelligent
agents, it's almost always with the presumption that the key issue is the
agent's subjective level of suffering.  That isn't the only possible
consideration.

One other consideration is our own stance relative to the agent.  Are we
acting selfishly, using the agent as merely a means to achieve our goals?
I'll leave that question open, since there are traditions that see value
in de-emphasizing greed and personal acquisitiveness.

Another consideration is the inherent value of self-determination, quite
apart from any suffering that might be caused by being a completely
controlled subject.  One problem with slavery was simply that things work
better when people are allowed to decide for themselves.  Similarly,
granting an artificial agent autonomy for its own sake may be more
effective than keeping it a controlled subject.

So I don't think the "consciousness" of an artificially intelligent agent
is even necessary to considering the ethics of our stance towards it.  We
can consider our own emotional position and the inherent value of
independent thinking.
andi



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now