Brad and Josh,

On 5/19/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:

> Hey, Gang!
>
> Hmmm.  Sure is quiet around here lately.  Maybe people are actually getting
> some (more) work done?  Catching up on their respective AI reading lists?
> ;-)  I know I am.  Reading Josh's book ("Beyond AI - Creating the Conscience
> of the Machine").


This brings to mind discussions about ethics and conscience that the group
of us behind Dr. Eliza had several years ago. Our discussions finally boiled
down to something like:

There are RULES of conduct, which hopefully will make others less fearful of
us/machines.

There are GOALS, which if carefully drafted will have machines working for
rather than against us.

Then there are ethics, which seem to vary from person to person, cause much
strife, and don't generally work toward a positive result. Things in this
category are often misfiled, as they should really be rules or goals, or
sometimes discarded altogether.

And then there is conscience - the antithesis of the "perfect motive theory",
which says that however screwed up our decisions might be, since we are doing
the best we can, there is absolutely no reason to ever look back except to
learn from the past.

Hence, I am suspicious about ANYTHING that might intentionally bring
apparently flawed concepts into a future AGI.

Note the (relative) success of Oregon's health care system, which, according
to the usual metrics, is outperforming most other such systems. Their secret:
Figure out who isn't worth saving, and simply let them die! By most ethical
standards it sucks, and whose conscience could carry the weight of leaving
people to die when good medical help is available? However, with
all-too-finite resources, their approach maximizes the days-per-dollar
survival of their population, which, after all, seems to be the metric to
maximize.
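That days-per-dollar framing amounts to a greedy budget allocation. A minimal
sketch in Python, using entirely invented numbers and making no claim about
Oregon's actual procedure:

```python
# Hypothetical sketch: with a fixed budget, fund treatments in order of
# expected patient-days gained per dollar, until the money runs out.
# All names and numbers below are invented for illustration.
budget = 1000.0

# (treatment, cost in dollars, expected patient-days of survival gained)
treatments = [
    ("A", 400.0, 2000),   # 5.0 days per dollar
    ("B", 700.0, 2100),   # 3.0 days per dollar
    ("C", 300.0, 950),    # ~3.17 days per dollar
]

# Greedy: sort by days gained per dollar, descending.
treatments.sort(key=lambda t: t[2] / t[1], reverse=True)

funded = []
for name, cost, days in treatments:
    if cost <= budget:
        funded.append(name)
        budget -= cost

# A and C fit the budget; B (the lowest days/dollar) is left unfunded.
print(funded)  # prints ['A', 'C']
```

The uncomfortable part is exactly the sort key: whoever falls below the
cutoff simply isn't funded, however defensible their individual claim.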



> Ripping good read.  Josh, you can write real good, dude! It's a real
> page-turner (which is not easy to do in non-fiction).


Could I get you to entice me with some discussion or examples? I am very
suspicious of this subject, yet am still quite open to arguments that I
might not have heard or considered.

Steve Richfield

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
