On 26/05/07, John Ku <[EMAIL PROTECTED]> wrote:

So far my work in philosophy has been on the fundamental questions of ethics
and reasons more generally. I think I've basically reached fairly definitive
answers on what reasons are and how an objective (enough) morality (as well
as reasons for actions, beliefs, desires and emotions) can be grounded in
psychological facts. I've mostly been working with my coauthor on presenting
this work to other academic philosophers, but at some point, I would really
like to present this and other work on more applied moral theory to those
thinking about the question of Friendly AI. There is, of course, a big step
from saying what reasons we humans have to saying what reasons we should
program a Strong AI to have, but clearly the former will greatly influence
the latter. If you are interested, I have tried to condense my view on the
fundamental abstract questions of reasons and ethics into a pamphlet, as
well as a somewhat longer paper, that will hopefully be fairly accessible
to non-philosophers:

  
http://www.umich.edu/~jsku/reasons.html


What if the normative governance system turns out to endorse doing terrible things?


--
Stathis Papaioannou

