--- John Ku <[EMAIL PROTECTED]> wrote:

> On 2/16/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > > I would prefer to leave behind these counterfactuals altogether and
> > > try to use information theory and control theory to achieve a precise
> > > understanding of what it is for something to be the standard(s) in
> > > terms of which we are able to deliberate. Since our normative concepts
> > > (e.g. should, reason, ought, etc) are fundamentally about guiding our
> > > attitudes through deliberation, I think they can then be analyzed in
> > > terms of what those deliberative standards prescribe.
> >
> > I agree.  I prefer the approach of predicting what we *will* do as opposed
> > to
> > what we *ought* to do.  It makes no sense to talk about a right or wrong
> > approach when our concepts of right and wrong are programmable.
> 
> I don't quite follow. I was arguing for a particular way of analyzing
> our talk of right and wrong, not abandoning such talk. Although our
> concepts are programmable, what matters is what follows from our
> current concepts as they are.
> 
> There are two main ways in which my analysis would differ from simply
> predicting what we will do. First, we might make an error in applying
> our deliberative standards or tracking what actually follows from
> them. Second, even once we reach some conclusion about what is
> prescribed by our deliberative standards, we may not act in accordance
> with that conclusion out of weakness of will.

It is on the second point that my approach differs.  A decision to act in a
certain way is judged right or wrong according to our own views, not the views
of a posthuman intelligence.  Rather, I prefer to analyze the path that AI will
take, given human motivations, but without passing judgment.  For example, CEV
favors
granting future wishes over present wishes (when it is possible to predict
future wishes reliably).  But human psychology suggests that we would prefer
machines that grant our immediate wishes, implying that we will not implement
CEV (even if we knew how).  Any suggestion that CEV should or should not be
implemented is just a distraction from an analysis of what will actually
happen.

As a second example, a singularity might result in the extinction of DNA-based
life and its replacement with a much faster evolutionary process.  It makes no
sense to judge this outcome as good or bad.  The important questions are how
likely this is to occur, and when.  In that context, it is more useful to
analyze the motives of people who would try to accelerate or delay the
progress of technology.


-- Matt Mahoney, [EMAIL PROTECTED]
