On 10/27/07, Samantha  Atkins <[EMAIL PROTECTED]> wrote:
> On Oct 27, 2007, at 1:55 AM, Aleksei Riikonen wrote:
>
>> You seem to have a need to personally give a final answer to "What
>> is 'good'?" -- an answer to what moral rules the universe should be
>> governed by. If you think that your answer is better than what the
>> "surveying" process that CEV is would produce, I think your attitude
>> amounts to delusions of grandeur.
>
> I do not find it very credible to simply claim that the CEV answer
> will be significantly better.  Yeah, you can argue it "by
> construction", simply because the entire thing is defined to be the
> very best at this particular job.  But that it is achievable and will
> be best is not provable.  As long as it is not, calling it "delusions
> of grandeur" for anyone to think that they or some other human or
> group of humans could do better is not justified.

At no point have I claimed that CEV is achievable, or even very
useful. But all the reasons it might not be apply just as much (or
more) to the model Stefan Pernar proposes.

In other words: if we ever get to a point where the model advocated by
Stefan Pernar could be implemented, we are at a point where
implementing CEV is also possible. And in the course of implementing
CEV, we would have people smarter than us considering Stefan Pernar's
writings, so CEV cannot turn out worse than his suggestions.

>> The first fallacy one runs into there is this: "The question what
>> friendliness means however existed before all of those problems, is a
>> separate one and needs to be answered before the creation of a
>> friendly AI can be attempted."
>
> What is "friendly"?  That is a good question.  However, it is not
> exactly crisp.

You can check the CEV page for one way of breaking down the
Friendliness problem into three separate parts.

The part of the problem that we are talking about here is "Choosing
something nice to do with the AI". Choosing this presupposes that we
have succeeded in creating a superintelligent AI that didn't
automatically wipe us out, and that understands the nice-thing-to-do
we are trying to communicate to it, whether in formal or informal
language. (Both Stefan Pernar's suggestion and CEV are currently
informal language, not code.)

When discussing "Choosing something nice to do with the AI", it is a
simple matter of logic that Stefan Pernar's suggestion cannot be
better than CEV, as I've repeatedly explained.

> This is a bit of a long con.  These "people smarter than us" are
> totally hypothetical.

They are no more hypothetical than getting to a point where Stefan
Pernar's model could be implemented. (By this, I of course do not
dispute that they are very hypothetical.)

> Here in the real world right now I think we darn well better come up
> with the best notion of "friendliness" we can and steer toward that.
> That very much includes not shutting people down for attempting to
> make some hopefully relevant suggestions.

It is a matter of a couple of points of simple logic that Stefan
Pernar's model can't make the problem of creating a Friendly AI any
easier.

> What we do now with our limited intelligence (but of  necessity all the
> intelligence we can work with)  determines whether there ever will be
> greatly smarter humans with or without a CEV.

Yes. And what Stefan Pernar has presented can't help at all in
confronting the difficult parts of the challenge that we haven't yet
solved.

> We can't steer our course N years ahead of the bit of road right
> in front of us or leave it to our hypothetical betters or to the CEV
> dream machine.

If you think what you say here hasn't been obvious to me all along,
then you haven't understood what I've been saying in this discussion.

>> Even if I accepted that you are the brightest philosopher who has ever
>> lived, and have come up with a solution that has eluded all that have
>> come before, don't you see that the humans surveyed by CEV would be
>> aware of what you have written, and would come to the same conclusion
>> if it really is that smart? How then could your proposal be better
>> than CEV, when CEV would result in the exact same thing?
>
> This is mental masturbation.  I don't see that it does one whit of
> good.  It doesn't give any real guidance for acting in the present
> that is likely to get us to a better tomorrow.

Obviously I don't claim that what I state here helps in solving the
currently unsolved parts of the Friendly AI problem. But I am trying
to make those who still don't realize it understand that, for the
parts of the Friendly AI problem Stefan Pernar purports to solve, we
already have a solution that is at least as good (and better, if
Stefan Pernar's proposition can be improved upon by vastly smarter
humans). Hence it is Stefan Pernar's proposition that amounts to
mental masturbation, and a distraction from solving the currently
unsolved parts of the problem.

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email