On 1/16/2014 3:42 AM, Bruno Marchal wrote:

On 16 Jan 2014, at 03:46, Jason Resch wrote:




On Tue, Jan 14, 2014 at 10:33 PM, meekerdb <meeke...@verizon.net> wrote:

    A long, rambling but often interesting discussion among guys at MIRI about how to
    make an AI that is superintelligent but not dangerous (FAI = Friendly AI). Here's an
    amusing excerpt that starts at the bottom of page 30:

    *Jacob*: Can't you ask it questions about what it believes will be true about the
    state of the world in 20 years?

    *Eliezer*: Sure. You could be like, what color will the sky be in 20 years? It
    would be like, “blue”, or it’ll say “In 20 years there won't be a sky, the earth
    will have been consumed by nanomachines,” and you're like, “why?” and the AI is like
    “Well, you know, you do that sort of thing.” “Why?” And then there’s a 20-page thing.

    *Dario*: But once it says the earth is going to be consumed by nanomachines, and
    you're asking about the AI's set of plans, presumably, you reject this plan
    immediately and preferably change the design of your AI.

    *Eliezer*: The AI is like, “No, humans are going to do it.” Or the AI is like, “well
    obviously, I'll be involved in the causal pathway but I’m not planning to do it.”

    *Dario*: But this is a plan you don't want to execute.

    *Eliezer*: /All/ the plans seem to end up with the earth being consumed by
    nano-machines.

    *Luke*: The problem is that we're trying to outsmart a superintelligence and make
    sure that it's not subtly tricking us somehow with its own language.

    *Dario*: But while we're just asking questions, we always have the ability to just
    shut it off.

    *Eliezer*: Right, but first you ask it “What happens if I shut you off” and it says
    “The earth gets consumed by nanobots in 19 years.”

    I wonder if Bruno Marchal's theory might have something interesting to say about
    this problem - like proving that there is no way to ensure "friendliness".

    Brent


I think it is silly to try and engineer something exponentially more intelligent than us and believe we will be able to "control it".

Yes. It is close to a contradiction.
We only pretend to dream of intelligent machines, but once they are there we might very well send them to the gulag.

The real question will be: "Are you OK with your son or daughter marrying a machine?"



Our only hope is that the correct ethical philosophy is to "treat others how they wish to be treated".

Good. Alas, many believe it is "don't treat others the way *you* don't want to be treated".



If there are such objectively true moral conclusions like that, and assuming that one is true, then we have little to worry about, for with overwhelming probability the super-intelligent AI will arrive at the correct conclusion and its behavior will be guided by its beliefs. We cannot "program in" beliefs that are false, since if it is truly intelligent, it will know they are false.

I doubt we can really "program a false belief" for very long, but all machines can get false beliefs all the time.

Really intelligent machines will believe in Santa Claus and fairy tales, for a while. They will also search for easy, comforting, wishful sorts of explanations.


Like believing that a super-intelligent AI will treat us as we want to be treated.





Some may doubt there are universal moral truths, but I would argue that there are.

OK. I agree with this, although they come very close to inconsistencies, like "never do moral".



In the context of personal identity, if say, universalism is true, then "treat others how they wish to be treated" is an inevitable conclusion, for universalism says that others are self.

OK. I would use the negation instead: "don't treat others as they don't want to be treated".

If not, send me 10^100 $ (or €) to my bank account, because that is how I wish to be treated, right now.
:)

I don't want to be neglected in your generous disbursal of funds.

Brent
