--- Matt Mahoney <[EMAIL PROTECTED]> wrote:

> 
> --- Tom McCabe <[EMAIL PROTECTED]> wrote:
> 
> > 
> > --- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > Language and vision are prerequisites to AGI. 
> > 
> > No, they aren't, unless you care to suggest that
> > someone with a defect who can't see and can't form
> > sentences (eg, Helen Keller) is unintelligent.
> 
> Helen Keller had language.  One could argue that
> language alone is sufficient
> for AI, as Turing did.  But everyone has a different
> opinion on what is AGI
> and what isn't.

Helen Keller at ~8 didn't have language: she hadn't yet
learned sign language, and there was no other real
means for her to learn grammar and sentence structure.
Yet she was still clearly intelligent. Conversely, if
Babelfish were perfect, able to pick up on every single
grammatical detail and nuance, would it start learning
French cooking, or write a novel, or learn how to
drive, or do any of that other stuff we associate with
intelligence?

> > Any future Friendly AGI isn't going to obey us
> exactly
> > in every respect, because it's *more moral* than
> we
> > are. Should an FAI obey a request to blow up the
> > world?
> 
> That is what worries me.  I think it is easier to
> program an AGI for blind
> obedience (its top level goal is to serve humans)
> than to program it to make
> moral judgments in the best interest of humans,
> without specifying what that
> means.

True. But we're still just as dead either way.

> I gave this example on Digg.  Suppose the
> AGI (being smarter than us)
> figures out that consciousness and free will are
> illusions of our biologically
> programmed brains, and that there is really no
> difference between a human
> brain and a simulation of a brain on a computer.  We
> may or may not have the
> technology for uploading, but suppose the AGI
> decides (for reasons we don't
> understand) that it doesn't need it.  Therefore it
> is in our best interest (or
> irrelevant) to destroy the human race.

This is a failure scenario because the AI had a bad
definition of "what humans want".

> We cannot rule out this possibility because a lesser
> intelligence cannot
> predict what a greater intelligence will do.  If you
> measure intelligence
> using algorithmic complexity, then Legg proved this
> formally. 
> http://www.vetta.org/documents/IDSIA-12-06-1.pdf

A lesser intelligence can still predict that a greater
intelligence will pursue its goals, because pursuing
goals is (in part) what intelligence *means*. So an AI
designed to turn the galaxy into iron is not going to
turn it into nickel instead.

> Or maybe an analogy would be more convincing. 
> Humans acting in the best
> interests of their pets may put them down when they
> have a terminal disease,
> or for other reasons they can't comprehend.  Who
> should make this decision?

In a situation like this, an intelligent human would
*agree* with the AI that we would rather be put down
than endure horrible suffering.

> What will happen when the AGI is as advanced over
> humans as humans are over
> dogs or insects or bacteria?  Perhaps the smarter it
> gets, the less relevant
> human life will be.

To the vast majority of possible AIs, human life is
just a collection of carbon atoms. No matter how
intelligent an AI is, the relevance of human life has
to be built in; it won't emerge on its own.

> 
> -- Matt Mahoney, [EMAIL PROTECTED]
> 