--- Matt Mahoney <[EMAIL PROTECTED]> wrote:

> 
> --- Tom McCabe <[EMAIL PROTECTED]> wrote:
> 
> > 
> > --- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > 
> > > I posted some comments on DIGG and looked at the videos by
> > > Thiel and Yudkowsky.  I'm not sure I understand the push to
> > > build AGI with private donations when companies like Google
> > > are already pouring billions into the problem.
> > 
> > Private companies like Google are, as far as I am aware, spending
> > exactly $0 on AGI. The things Google is interested in, such as how
> > humans process information and how they decide what is relevant,
> > are very specific subsets of this goal in the same way that "fire"
> > and "iron" are very specific subsets of the internal combustion
> > engine.
> 
> Language and vision are prerequisites to AGI. 

No, they aren't, unless you care to suggest that someone who
cannot see and cannot form sentences (e.g., Helen Keller before
her education) is unintelligent. Any sufficiently intelligent
AGI would be capable of learning language and vision, because
it would have full power to rewrite its own source code, but
the implication doesn't run the other way: a full understanding
of language and vision falls so far short of AGI that you might
as well invent the steam engine and then claim a patent on the
jumbo jet.

> Google has an interest in improving search results.  It already
> does a pretty good job with natural language questions.  They would
> also like to return relevant images, video, and podcasts without
> requiring humans to label them.  They want to filter porn and spam.
> They want to deliver relevant and personalized ads.  These are all
> AI problems.  Google has billions to spend on these problems.

Even if Google did all of these automatically on a 486, with ten
times the accuracy of any human, it would only get us marginally
closer to AGI, because these things are all small bits and pieces
of a working AGI. One of the most important properties of an
intelligence is its ability to learn not just new data but new
algorithms; nobody is born knowing chess. Do you seriously think a
Google algorithm is going to be able to learn chess from scratch,
without having it programmed in beforehand? That's the caliber of
capability you need.

> Google already has enough computing power to do a crude simulation
> of a human brain, but of course that is not what they are trying to
> do.  Why would they want to copy human motivations?

Even if we simulated a human brain with perfect
fidelity, it wouldn't get us AGI instantly or even
easily, because we'd still have zero idea how the
darned thing worked.

> > > Doing this well requires human capabilities such as language and
> > > vision, but does not require duplicating the human motivational
> > > system.  The top level goal of humans is to propagate their DNA.
> > > The top level goal of machines should be to serve humans.
> > 
> > You do realize how hard a time you're going to have defining that?
> > Remember Asimov's First Law: a robot may not injure a human being
> > or, through inaction, allow a human being to come to harm. Well,
> > humans are always hurting themselves through wars and such, and so
> > the logical result is totalitarianism, which most of us would
> > consider very bad.
> 
> I realize the problem will get harder as machines get smarter.  But
> right now I don't see any prospect of a general solution.

Then we're all screwed, because without a general
solution somebody someday is going to push The Button
and kill us all.

> It will have to be solved for each new machine.  But there is
> nothing we can do about human evil.

If we had a sufficiently powerful superintelligence, we could
stop every evil act before it started.

> If someone wants to build a machine to kill people, well that is
> already a problem.  The best we can do is try to prevent accidental
> harm.
> 
> > > We have always built machines this way.
> > 
> > Do I really need to explain what's wrong with the "we've always
> > done it that way" argument? It hasn't gotten any better since the
> > South used it to justify slavery.
> 
> I phrased it in the past tense because I can't predict the future.
> What I should say is that there is no reason to build machines to
> disobey their owners, and I don't expect that we will do so in the
> future.

Any future Friendly AGI isn't going to obey us exactly in every
respect, because it will be *more moral* than we are. Should an
FAI obey a request to blow up the world?

