--- Tom McCabe <[EMAIL PROTECTED]> wrote:

> 
> --- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > I posted some comments on DIGG and looked at the
> > videos by Thiel and
> > Yudkowsky.  I'm not sure I understand the push to
> > build AGI with private
> > donations when companies like Google are already
> > pouring billions into the
> > problem.
> 
> Private companies like Google are, as far as I am
> aware, spending exactly $0 on AGI. The things Google
> is interested in, such as how humans process
> information and how they decide what is relevant, are
> very specific subsets of this goal in the same way
> that "fire" and "iron" are very specific subsets of
> the internal combustion engine.

Language and vision are prerequisites to AGI.  Google has an interest in
improving search results, and it already does a pretty good job with natural
language questions.  It would also like to return relevant images, video, and
podcasts without requiring humans to label them, to filter porn and spam, and
to deliver relevant and personalized ads.  These are all AI problems, and
Google has billions to spend on them.

Google already has enough computing power to do a crude simulation of a human
brain, but of course that is not what they are trying to do.  Why would they
want to copy human motivations?
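
For what it's worth, here is a rough back-of-envelope sketch (a few lines of
Python) behind that claim.  The synapse count and firing rate are the usual
textbook figures; the cluster size and per-machine throughput are my own
guesses, not published numbers.

  # Back-of-envelope estimate of the computation for a crude, rate-based
  # simulation of a human brain.  All figures are rough assumptions.
  synapses      = 1e15   # ~10^15 synapses (commonly cited figure)
  rate_hz       = 10     # average firing rate, order of 10 Hz
  ops_per_event = 1      # one multiply-add per synaptic event (very crude)
  brain_ops = synapses * rate_hz * ops_per_event     # ~1e16 ops/s

  # Assumed cluster: a few hundred thousand commodity servers at ~10
  # GFLOPS each -- a guess at Google's capacity, not a published number.
  servers          = 5e5
  flops_per_server = 1e10
  cluster_ops = servers * flops_per_server           # ~5e15 ops/s

  print("brain:   %.1e ops/s" % brain_ops)
  print("cluster: %.1e ops/s" % cluster_ops)

Under these assumptions the two estimates land within an order of magnitude
of each other, which is all a "crude simulation" claim requires.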

> > Doing this well requires human
> > capabilities such as language
> > and vision, but does not require duplicating the
> > human motivational system. 
> > The top level goal of humans is to propagate their
> > DNA.  The top level goal of
> > machines should be to serve humans.
> 
> You do realize how hard a time you're going to have
> defining that? Remember Asimov's First Law: A robot
> shall not harm a human or through inaction allow a
> human to come to harm? Well, humans are always hurting
> themselves through wars and such, and so the logical
> result is totalitarianism, which most of us would
> consider very bad.

I realize the problem will get harder as machines get smarter, but right now
I don't see any prospect of a general solution; it will have to be solved for
each new machine.  There is also nothing we can do about human evil.  If
someone wants to build a machine to kill people, that is already a problem
today.  The best we can do is try to prevent accidental harm.

> > We have always
> > built machines this way.
> 
> Do I really need to explain what's wrong with the
> "we've always done it that way" argument? It hasn't
> gotten any better since the South used it to justify
> slavery.

I phrased it in the past tense because I can't predict the future.  What I
should say is that there is no reason to build machines to disobey their
owners, and I don't expect that we will do so in the future.



-- Matt Mahoney, [EMAIL PROTECTED]
