--- Kaj Sotala <[EMAIL PROTECTED]> wrote:

> On 3/26/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Interesting that you are predicting AGI in 50 years.  That is the same
> > prediction Turing made about AI in 1950.
> 
> My prediction's mostly based on other people's estimates about the
> time it'll take for nanotechnology to develop. I don't know what
> exactly Turing was basing his estimates on, so I cannot compare.

Turing didn't say.  He did predict that a computer with 10^9 bits of memory,
running no faster than the machines of his day, would pass his imitation game
by 2000.  His forecast of memory sizes was remarkably accurate, considering it
predated Moore's law by over a decade.  I am guessing that Turing considered
the information content of the language a person processes by adulthood.  He
may also have considered the speed of neurons compared with vacuum tubes and
relays.  But this stuff was all very new: Shannon's paper on information
theory had appeared in 1948, and Hebb's proposal that synapses change their
strength with use was published in 1949.
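
As a very rough back-of-envelope check (every figure below is my own
illustrative assumption, not anything Turing wrote down), the amount of
language a person is exposed to by adulthood does come out on the order of
10^9 bits.  A quick Python sketch:

# Illustrative estimate of language exposure by adulthood (assumed figures).
words_per_minute = 150   # assumed speaking/reading rate
chars_per_word = 6       # average English word plus trailing space
hours_per_day = 2        # assumed daily exposure to language
years = 20               # roughly "adulthood"
bits_per_char = 1.0      # Shannon's ~1 bit/character entropy estimate for English

chars = words_per_minute * chars_per_word * 60 * hours_per_day * 365 * years
print(f"~{chars * bits_per_char:.1e} bits")   # prints ~7.9e+08 bits

Changing any of these assumed figures by a factor of two or three still
leaves the total in the 10^8 to 10^9 bit range.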

> > One problem with such predictions is we won't know A(G)I when we have it.
> > How do you say if something is more or less intelligent than a human?  We
> > aren't trying to duplicate the human brain.  First, there are no economic
> > incentives to reproduce human weaknesses that seem to be necessary to pass
> > the Turing test (e.g. deliberately introduce arithmetic errors and slow
> > down the
> 
> This seems to me a "I can't give you an exact definition, but we'll
> both know it when we see it" type of thing. By the time one has, as I
> put it in my essay,  "a generic mind template ... which could be made
> to learn all the knowledge required to do the job of a doctor, a
> lawyer or an engineer in a matter of months", then I'd say it was
> pretty close to human-level intelligence. (Of course, one could
> nitpick and suggest that that might as well be achieved with a very
> sophisticated expert system that wouldn't be a real AI - but I think
> you'll get my point.)

But there is a problem with this definition.  We will likely end up with
systems that can do some aspects of these jobs much better than humans, but
others much worse.  For example, you might have an AGI doctor that can read
medical journals, perform surgery, and diagnose diseases from medical images
better than humans, but can't ask a patient where it hurts.  Would this be
AGI?

I don't think we will know AGI when we see it.  We have a long history of
solving AI problems and then, as soon as we do, deciding they were never
really AI.  If you write a program that can beat you at chess, it is not AI,
because it is just executing an algorithm that very roughly models your
thought process when you play, only faster.  If Google can answer natural
language questions better than you can, it is not AI; it is just matching
words and phrases to documents.  We will eventually have very intelligent
systems that will not count as AI because we built them, we understand how
they work, and they will be somehow different from humans.  We will still say
"it is not AGI because it can't fall in love", or something to that effect.


-- Matt Mahoney, [EMAIL PROTECTED]
