On 3/26/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Interesting that you are predicting AGI in 50 years.  That is the same
> prediction Turing made about AI in 1950.

My prediction's mostly based on other people's estimates of how long
nanotechnology will take to develop. I don't know exactly what Turing
based his estimate on, so I can't compare.

> One problem with such predictions is we won't know A(G)I when we have
> it.  How do you say if something is more or less intelligent than a
> human?  We aren't trying to duplicate the human brain.  First, there
> are no economic incentives to reproduce human weaknesses that seem to
> be necessary to pass the Turing test (e.g. deliberately introduce
> arithmetic errors and slow down the response, as in Turing's original
> paper).

This seems to me an "I can't give you an exact definition, but we'll
both know it when we see it" type of thing. By the time one has, as I
put it in my essay, "a generic mind template ... which could be made
to learn all the knowledge required to do the job of a doctor, a
lawyer or an engineer in a matter of months", I'd say it's pretty
close to human-level intelligence. (Of course, one could nitpick and
suggest that this might as well be achieved with a very sophisticated
expert system that wouldn't be a real AI - but I think you'll get my
point.)

> Second, I think the only way to produce the full range of human
> experiences needed to train a human-like AGI is to put it in a human
> body.

Depends on your definition of human-like, and on the tasks you'd want
it to do. While I'm far from an expert in AI design, it doesn't seem
necessary to me to give an engineer-type mind the full range of
experiences of a human body. Just feed it a university's course books
and, say, the last ten years of suitable technical journals.
