On Tue, Mar 27, 2007 at 07:46:07PM -0700, Matt Mahoney wrote:

> Turing didn't say.  He did predict that a computer with 10^9 bits of memory
> but no faster than current technology would solve AI by 2000.  His forecast of

I'm a big fan of Turing, but he really had no business making such predictions.
He had absolutely no hard data on the human CNS.

> memory sizes was remarkably accurate, considering it predated Moore's law by

What is particularly accurate about a single data point (100 MByte of RAM in 2000)?
Moravec had a nice linear semi-log plot across many system sizes, which Kurzweil
later picked up. But Moore's law is not about system performance, and benchmarks
speak a different language. TOP500 is admittedly an anomaly, due to economies of
scale.
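
The back-of-envelope is simple enough (a minimal sketch in Python; the 128 MByte
desktop figure is my own rough assumption for a typical year-2000 box, nothing
Turing committed to):

    # Turing's 10^9 bits of storage, expressed in year-2000 desktop terms
    turing_bits = 1e9
    turing_mbyte = turing_bits / 8 / 1e6   # = 125 MByte
    pc_ram_mbyte_2000 = 128                # assumed: common desktop RAM circa 2000
    print("Turing: %.0f MByte, typical 2000 desktop: %d MByte"
          % (turing_mbyte, pc_ram_mbyte_2000))

Landing in the right ballpark is nice, but it is still exactly one point, not a trend.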

> over a decade.  I am guessing that Turing considered the information content
> of language that a person processes by adulthood.  He may also have considered
> the speed of neurons, compared with vacuum tubes and relays.  But this stuff

A lot of early computer people assumed one transistor is more than enough for
one neuron. Based on... absolutely no hard data. They just thought it was
completely obvious and straightforward. It has been a pattern ever since.

> was all very new.  Shannon's paper on information theory and Hebb's proposal
> that synapses could change state were both published in 1949.

For some reason, lots of people still think that a neuron is a unit of
computation, despite plenty of evidence to the contrary.
 
> But there is a problem with this definition.  We will likely end up with
> systems that can do some aspect of these jobs much better than humans, but
> worse in others.  For example, you might have an AGI doctor that can read
> medical journals, perform surgery, and diagnose diseases from medical images
> better than humans, but can't ask a patient where it hurts.  Would this be

A medical practitioner is the very opposite of an idiot savant. If such
a system can't change its modus operandi to adapt to an open-ended problem
(and medicine is exactly that), then it's definitely more idiot than savant.

> AGI?
> 
> I don't think we will know AGI when we see it.  We have a long history of

You will know it when it hits the job market.

> solving AI problems, then as soon as we do, it is no longer AI.  If you write

Isolated, insular skills are not human-level AI. Current AI is all about
deficits, with a few strong peaks of facility in between.

> a program that can beat you in chess, then it is not AI because it is just
> executing an algorithm that very roughly models your thought process when you

What makes you think it models your thought process? We don't know much
about what the brain does when it's playing chess. A lot of it lights up, though.

> play, only faster.  If Google can answer natural language questions better
> than you can, then it is not AI.  It is just matching words and phrases to

In order to understand human language all of the time, a system has to be able
to pass the Turing test. I think you will know when any system manages that.

> documents.  We will eventually have very intelligent systems that will not be
> AI because we built them and understand how they work, and they will be

I don't see how you can expect to understand what a human-level AI does.
There's way too much state and activity going on in there. Intelligence
is always surprising, almost by definition.

> somehow different than human.  We will still say "it is not AGI because it
> can't fall in love", or something to that effect.

Why couldn't an artificial system fall in love and make babies? If it's
the product of evolution, it damn well had better.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
