--- Kaj Sotala <[EMAIL PROTECTED]> wrote:

> My first attempt at writing something Singularity-related that
> somebody might actually even take seriously. Comments appreciated.
> 
> --------------------------------
> 
> http://www.saunalahti.fi/~tspro1/artificial.html

Interesting that you are predicting AGI in 50 years.  That is the same
prediction Turing made about AI in 1950.

One problem with such predictions is that we won't know A(G)I when we have it.
How do you tell whether something is more or less intelligent than a human?  We
aren't trying to duplicate the human brain.  First, there is no economic
incentive to reproduce the human weaknesses that seem to be necessary to pass
the Turing test (e.g. deliberately introducing arithmetic errors and slowing
down responses, as in Turing's original paper).  Second, I think the only way
to produce the full range of human experiences needed to train a human-like
AGI is to put it in a human body.
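
To make that concrete, here is a toy sketch in Python of what deliberately
reproducing a human weakness might look like.  Only the example sum and the
roughly 30 second pause come from Turing's paper, where the machine is asked
to add 34957 to 70764 and, after a long pause, gives the wrong answer 105621.
The error model and its parameters are my own invention, not anything from
the paper:

  import random, time

  def humanlike_add(a, b, error_rate=0.2, pause=30):
      # To pass the imitation game a machine must hide its speed and
      # accuracy, so answer slowly and occasionally incorrectly.
      time.sleep(pause)                 # humans are slow at arithmetic
      total = a + b
      if random.random() < error_rate:
          total += random.choice([-100, 100])  # a plausible slip
      return total

  # Turing's example: 34957 + 70764 = 105721; his machine said 105621.
  print(humanlike_add(34957, 70764))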

We already have intelligences that are superior to humans in some ways but
inferior in others, such as Google when you type in a natural language
question.  In fact, we had this in 1950 with regard to arithmetic: computers
were already far faster and more accurate than people at calculation, yet
useless at everything else.  So how do you know whether a system is
intelligent or not?

I suppose you could draw the line at programs capable of recursive
self-improvement.  But again, this is not so clear-cut.  The Internet already
has the computational power of thousands or millions of human brains
(depending on the problem you want to solve).  Suppose you have a program,
capable of writing and debugging software, that discovers a complex set of
security flaws in hundreds of applications and writes a worm to distribute
itself across millions of computers within minutes.  Would this count as a
singularity?
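
For scale, here is the back-of-envelope arithmetic behind that claim, as a
short Python sketch.  Every input is an order-of-magnitude guess of mine (the
machine count, the sustained throughput per machine, and especially the brain
estimate, for which published figures span several orders of magnitude), so
treat the bottom line as illustration rather than measurement:

  # All inputs are rough assumptions, not measurements.
  computers = 1e9        # networked machines on the Internet (guess)
  ops_each = 1e9         # sustained ops/sec per machine (guess)
  internet_ops = computers * ops_each   # ~1e18 ops/sec in total

  # Estimates of the brain's equivalent computation vary enormously.
  for brain_ops in (1e13, 1e15, 1e16):
      print("brain at %.0e ops/s -> %.0f brain-equivalents"
            % (brain_ops, internet_ops / brain_ops))

Depending on which brain estimate you pick, the same hardware is worth
anywhere from about a hundred to about a hundred thousand human brains, which
is why any line drawn in terms of raw computing power is arbitrary.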



-- Matt Mahoney, [EMAIL PROTECTED]
