On 10/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> If you look at what I actually wrote, you'll see that I don't claim
> (natural) evolution has any role in AGI/robotic evolution. My point is that
> you wouldn't dream of speculating so baselessly about the future of natural
> evolution, so why speculate baselessly about AGI evolution?

If you wanted to predict what we would look like in 100K years
(assuming no civilization), you could simply extrapolate the trends of
the past four million years, and land somewhere near the mark.

> I should explain that I am not against all speculation, and I am certainly
> excited about the future.
>
> But I would contrast the speculation that has gone on here with that of the
> guy who mainly started it - Kurzweil. His argument for the Singularity is
> grounded in reality - the relentless growth of computing power, which he
> documents. And broadly I buy his argument that that growth in power will
> continue much as he outlines. I don't buy his conclusion about the timing of
> the Singularity, because building an artificial brain with as much power and
> as many PARTS as the human brain or much greater, and building a SYSTEM of
> mind (creating and integrating possibly thousands of cognitive
> departments), are two different things. Nevertheless he is pointing out
> something real and important even if his conclusions are wrong - and it's
> certainly worth thinking about a Singularity.
>
> When you and others speculate about the future emotional systems of AGIs,
> though, that is not in any way based on any comparable reality. There are
> no machines with functioning emotional systems at the moment on which you
> can base predictions.

There is currently no such thing as a self-replicating molecular
manufacturing machine, yet we can predict how such machines will
function to a high degree of accuracy, because they are built out of
atoms (which we do understand). Speculating on the specifics of "AGI"
is useless because of the huge range of possible designs, but for a
particular design (e.g., Novamente), there's nothing inherently
impossible or even exceptionally difficult about predicting its
behavior. A true AGI should be able to predict its *own* behavior, at
least under certain conditions.

> And when Ben speculates about it being possible to build a working AGI with
> a team of ten or so programmers, that too is not based on any reality.
> There's no assessment of the nature and the size of the task, and no
> comparison with any actual comparable tasks that have already been achieved.
> It's just tossing figures out in the air.

Software development in general is fairly well understood; once you
have a technical specification for an AGI component, you can give a
reasonable estimate of how long it will take with a given set of
resources, and then double it to get an accurate figure (Hofstadter's
Law).
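
To make that arithmetic concrete, here's a minimal back-of-the-envelope
sketch in Python. The component names, person-month figures, and team
size are hypothetical, chosen purely for illustration, not a real
project plan:

    # Sum nominal per-component estimates, then double the total per
    # Hofstadter's Law ("double it to get an accurate figure").
    # All names and figures below are hypothetical.

    HOFSTADTER_FACTOR = 2.0

    nominal_estimates = {              # person-months (illustrative)
        "knowledge representation": 18,
        "inference engine": 24,
        "perception/actuation layer": 12,
        "integration and testing": 10,
    }

    def corrected_estimate(estimates, factor=HOFSTADTER_FACTOR):
        """Total of the nominal estimates, inflated by the factor."""
        return factor * sum(estimates.values())

    total = corrected_estimate(nominal_estimates)
    team_size = 10   # "a team of ten or so programmers"
    print("Nominal total: %d person-months" % sum(nominal_estimates.values()))
    print("Hofstadter-corrected: %d person-months" % total)
    print("Calendar time with %d programmers: %.1f months"
          % (team_size, total / team_size))

With those made-up numbers, 64 nominal person-months doubles to 128,
which a ten-person team burns through in about 13 calendar months
(ignoring coordination overhead).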

> And when people speculate about the speed of an AGI take-off, that too is
> not based on any real, comparable take-offs of any kind.

We're *living in* a comparable takeoff; it's commonly known as human
civilization. That's what you get when you introduce intelligence,
even a low-grade, hackneyed intelligence, to a world which has never
seen it before.

> You guys are much too intelligent to be engaging in basically pointless
> exercises like that. (Of course, Ben seems to be publicly committed now to
> proving me wrong by the end of next year with some kind of functional AGI,
> but even if he did so, probably becoming the first AI person ever to fulfil
> such a commitment, it still wouldn't make his prediction as presented any
> more grounded).
>
> P.S. Re your preaching friendly, non-aggressive AGIs, may I recommend a
> brilliant article by Adam Gopnik - mainly on Popper but featuring other
> thinkers too. Here's a link to the Popper part:
>
> http://www.sguez.com/cgi-bin/ceilidh/peacewar/?C392b45cc200A-4506-483-00.htm
>
> But you may find a fuller version elsewhere. The gist of the article is:
>
> "The Law of the Mental Mirror Image. We write what we are not. It is not
> merely that we fail to live up to our best ideas but that our best ideas,
> and the tone that goes with them, tend to be the opposite of our natural
> temperament" --Adam Gopnik on Popper in The New Yorker
>
> It's worth thinking about.

 - Tom

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57204287-1997c5
