If you look at what I actually wrote, you'll see that I don't claim (natural) evolution has any role in AGI/robotic evolution. My point is that you wouldn't dream of speculating so baselessly about the future of natural evolution - so why speculate baselessly about AGI evolution?

I should explain that I am not against all speculation, and I am certainly excited about the future.

But I would contrast the speculation that has gone on here with that of the guy who mainly started it - Kurzweil. His argument for the Singularity is grounded in reality - the relentless growth of computing power, which he documents. And broadly I buy his argument that that growth in power will continue much as he outlines. I don't buy his conclusion about the timing of the Singularity, because building an artificial brain with as much power and as many PARTS as the human brain, or much greater, and building a SYSTEM of mind (creating and integrating possibly thousands of cognitive departments) are two different things. Nevertheless he is pointing out something real and important even if his conclusions are wrong - and it's certainly worth thinking about a Singularity.

When you and others speculate about the future emotional systems of AGIs, though, that is not in any way based on any comparable reality. There are no machines with functioning emotional systems at the moment on which you can base predictions.

And when Ben speculates about it being possible to build a working AGI with a team of ten or so programmers, that too is not based on any reality. There's no assessment of the nature and the size of the task, and no comparison with any actual comparable tasks that have already been achieved. It's just tossing figures out in the air.

And when people speculate about the speed of an AGI take-off, that too is not based on any real, comparable take-offs of any kind.

You guys are much too intelligent to be engaging in basically pointless exercises like that. (Of course, Ben now seems to be publicly committed to proving me wrong by the end of next year with some kind of functional AGI, but even if he did - and he would probably be the first AI person ever to fulfil such a commitment - it still wouldn't make his prediction as presented any more grounded.)

P.S. Re your preaching friendly, non-aggressive AGIs, may I recommend a brilliant article by Adam Gopnik - mainly on Popper but featuring other thinkers too. Here's a link to the Popper part:

http://www.sguez.com/cgi-bin/ceilidh/peacewar/?C392b45cc200A-4506-483-00.htm

But you may find a fuller version elsewhere. The gist of the article is:

"The Law of the Mental Mirror Image. We write what we are not. It is not merely that we fail to live up to our best ideas but that our best ideas, and the tone that goes with them, tend to be the opposite of our natural temperament" --Adam Gopnik on Popper in The New Yorker

It's worth thinking about.






Richard: You could start by noticing that I already pointed out that evolution
cannot play any possible role.

I rather suspect that the things you call "speculation" and "fantasy" only seem that way to you because you have not understood them, since, in fact, you have not addressed any of the specifics of those proposals. And when people do not address the specifics, but immediately start to slander the whole idea as "fantasy", they usually do so because they cannot follow the arguments.

Sorry to put it so bluntly, but I just talked so *very* clearly about why evolution cannot play a role, and you ignored every single word of that explanation and instead stated, baldly, that evolution was the most important aspect of it. I would not criticise your remarks so much if you had not just demonstrated such a clear inability to pay any attention to what is going on in this discussion.


Richard Loosemore





Mike Tintner wrote:
Every speculation on this board about the nature of future AGIs has been pure fantasy - even those which try to dress themselves up in some semblance of scientific reasoning. All this speculation, for example, about the friendliness and emotions of future AGIs has been nonsense - and often from surprisingly intelligent people.

Why? Because until we have a machine that even begins to qualify as an AGI - that has the LEAST higher adaptivity - until, in other words, AGIs EXIST - we can't begin seriously to predict how they will evolve, let alone whether they will "take off." And until we've seen a machine that actually has functioning emotions, and seen what purpose they serve, we likewise can't predict their future emotions.

So how can you cure yourself of this apparently incorrigible need to produce speculative fantasies with no scientific basis in reality whatsoever?

I suggest: first, speculate about the following:

What will be the next stage of HUMAN evolution? What will be the next significant advance in the form of the human species - as significant, say, as the advance from apes, or, OK, from some earlier form like the Neanderthals?

Hey, if you are prepared to speculate about fabulous future AGIs, predicting that relatively small evolutionary advance shouldn't be too hard. But I suggest that if you do think about future human evolution, your mind will start clamming up. Why? Because you will have a sense of physical/evolutionary constraints (unlike with AGI, where people seem to have zero sense of technological constraints) - an implicit recognition that any future human form will have to evolve from the present form - and to make predictions, you will have to explain how. And you will know that anything you say may only serve to make an ass of yourself. So any prediction you make will have to have SOME basis in reality and not just in science fiction. The same should be true here.








