Kaj Sotala wrote:
My first attempt at writing something Singularity-related that
somebody might actually even take seriously. Comments appreciated.

--------------------------------

http://www.saunalahti.fi/~tspro1/artificial.html

In recent years, some thinkers have raised the issue of a so-called
"superintelligence" being developed within our lifetimes and radically
revolutionizing society. A case has been made (see, for instance,
[Vinge, 1993] [Bostrom, 2000] [Yudkowsky, 2006]) that once we have a
human-equivalent artificial intelligence, it will soon develop to
become much more intelligent than humans - with unpredictable results.

Often, people seem to have less trouble with the idea of machine
superiority than with the idea of us actually developing an artificial
intelligence within our lifetimes - to most people, true machine
intelligence currently seems very remote. This text will attempt to
argue that there are several different ways in which artificial
intelligence may be developed in the near future, and that the
probability of this happening is high enough that the possibility
needs to be considered when making plans for the future.

Kaj,

My first thought is that my own reasons for thinking it will happen are extremely local, whereas the ones you cite are global.

What I mean is that you list a number of very general reasons why it might happen, but in each case I see a conflict between the optimism of your summary and the reality on the ground. This makes me very nervous.

For example, you cite the interdisciplinary nature of this field. Well, in fact, I am looking at it from the inside (having crossed disciplines myself, and having been hanging out with people in the AI/CogSci field ever since I graduated), and what I see is *lip-service* interdisciplinary work that is always a shotgun marriage at best. I see little fragments of ideas going across, but always with distortion and simplification, and almost always in such a way that the idea is *appropriated* rather than used. To be blunt about it, people take some phrase x-y-z from the field across the fence, find an excuse to say that they are doing x-y-z by incorporating some shadow of the real x-y-z, and then they collect cool points from all the folks in their own field, who don't really know what x-y-z is but are impressed by the sound of it.

I exaggerate slightly, but you get the general idea. To an outsider this might sound like cynicism on my part -- people think, hey, these are scientists, right? They wouldn't be that crummy, surely? -- but the horrible, horrible truth is that this really is the way things happen. You would not believe the extent to which science these days is a matter of personal spin and marketing. And this is especially pronounced in the case of "interdisciplinary" interactions.

In cognitive psychology, for example, interaction with AI folks used to be a big thing 30 years ago. Then it gradually died out. Today, it hardly happens at all, except with the rump of the old-school AI folks.

The same story can be applied separately to each of the fields you talk about. Brain imaging in particular.

But now, on a more positive note: I think that none of the factors you cite will make any difference, but that some progress will come out of left field, and that in the fullness of time it will turn out that the one thing that made the Singularity happen was a single set of discoveries, theoretical or practical in nature.

I happen to believe that I am working in precisely that direction, though I have to keep my self-confidence in check for fear of giving the wrong impression. It doesn't have to be me who gets it to work; it could be you, or someone else none of us has heard of yet.

Summary: I take everything that the big guns are doing with a pinch of salt, but I have enormous faith in the creativity of the girls and boys out there who are trying to think outside the box. They are the ones who are going to make it happen.


Richard Loosemore
