Joshua Fox wrote:
Singularitarians often accuse the AI establishment of a certain
closed-mindedness. I always suspected that this was the usual biased
accusation of rebels against the old guard.

Although I have nothing new to add, I'd like to give some confirmatory
evidence on this from one who is not involved in AGI research.

When I recently met a PhD in AI from one of the top three programs in
the world, I expected some wonderful nuggets of knowledge on the
future of AGI, if only as speculations, but I just heard the
establishment line as described by Kurzweil et al.: AI will continue
to solve narrow problems, always working in tandem with humans who
will handle the important parts of the tasks. There is no need, and no
future, for human-level AI. (I responded, by the way, that there is
obviously a demand for human-level intelligence, given the salaries
that we knowledge-workers are paid.)

I was quite surprised, even though I had been prepped for exactly this.


Joshua,

I have been noticing this problem for some time, and have started trying to draw people's attention to it.

One reason that I have become sensitized to it is that I am in the (relatively) unusual position of having been a fully paid-up member of five different scientific/technical professions -- physics, cognitive science, parapsychology, artificial intelligence and software engineering -- and I have recently come to realize that there are quite extraordinary differences in attitude among people trained in these areas. Differences, mind you, that make a huge difference in the way that research gets carried out.

Even within AI research there was a schism that occurred roughly 20 years ago, when most of the old guard were thrown out. Those revolutionaries now control the field, and it was their attitude that you encountered.

Here is my analysis of what was going on in the mind of the person who made those remarks to you.

First, the current AI folks (who used to call themselves the "Neats," in contrast to their enemies, who were the "Scruffs") are brought up on a diet of mathematics. They love the cleanliness and beauty of math in a way that is something close to an obsessive personality flaw. What that means in practice is that they have a profound need for intelligent systems to be clean and formal and elegant.

Note carefully: they *need* it to be that way. This is non-negotiable, as far as they are concerned. If someone came up with evidence to show that intelligence probably could not, sadly, be captured as a formal system with an elegant, provable, mathematical core, they would not be flexible enough to accept that evidence; they would fight it all the way down to the wire.

[I say this as a plug, of course: I have presented just such evidence (among other places, at the AGIRI workshop last year), and it has been met with astonishing outbursts of irrational scorn. I have never seen anyone make so many lame excuses to try to destroy an argument.]

But now, even if they do have that attitude, why don't they just believe that a nice, mathematical approach will eventually yield a true AGI, or human-level intelligence?

The answer to that question, I believe, is threefold:

1) They do not have much of a clue about how to solve some aspects of the AGI puzzle (unsupervised learning mechanisms that can ground their systems in a completely autonomous way, for example), so they see an immense gulf between where they are and where we would like them to get to. They don't know how to cross that gulf.

2) They try to imagine some of their favorite AI mechanisms being extended to cope with AGI, and they immediately come up against problems that seem insurmountable. A classic case is the Friendliness Problem that is the favorite obsession of the SIAI crowd: if you buy the standard AI concept of how to drive an AI (the goals-and-supergoals method, or what I have referred to before as the "Goal Stack" approach; a rough sketch appears below), what you get is an apparently dangerous situation in which the AI could easily go berserk, and they cannot see any way to fix it.

3) Part of the philosophy of at least some of these Neat-AI folks is that human cognitive systems are trash. This ought not to make any difference (after all, they could ignore human cognition and still believe that AGI is achievable by other means), but I suspect that it exerts a halo effect: they associate the idea of building a complete human-level AI with the thought of having to cope with all the messy details involved in actually getting a real system to work, and that horrifies them. It isn't math. It's dirty.

Put these things together and you can see why they believe that AGI is either not going to happen, or should not happen.
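For anyone who has not come across the term, here is a minimal sketch of what I mean by a "Goal Stack" architecture. It is purely illustrative, written in Python, and not taken from any particular system; every name in it (Goal, GoalStackAgent, and so on) is hypothetical. The essential feature is that the system's entire behavior is slaved to whatever supergoal sits at the bottom of the stack, with subgoals pushed on top as the system decomposes its tasks -- which is exactly the property that makes people worry about such a system going berserk if the supergoal is even slightly wrong.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Goal:
    description: str
    # A goal is either primitive (it has an action to execute) or
    # decomposable (it knows how to break itself into subgoals).
    action: Optional[Callable[[], None]] = None
    decompose: Optional[Callable[[], List["Goal"]]] = None


class GoalStackAgent:
    def __init__(self, supergoal: Goal):
        # The supergoal is pushed first, so everything the agent ever does
        # is ultimately in the service of that one goal.
        self.stack: List[Goal] = [supergoal]

    def step(self) -> bool:
        """Process the goal on top of the stack; return False when done."""
        if not self.stack:
            return False
        goal = self.stack.pop()
        if goal.action is not None:
            goal.action()  # primitive goal: just do it
        elif goal.decompose is not None:
            # Replace the goal with its subgoals, reversed so that the
            # first subgoal ends up on top and is handled first.
            for sub in reversed(goal.decompose()):
                self.stack.append(sub)
        return True


if __name__ == "__main__":
    make_tea = Goal(
        "make tea",
        decompose=lambda: [
            Goal("boil water", action=lambda: print("boiling water")),
            Goal("steep leaves", action=lambda: print("steeping leaves")),
        ],
    )
    agent = GoalStackAgent(make_tea)
    while agent.step():
        pass

Run it and it prints "boiling water" and then "steeping leaves": every action the agent ever takes exists only because it was spawned, directly or indirectly, by the single supergoal at the bottom of the stack.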

Overall, I believe this attitude problem is killing progress. Quite seriously, I think that someone is eventually going to put money behind an AGI project that is specifically NOT staffed with mathematicians, and all of a sudden everyone is going to wonder why that project is making rapid progress.

I mentioned some other fields earlier. Most of these suffer from analogous problems. In cognitive psychology, for example, there is an accepted way of doing things, and if you do not follow the orthodoxy, you are dead in the water. That orthodoxy involves a very tight experiment-and-theory loop, in which pretty much every experiment must have a mini-theory to explain its own data, but the mini-theories do not have to mesh together into a coherent whole (leaving some theories absurdly useless outside their tiny domain). Also, if a piece of work is presented as theory only, with no immediate experimental data to back it, it is generally regarded as worthless and unpublishable.

These sciences (and engineering/mathematical disciplines) have become rigid, conservative and moribund. Rewards are given to those who find ways to play the game and spread their name across large numbers of published papers. People are also fiercely protective of their domains.

Unfortunately, this is a deep potential well we are in. It will take more than one renegade researcher to dig us out of it.




Richard Loosemore.