Matt Mahoney wrote:
Sorry, but IMO large databases, fast hardware, and cheap memory ain't got nothing to do with it.

Yes it does. The human brain has vastly more computing power, memory and knowledge than all the computers we have been doing AI experiments on. We have been searching for decades for shortcuts that would fit our machines and have found none. If any existed, then why hasn't human intelligence evolved in insect-sized brains?


If the brain used its hardware in such a way that (say) a million neurons were required to implement a function that, on a computer, required a few hundred gates, your comparisons would be meaningless. We do not know one way or the other what the equivalence ratio is, but your statement implicitly assumes a particular (and unfavorable) ratio: you simply cannot make that statement without good reasons for assuming a ratio. There are no such reasons, so the statement is meaningless.

"We have been searching for decades to find shortcuts to fit our machines"? When you send a child into her bedroom to search for a missing library book, she will often come back 15 seconds after entering the room with the statement "I searched for it and it's not there." Drawing definite conclusions is about equally reliable, in both these cases.

It used to be a standing joke in AI that researchers would claim there was nothing wrong with their basic approach; they just needed more computing power to make it work. That was two decades ago: has this lesson been forgotten already?



As long as knowledge accumulates exponentially (as it has been doing for centuries) and Moore's law holds (which it has since the 1950s), you can be sure that machines will catch up with human brains. When that happens, a lot of AI problems that have been stagnant for a
long time will be solved all at once.

A hundred dog's dinners maketh not a feast.


Having a clue about just what a complex thing intelligence is, has everything to do with it.

Which is why we will not be able to control AI after we produce it. It is not possible, even in theory, for a machine (your brain) to simulate or predict another machine with more states or greater Kolmogorov complexity than its own [1]. The best you can do is duplicate the architecture and learning mechanisms of the brain and feed it data you can't examine because there is too much of it. You will have built AI, but you won't know how it works.

[1] Legg, Shane (2006). Is There an Elegant Universal Theory of Prediction? Technical Report IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland.
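
For readers without the paper to hand: stated very loosely, and in notation of my own choosing rather than the paper's, the flavor of the kind of result being cited is that a predictor p which correctly predicts every computable sequence of Kolmogorov complexity at most n must itself be complex, roughly

    K(p) \ge n - O(\log n)

i.e. a simple machine cannot reliably anticipate every machine substantially more complex than itself. The exact statement, constants and conditions are in the technical report.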

A completely spurious argument. You would not necessarily *need* to "simulate or predict" the AI, because the kind of "simulation" and "prediction" you are talking about is low-level, exact state prediction (this is inherent in the nature of proofs about Kolmogorov complexity).

It is entirely possible to build an AI in such a way that the general course of its behavior is as reliable as the behavior of an Ideal Gas: you can't predict the position and momentum of every particle, but you can certainly predict such overall characteristics as temperature, pressure, and volume.
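
To make the gas analogy concrete, here is a minimal sketch in Python (the particle count, mass and velocity spread are arbitrary values chosen only for illustration): two independent random realizations of a toy "gas" disagree completely about any individual particle, yet agree closely on the aggregate quantity that plays the role of temperature.

import random
import statistics

N = 100_000        # number of particles (arbitrary choice for the sketch)
MASS = 1.0         # arbitrary particle mass
SIGMA = 1.0        # std. dev. of each velocity component

def sample_gas(seed):
    """One random realization: per-particle kinetic energies."""
    rng = random.Random(seed)
    energies = []
    for _ in range(N):
        # 3-D velocity with independent Gaussian components
        vx, vy, vz = (rng.gauss(0.0, SIGMA) for _ in range(3))
        energies.append(0.5 * MASS * (vx * vx + vy * vy + vz * vz))
    return energies

run_a = sample_gas(seed=1)
run_b = sample_gas(seed=2)

# Any single particle's energy differs wildly between the two runs...
print("particle 0 energy:", run_a[0], "vs", run_b[0])

# ...but the aggregate (the "temperature") is essentially the same.
print("mean energy:", statistics.mean(run_a), "vs", statistics.mean(run_b))

The point is only the statistical one: exact microstate prediction can be impossible while coarse-grained regularities remain entirely predictable, and it is the latter kind of predictability that matters for the argument about controlling an AI.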

Richard Loosemore
