On Jun 28, 2004, at 7:08 PM, Brad Wyble wrote:
The more one studies the specifics of the solutions the brain uses, the more one realizes the incredible variety of strategies that could have been used, yes economically, and yes in our universe.


But what is so extraordinary about this? This is just another expression of the Church-Turing thesis.

The brain, as far as I can discern, is a pretty good approximation of a non-axiomatic pattern (NAP) computer. The brain looks like such a system, and more importantly, shows all the strengths and weaknesses of such a system. Coincidence? I doubt it.

Every algorithm and strategy that is possible is expressible on extremely simple machinery, e.g. SK combinators. Adaptivity and tuning across multiple strategies would actually be expected, and would be optimal behavior, if it were a NAP computing model.
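To make the SK claim concrete, here is a minimal sketch in Python (the language and the names S, K, I, TRUE, FALSE are my own illustrative choices) showing that the identity function and two-argument selectors fall out of just the two combinators:

    # S and K combinators written as curried closures.
    S = lambda f: lambda g: lambda x: f(x)(g(x))   # S f g x = f x (g x)
    K = lambda x: lambda y: x                      # K x y = x

    # Identity derived from S and K alone: I = S K K
    I = S(K)(K)
    assert I(42) == 42

    # Church booleans (two-argument selectors) from the same two primitives.
    TRUE = K             # K a b = a            -> selects the first argument
    FALSE = K(I)         # (K I) a b = I b = b  -> selects the second argument
    assert TRUE("first")("second") == "first"
    assert FALSE("first")("second") == "second"

Everything more elaborate is, in principle, just more of the same composition.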


Yes, I understood your point. I do have a reasonable understanding of the underlying problem. The difference is simply one of perspective. I see limitless solutions, you see a narrow track.


I too see limitless theoretical solutions. But since I am interested in solutions on real hardware, the choices are greatly reduced. It is still a rich space, but it has well-defined boundaries that must be respected as a matter of practical engineering. The human brain, from everything I've gleaned, down to how individual neurons are wired and interact with each other, appears to fall within that space. But unlike many, I started with the math and arrived at biology, not the other way around.

To recycle an analogy, the design space of airplanes is both rich and extremely bounded. The Wright brothers' first flyer and the very advanced F-22 occupy the same narrow design space even though the technology to express that design space has improved dramatically.


I've been involved with the hard AI, the weak AI and the neurophys approaches to these problems at an academic level. There is very little agreement there.


Hrmmm.... let me rephrase.

For any theoretical approach that works from the ground up in mathematics, there is very little disagreement. A model that isn't grounded even loosely in basic theoretical principles isn't much of a model at all. Most AI research, hard or weak, has revolved around making adjustments to the epicycles of the intelligence solar system.

In my not so humble opinion, THE historical problem with AI research and research on intelligence in general is that it has been a random walk through the phase space. There are no good mathematical reasons to buy into most of the models proposed, because almost all of them amount to throwing things at the wall to see what sticks in implementation. You'll have a hard time finding the solution if you haven't specified what the solution will look like in refined and rigorous terms.


So by knowledge you mean essentially empirical facts, that 1 + 2 = 3.

The hard part in AGI is not finding that knowledge, but developing an agent that can distill that knowledge. That's what we do, as people.


No, knowledge is derivative patterns. 1+2=3 is a simple low-order pattern. That pattern has no meaning. When that pattern mixes and interacts with other patterns, you may discover other useful patterns. Knowledge is the derived relationships and associations between patterns (all of which are themselves patterns).


That's certainly not true. I can buy a chemistry textbook full of essentially mathematical knowledge of how atoms interact. That knowledge doesn't have to be learned either; I can use it straight from the book.


Nonsense. A chemistry textbook is a collection of symbols and patterns. One has to already possess a not insignificant set of patterns to be able to extract useful derivative patterns from the chemistry text; a chemistry textbook assumes the reader has already learned a rather vast amount of prior patterns that serve as context.

Data is the raw information stream. Knowledge is the result of the inductive process on that stream.
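To make the data/knowledge distinction concrete, a toy sketch in Python (the stream and the constant-difference rule are invented purely for illustration, not a claim about the actual inductive machinery):

    # A raw information stream: just data, no meaning attached.
    stream = [3, 5, 7, 9, 11]

    # Induction: look for a regularity, here a constant successive difference.
    diffs = {b - a for a, b in zip(stream, stream[1:])}
    if len(diffs) == 1:
        step = diffs.pop()                # the derived pattern, i.e. "knowledge"
        prediction = stream[-1] + step    # the pattern lets us go beyond the data
        print(f"induced step {step}, predicted next value {prediction}")
    else:
        print("no constant-difference pattern found in this stream")

The list is data; the induced step and the prediction it licenses are knowledge in the sense above.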


This perspective is all dependent on your view that there is essentially only one solution to the problem.


No, it is dependent on my view that there is only one *efficient* solution to the general problem. Or more accurately, that it is a peak in the phase space and that you have to be pretty near that peak to realize AI in practice.


From my perspective, if you imagine 10 different companies developing AGI in different directions, they'll be discovering different kinds of knowledge about the regularities of economic reality. So while company X's AGI performs well in market X, company Y's AGI cleans up market Y.


Heh. This is a narrow AI perspective rather than a General Intelligence perspective.

For simple and extremely well-specified domains, narrow domain implementations will outperform GI; in this you are correct. But for everything else, a GI's ability to adaptively tailor itself to the patterns of complex domains means it will generally outperform narrow intelligence.

Any GI worth a damn should be able to thoroughly extract all the patterns in an economic reality, from the low-order to the high-order, rather than selectively as you suggest. If it can't do this, it isn't "general".


j. andrew rogers

