Ah! That makes your position much clearer, thanks. To paraphrase to make
sure I understand you, the reason you don't regard human readability as a
critical feature is that you're of the "seed AI" school of thought that says
we don't need to do large-scale engineering, we just need to solve the
scientific problem of how to create a small core that can then auto-learn
the large body of required knowledge.

I spent a lot of time on every known variant of that idea and some AFAIK
hitherto unknown ones, before coming to the conclusion that I had been
simply fooling myself with wishful thinking; it's the perpetual motion
machine of our field. Admittedly biology did it, but even with a whole
planet for workspace it took four billion years and "I don't know about you
gentlemen, but that's more time than I'm prepared to devote to this
enterprise". When we try to program that way, we find there's an awful lot
of prep work to generate a very small special-purpose program A to do one
task, then to generate small program B for another task is a whole new
project in its own right, and A and B can never be subsequently integrated
or even substantially upgraded, so there's a hard threshold on the amount of
complexity that can be produced this way, and that threshold is tiny
compared to the complexity of Word or Firefox let alone Google let alone
anything with even a glimmer of general intelligence.

> One of the arguments against this position, of course, is that We Don't
> Care, because if we went to enough trouble we could 'hand-build' a
> complete system, or get it up above some threshold of completeness
> beyond which it would have enough intelligence to be able to pick up the
> learning ball and go on to build new knowledge in a viable way (Doug
> Lenat said this explicitly in his Google lecture, IIRC).


Oh no, I don't believe that. I don't believe a complete system can be
hand-built; Google wasn't, after all: most of what it knows was auto-learned
(admittedly from other human-generated material, but not as part of the same
project or organization). Conversely (depending on how you look at it)
either there is no completeness threshold, or it's so far beyond anything we
can coherently imagine today that there might as well not be one, so the
seed AI approach can't work either.

In reality, software engineering and (above a minimum adequacy threshold)
auto-learning are both going to stay important all the way up, so we have
to cater for both. And from the software engineering viewpoint (which is
what I'm talking about here)... well, would Google ever have worked if
they'd used things like foo_A1 and foo_A27 for all their variable names?
No. QED :)
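
To make that concrete, here's a toy sketch (hypothetical Python, nothing to
do with Google's actual code): the same little word-scoring function written
first with the kind of auto-generated names a readability-blind code
generator tends to emit, then with names a human engineer could actually
review, debug, and integrate.

# Hypothetical sketch: identical logic, auto-generated vs. readable names.

# What a generator with no notion of human readability produces:
def foo_A1(foo_A27, foo_A3):
    foo_A9 = {}
    for foo_A12 in foo_A27:
        for foo_A5 in foo_A12.split():
            foo_A9[foo_A5] = foo_A9.get(foo_A5, 0) + foo_A3.get(foo_A5, 1)
    return foo_A9

# The version another engineer can maintain and build on:
def score_terms(documents, term_weights):
    """Sum a weight for each occurrence of each term across the documents."""
    scores = {}
    for document in documents:
        for term in document.split():
            scores[term] = scores.get(term, 0) + term_weights.get(term, 1)
    return scores

The two behave identically; the difference is that only the second can be
picked up, fixed, and extended by someone other than the program that
generated it, which is the whole point.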
