The problem with a truly general intelligence is that the search spaces
are too large. So one uses specializing heuristics to cut down the
amount of search space. This does, however, inevitably remove a piece
of the "generality". The benefit is that you can answer more
complicated questions quickly enough to be useful. I don't see any way
around this, short of quantum computers, and I'm not sure about them (I
have this vague suspicion that there will be exponentially increasing
probabilities of error, requiring hugely increased error recovery
systems, etc.).
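To make that trade-off concrete, here is a toy sketch (my own example, nothing
from this thread): greedy best-first search on an empty grid with a
Manhattan-distance heuristic, versus uninformed breadth-first search. The
heuristic expands far fewer nodes, but it only "knows" about grid geometry,
which is exactly the piece of generality that got traded away.

from collections import deque
import heapq

def neighbors(pos, size):
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny)

def bfs(start, goal, size):
    # Uninformed search: expands nodes in breadth-first order.
    seen, frontier, expanded = {start}, deque([start]), 0
    while frontier:
        node = frontier.popleft()
        expanded += 1
        if node == goal:
            return expanded
        for n in neighbors(node, size):
            if n not in seen:
                seen.add(n)
                frontier.append(n)

def greedy(start, goal, size):
    # Heuristic search: always expands the node that looks closest to the goal.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    seen, frontier, expanded = {start}, [(h(start), start)], 0
    while frontier:
        _, node = heapq.heappop(frontier)
        expanded += 1
        if node == goal:
            return expanded
        for n in neighbors(node, size):
            if n not in seen:
                seen.add(n)
                heapq.heappush(frontier, (h(n), n))

print("BFS expanded nodes:   ", bfs((0, 0), (49, 49), 50))
print("Greedy expanded nodes:", greedy((0, 0), (49, 49), 50))

On a 50x50 grid the uninformed search ends up expanding essentially every
node, while the heuristic one goes nearly straight to the goal.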
This doesn't mean that we have currently reached the limits of AGI. It
means that whatever those limits are, there will always be heuristically
tuned intelligences that will be more efficient in most problem domains.
Of course, here I am taking a strict interpretation of "general", as in
General Relativity vs. Special Relativity. Notice that while Special
Relativity has many uses, General Relativity is (or at least was until
quite recently) mainly of theoretical interest. Be prepared for a
similar result with General Intelligence vs. Special Intelligence. (The
difference here is that Special Intelligence comes in lots of modules
adapted for lots of special circumstances.)
Personally, I believe that the most effective AI will have a core
general intelligence, which may be rather primitive, and a huge number of
specialized intelligence modules. The tricky part of this architecture
is designing the various modules so that they can communicate. It isn't
clear that this is always reasonable (consider the interfaces between
chess and cooking), but if the problem can be handled in a general
manner (there's that word again!), then one of the intelligences could
be specialized for "message passing". In this model the "core general
intelligence" will be for use when none of the heuristics fit the
problem. And its attempts will be watched by another module whose
specialty is generating new heuristics.
Plausible? I don't really know. Possibly too complicated to actually
build. It might need to be evolved from some simpler precursor.
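To be a bit more concrete, here is a hypothetical sketch of that architecture
(every class and function name is my own invention, not any existing system):
specialized modules claim the problems they recognize, a slow and primitive
core handles whatever is left, and the record of unhandled problems is the raw
material for a module whose specialty is proposing new heuristics.

from typing import Callable, List

class Module:
    # A specialized intelligence: it can recognize and solve a narrow
    # class of problems, and nothing else.
    def __init__(self, name: str, matches: Callable[[str], bool],
                 solve: Callable[[str], str]):
        self.name, self.matches, self.solve = name, matches, solve

class CoreGeneralIntelligence:
    # Primitive fallback; in a real system this would be slow, general search.
    def solve(self, problem: str) -> str:
        return "brute-force attempt at: " + problem

class Agent:
    def __init__(self, modules: List[Module], core: CoreGeneralIntelligence):
        self.modules, self.core = modules, core
        self.unhandled: List[str] = []  # raw material for a heuristic-generating module

    def handle(self, problem: str) -> str:
        for m in self.modules:          # try the cheap, specialized heuristics first
            if m.matches(problem):
                return m.name + ": " + m.solve(problem)
        self.unhandled.append(problem)  # nothing fit; fall back on the core
        return "core: " + self.core.solve(problem)

chess = Module("chess", lambda p: "chess" in p, lambda p: "play Nf3")
cooking = Module("cooking", lambda p: "recipe" in p, lambda p: "add salt, taste, repeat")
agent = Agent([chess, cooking], CoreGeneralIntelligence())
print(agent.handle("chess opening against e4?"))
print(agent.handle("recipe for lentil soup?"))
print(agent.handle("prove the four color theorem"))

Note that the hard part, the inter-module communication, is waved away here:
everything just passes strings around, which is exactly where the "message
passing" specialist would have to earn its keep.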