Charles,

It's a good example. What it also brings out is the naive, totalitarian premise 
of RSI - the implicit assumption that you can comprehensively standardise your 
ways of representing and solving problems about the world (as well as the domains 
of the world itself). [This, BTW, has been the implicit premise of literate, 
rational culture since Plato.]

The reason we encourage and foster competition in society - and competing, 
diverse companies and approaches - is that we realise that 
competition/diversity is a fundamental part of evolution, at every level, and 
necessary to keep developing better solutions to the problems of life.

What cog sci and AI haven't realised is that humans are also individually 
designed "competitively", with conflicting emotions and ideas and ways of 
thinking inside themselves - a necessary structure for an AGI. And such 
conflict inevitably stands in the way of any RSI.

It'd be interesting to have Minsky's input here, because one thing he stands 
for is the principle that human/general minds have to be built kludgily, with 
many different ways to think - different knowledge systems. We clearly aren't 
meant to - and simply can't - think, for example, just logically and 
mathematically. Evolution and human evolution/history have relentlessly built 
up these GIs with ever more complex repertoires of knowledge representation 
and sensors, because it's a good and necessary principle: the more complex 
you want your interactions with the world to be, the more ways of 
representing and sensing it you need. 





        > Charles/MT: If RSI were possible, then you should see some signs of it 
        > within human society, of humans recursively self-improving - at however 
        > small a scale. You don't, because of this problem of crossing and 
        > integrating domains. It can all be done, but laboriously and 
        > stumblingly, not in some simple, formulaic way. That is culturally a 
        > very naive idea.

        I hope nobody minds if I interject with a brief narrative concerning a 
recent experience. Obviously I don't speak for Ben Goertzel, or anyone else who 
thinks RSI or recognizing superior intelligence is possible.

        As it happened, I was looking for a new job a while back, and landed an 
interview with a major corporate entity. When I spoke to the HR representative, 
she bemoaned the lack of hiring standards, especially for her own department. 
"It's impossible," she said, "As a consultant explained it to us a few years 
ago, the corporation changes with each person we hire or fire, changes into a 
related but different entity. If we measure the intelligence of a corporation 
in terms of how well suited it is to profit from its environment, my job is to 
make sure that people we hire (on average) result in the corporation becoming 
more intelligent." She looked at me for sympathy. "As if all our resources were 
enough to recognize (much less plan) an entity more intelligent than 
ourselves!" She had a point. "What's worse, we're expected to hire new HR staff 
and provide training that will make our department more effective at hiring new 
people." I nodded. That would lead to recursive self improvement (RSI), which 
is clearly impossible. Finally she said I seemed like the sympathetic sort, and 
even though that had nothing to do with her worthless hiring criteria, I could 
have the job and start right away.

        I thought about the problem later, and eventually concluded that one 
good HR strategy would be to form hundreds or thousands (millions?) of 
corporations with stochastic methods for hiring, firing, training, merging and 
creating spinoffs, perhaps using GP or MOSES or some such. Eventually, 
corporations would emerge with superior intelligence.

        The alternative would be a massive cross-disciplinary effort, only 
imaginable by a super-neo-da Vinci character who's a master of psychology, 
mathematics, economics, manufacturing, politics -- essentially every field of 
human knowledge, including medical sciences, history and the arts.

        I guess it doesn't look too hopeful, so we're probably going to be 
stuck with hiring, firing and training practices that mean absolutely nothing, 
forever.

        Charles Griffiths