***
 > > Philip: I think an AGI needs other AGIs to relate to as a community, so that a
 > > community of learning develops with multiple perspectives available.
 > > This, I think, is the only way that the accelerating bootstrapping of
 > > AGIs can be handled with any possibility of being safe.
  
 > Ben: That feels to me like a lot of anthropomorphizing... 
 
Why?  Why would humans be the only super-intelligent GIs to have perspectives or points of view?  I would have thought it inevitable for any resource-limited, experience-limited GI system.  And any AGI in the real world is going to be resource- and experience-limited.
***
 
But, to phrase the point loosely and metaphorically: would you rather have one person with an IQ of 200, or four people with IQs of 50?

Ten computers of intelligence N, or one computer with intelligence 10*N?

Sure, the ten computers of intelligence N, working together, will be a bit smarter than N because of cooperative effects....  But how much more?
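To make the shape of that question concrete, here's a throwaway Python sketch.  The sublinear exponent alpha is a completely made-up free parameter, not a real theory of collective intelligence:

# Toy model: n cooperating machines of individual intelligence N achieve
# roughly N * n**alpha, where alpha < 1 stands in for coordination overhead.
def group_intelligence(n_machines: int, individual: float,
                       alpha: float = 0.5) -> float:
    return individual * n_machines ** alpha

N = 100.0
print(group_intelligence(10, N))      # ten machines of intelligence N: ~316
print(group_intelligence(1, 10 * N))  # one machine of intelligence 10*N: 1000

Under those toy numbers the group does come out smarter than any single member, but nowhere near the single big machine -- though of course the real exponent, if there is one, is anybody's guess.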
 
You can say that true intelligence can only develop through socialization with peers -- but why?  How do you know that will be true for AIs as well as humans?  I'm not so sure....
 
 
 ***
 The question is only partly technical - there are many other issues that will determine the outcome.  If for no other reason, the monopoly regulators are probably not going to allow all the work requiring an AGI to go through one company.  Also, users of AGI services are not going to want to deal with a monopolist - most big companies will want at the very least 2-3 AGI service companies in the marketplace.  And it's unlikely that these service companies will want to buy all their AGI grunt from just one company.
 
 Even in the CPU market there's still AMD serving up a bit of competition to Intel.  And Windows isn't the only OS in the market. 
 
 And then there's the wider community - if there are going to be AGIs at all, will the community rest easier if they think there is just one super-AGI?  What do people think of Oracle's plan to have one big government database?
***
 
I don't know how society is going to react to the creation of a super-smart AGI.  But clearly one thing it depends on is the rate of advance.  If Eliezer is right, the transition from a pretty smart AGI to a superintelligent AGI will be quick enough that the slow mechanisms of society won't have a chance to react!  On the other hand, if it's slow enough, then, yeah, there will be imitators and different breeds of AGIs out there.  But still, we might choose to build only one Novamente...
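To spell out the "quick transition" intuition with a toy calculation -- my own illustrative numbers, not Eliezer's actual argument: if self-improvement compounds on itself, the capability curve is exponential, and the whole climb happens fast.

capability = 1.0           # arbitrary units; call 1.0 "pretty smart"
rate = 0.5                 # made-up fractional improvement per month
months = 0
while capability < 100.0:  # call 100.0 "superintelligent", also made up
    capability *= 1 + rate # each month's gains build on the last month's
    months += 1
print(months)              # 12 months under these toy numbers

A transition that fits inside a single budget cycle doesn't leave regulators or standards bodies much room to react.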
 
***
 In any case it's clearly not safe to have just one AGI in existence - if the one AGI goes feral, the rest of us are going to need access to some pretty powerful AGIs to contain/manage the feral one.  Humans have the advantage of numbers, but in the end we may not have the intellectual power or speed to counter an AGI that is actively setting out to threaten humans.
***
 
I don't see why multiple superintelligent AGIs are safer than a single one....
 
 
 ***
 There is at least one other option that you haven't mentioned, and that is to take longer to create the AGI via the 100% experience-based learning route, so you can free some resources to devote to following the 'hard-wiring plus experiential learning' route as well.
 
 It's not going to be the end of the world if we take a little longer to create a safe AGI...
***
 
I don't see it that way.  I see it as fairly likely that some human will end the world in the next 10-20 years through, for example, genetically engineering a new form of plague and releasing it into the atmosphere.  (I admit that I'm picking up some of the paranoid attitude toward terrorism that's common in my new hometown of Washington, DC ;-).
 
It could be the end of the world, although few humans want to admit it...
 
***
 ...but it could be the end of the line for all humans, or at least those humans not allied with the AGI, if we get a feral or dangerous AGI by mistake.
***
 
 
***
 And maybe by pursuing both routes simultaneously you might generate more goodwill, which might increase the resourcing levels a bit further down the track.
***
 
Well, the bottom line is that the hard-wiring approach doesn't make that much intuitive sense to me.  But I could be wrong; I've been wrong plenty of times before.
 
We're going to have the Novamente book published before we have a super-smart Novamente ready.  So, hopefully, someone will read the book and formulate a good approach to hard-wiring ethical rules, in detail....  If it makes enough sense, I'll be convinced that it's the way to do things....  I'm not closed-minded about it, I just don't see why it's a good idea yet, and I don't have enough intuition for your idea to design something in detail based on it myself...
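For what it's worth, here's the kind of thing I picture when I hear "hard-wiring plus experiential learning": a fixed, human-authored constraint layer that vetoes actions proposed by a learned component.  All the names below are hypothetical illustrations, not Novamente code:

from typing import Callable, List

def hardwired_check(action: str) -> bool:
    # Fixed rules, written by humans and never modified by learning.
    forbidden = {"harm_human", "unbounded_self_replication"}
    return action not in forbidden

def choose_action(candidates: List[str],
                  learned_score: Callable[[str], float]) -> str:
    # Pick the highest-scoring action that survives the hard-wired filter.
    allowed = [a for a in candidates if hardwired_check(a)]
    if not allowed:
        return "do_nothing"  # safe default when everything is vetoed
    return max(allowed, key=learned_score)

scores = {"harm_human": 0.9, "ask_for_help": 0.6, "do_nothing": 0.1}
print(choose_action(list(scores), scores.get))  # -> "ask_for_help"

Here the learned part prefers the forbidden action, but the fixed layer forces the second choice.  My doubt is whether rules that crisp can actually be written for a system smart enough to matter...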
 
-- Ben
