Hello all,
 
Hope everyone had a good holiday...
 
I had a question regarding AGI.  As with a human being, it is very important whom we learn from, as these people shape what we become, or at least have a very strong influence on what we become.  Even for humans, this "programming" can be extremely difficult to undo.
 
Considering an AGI, I feel that it will be extremely important for it to learn from "quality" sources.  Along these lines, I was wondering whether it is planned that an AGI might value the input of certain people over others.  This, of course, would have to be built into the system.  But just as our parents brought us into the world, and we therefore value their opinion over others' (at least while we are very young!), would it be wrong to encode this into an AGI?
 
To carry this point further... Suppose the AGI is told by many people something that is not beneficial or productive, like "Killing is good".  The AGI would learn this and possibly accept it through this reinforcement.  Would it be desirable to have a "father figure" of sorts (or "mother figure", to be politically correct) who, seeing that the AGI had been given this bad mojo, could come along and tell it, "No!  It is not good to kill!"?  Because of the relative "weight" of the father figure, that single statement, possibly coupled with an explanation, would be enough for the AGI to undo all the prior learning in that area...
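To make the "weight" idea concrete, here is a minimal sketch in plain Python (all names hypothetical, not drawn from any real AGI system) of source-weighted belief updating: each statement is scaled by the trust assigned to its source, so a single correction from a heavily trusted source can outweigh many reinforcements from ordinary sources.

    from collections import defaultdict

    class WeightedLearner:
        def __init__(self):
            self.trust = defaultdict(lambda: 1.0)   # default trust per source
            self.evidence = defaultdict(float)      # net weighted evidence per claim

        def tell(self, source, claim, value):
            """value is +1 (source asserts claim true) or -1 (asserts it false)."""
            self.evidence[claim] += self.trust[source] * value

        def believes(self, claim):
            return self.evidence[claim] > 0

    learner = WeightedLearner()
    learner.trust["father_figure"] = 100.0   # the built-in "parent" weighting

    # Many ordinary sources reinforce a harmful claim...
    for i in range(50):
        learner.tell(f"stranger_{i}", "killing is good", +1)
    print(learner.believes("killing is good"))   # True (50 * 1.0 > 0)

    # ...but one correction from the heavily weighted source overturns it.
    learner.tell("father_figure", "killing is good", -1)
    print(learner.believes("killing is good"))   # False (50 * 1.0 - 100.0 < 0)

Of course, this just pushes the question down a level: the choice of trust values is itself the "encoding" being asked about here.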
 
I'm aware that the "father figure" himself could be a very bad source of information!!  This creates a rather thorny dilemma...
 
I'm interested to hear others' thoughts on this matter...
 
Kevin
