Sanjay:
I fully agree here, AGI can be very dangerous in the wrong hands.
But the same is the case with any powerful tech. Controlling the knowledge is only a temporary measure. In fact, general wisdom says that limiting the knowledge to a chosen few can be more dangerous. Power corrupts easily. Its misuse can only be diluted by spreading it so widely that more good people than bad get hold of it. Given that there is no single way to come up with AI, it's only a matter of time before other groups figure out their own ways.
 
That's a good point: the best safety measure is to allow as many people as possible to possess AGI.  It's also Herb's point that the right to bear arms can be a good deterrent to crime -- thanks for the info.  Personally I live in a country (China) where guns are strictly banned.  Looking it up on the web, I found that Europe is also more anti-gun in general.  So it seems that different countries may enforce various degrees of regulation of AGI.

IMHO, you cannot compare s/w with nukes, because a nuke is a material thing and requires costly resources, so even if everyone knew how to make one, only a handful would be able to actually build it, as the resources required are immense. For AGI you need only a very fast computer...easy.
 
That's true.  AGI is different in many ways from previous weapons / technologies, so we are faced with a pretty much unprecedented situation...

How to protect the general public from misuse of AGI? Maybe the answer lies in AGI itself - make an AGI which can detect such attempts, equip the potential victims with it, and let the fight begin on equal ground. Once AGI becomes smarter than humans, only AGI will be able to save humans from AGI.
However, I predict that the immediate problem will be not security, but ethics. Along the lines of the opposition faced by cloning etc., AGI will receive a lot of criticism and some groups will even fight to stop/ban any research being done on it. Even if we manage to make a machine resembling a 2-year-old kid, the ethical problems it's going to create would be great. Can I power it off any time? Can I make copies of it? Can I reprogram it to do what I wish? And so on..... because we are dealing with a thinking and sensing being here.
 
I can't speak for others, but my goal is to create AGI as a tool, not as something "sentient".  I believe building a sentient AGI is possible, but that possibility does not appeal to me.  Building AGI as a passive tool is much more important IMO.

Well, it's a big debate really. To be on the safe side, I feel it's good to hold back the source until you are very sure that it's safe to release it.
 
The problem with this (and Ben's argument for withholding his Novamente design) is that you can't tell how "safe" some information is.  A good textbook on AI or machine learning may enable some people to build AGIs, as do some journal papers.  The state of AI research is always advancing, and we can't just arbitrarily draw a line that says "this is making AGI too easy".  It's just very improbable that the entire AI research community would agree on such a standard.  In reality the information will become more and more available over time.  It seems that public ownership of AGI (under some regulation) is inevitable.
 
Another point is that an early AGI will not be sophisticated enough to ensure safety by monitoring itself -- its intelligence will be too low for that.  We're still quite a long way from that.
 
yky

