Derek Zahn wrote:
Richard Loosemore:

> I often see it assumed that the step between "first AGI is built" (which I interpret as a functioning model showing some degree of generally intelligent behavior) and "god-like powers dominating the planet" is a short one. Is that really likely?
Nobody knows the answer to that one. The sooner it is built, the less likely a short step is. As more accessible computing resources become available, a hard takeoff becomes more likely.

Note that this isn't a quantitative answer. It can't be. Nobody really knows how much computing power is necessary for an AGI. In one scenario, it would see the internet as its body, and wouldn't even realize that people existed until very late in the process. This is probably one of the scenarios that requires the least computing power for takeoff, and allows for the fastest spread. Unfortunately, it's also not very likely to produce a friendly AI. It would likely feel about people as we feel about the bacteria that make our yogurt: they can be useful to have around, but they're certainly not one's social equals. (This kind of AI might well be social if, say, it got socialized on chat lines and newsgroups. But deriving the existence and importance of bodies from those interactions isn't a trivial problem.)

The easiest answer isn't necessarily the best one. (Also note that this kind of AI could very plausibly be developed by a government as a weapon for cyber-warfare. Discovering that it was a two-edged sword with a mind of its own could be a very late-stage event.)


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=51216209-9c2b04