Richard Loosemore wrote:


I am not sure I understand.

There is every reason to think that "a currently-envisionable AGI would be millions of times 'smarter' than all of humanity put together."

Simply build a human-level AGI, then get it to bootstrap to, say, a thousand times human speed (easy enough: we are not asking for better thinking processes, just a faster implementation), then ask it to compact itself enough that we can afford to build and run a few billion of these systems in parallel, then ask it to build the Dyson Sphere (if that is considered a sensible thing to do).
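
For a rough sense of the scale that recipe implies, here is a back-of-the-envelope sketch in Python. The speedup and unit count are taken from the paragraph above; the population figure is my own illustrative assumption:

    # Raw throughput of the bootstrapped fleet, in human-equivalents.
    # Speedup and unit count follow the scenario above; the population
    # figure is an illustrative assumption.
    speedup_per_unit = 1_000            # each AGI runs ~1000x human speed
    num_units = 2_000_000_000           # "a few billion" systems in parallel
    human_population = 8_000_000_000    # rough present-day head count

    human_equivalents = speedup_per_unit * num_units  # ~2e12
    print(f"{human_equivalents:.1e} human-equivalents of serial thought,")
    print(f"about {human_equivalents / human_population:.0f}x humanity's "
          f"raw head-count throughput")

Whether that raw product justifies "millions of times smarter" depends on how one scores coordination among the copies; the arithmetic here only counts speed.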


Extending this idea (take it with a grain of humor):

Take a billion units, each a thousand times quicker than a human being, and give each the task of delivering a Dyson Sphere. The assumption is that at least one of the billion will deliver the sphere.
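
That assumption can be made a little more precise. A minimal sketch, assuming the billion attempts are independent and share some small per-attempt success probability p (the values of p below are purely illustrative):

    # P(at least one of n independent attempts succeeds) = 1 - (1-p)^n.
    # Computed via log1p/expm1 so tiny p doesn't lose precision.
    import math

    def p_at_least_one(p: float, n: int) -> float:
        return -math.expm1(n * math.log1p(-p))

    n = 10**9  # a billion units
    for p in (1e-6, 1e-9, 1e-12):
        print(f"p={p:g}: P(>=1 success) = {p_at_least_one(p, n):.4f}")

The takeaway: "one of the billion will deliver" only holds if each unit's odds are at least on the order of 1/n. A billion copies that each fail with near certainty still fail with near certainty.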

But this raises the issue of resources: who funds a billion projects? If we could afford that, there would be no problem (a later stage of the nano-economy?).

One might suggest that the billion do only the "mental" work, which is low-cost since computing resources are so cheap... eventually.

But mental work is "theoretical" until the Sphere is actually built. Which of the billion plans proposed by this super-conglomerate should be chosen? If we are smart enough to pick the right plan ourselves, why not give a rainstorm a chance to solve the puzzle? I'm sure the answer is in all those drops somewhere, along with consciousness.

By the way, is the accepted terminology for this kind of super-thinking "deep" or "wide"?
