Derek Zahn wrote:
I asked:
 > Imagine we have an "AGI".  What exactly does it do?  What *should* it do?
Note that I think I roughly understand Matt's vision for this: it is essentially Google, and it will gradually get better at answering questions and taking commands as more capable systems are linked into the network. When and whether it passes the "AGI" threshold is rather an arbitrary and unimportant question; it just keeps getting more capable of answering questions and taking orders. I find that a very interesting and clear vision. I'm wondering if there are others.

Surely not!

This line of argument looks like a new version of a story that goes back to the very early days of science fiction. People looked at the newly forming telephone system and thought that maybe, if it just got big enough, it might become ... intelligent.

Their reasoning was ... well, there wasn't any reasoning behind the idea. It was just a mystical hope that lots of connections would somehow add up to more than the sum of the parts, with no justification for why the whole should amount to anything more.

In exactly the same way, there is absolutely no reason to believe that Google will somehow reach a threshold and (magically) become intelligent. Why would that happen?

If someone deliberately set out to build an AGI and then hooked it up to Google, that would be a different matter entirely. But that is not what is being suggested here.

Richard Loosemore.
