Nikolay Ognyanov wrote:
IMHO:

The stated expected benefit of AGI development is overly ambitious on the science & technology side and not ambitious enough on the social & economic side. For AGI to become the Next Big Thing it does not really have to produce the best medical researcher. Nor would a great medical researcher have as much impact on the way current civilization works as the replacement of human workers in the services sector.

The impact of previous technology revolutions can be described in a very fundamental way as freeing (liberating? discharging?) people from engagement in hunting and the like, then agriculture and the like, then industry and the like. Well, industry is still a work in progress and AGI could help there too, but the direction is clear. Services are the next area of human social and economic activity to benefit, and to suffer, from technology at the same scale as the earlier ones did. This is the most obvious general social role and "selling point" of AGI, at least until/unless it becomes a true deus ex machina ;) : to liberate (but also discharge, which is going to be a huge adoption/penetration problem) humans from engagement in providing economically significant services to other humans. Which such roles AGI can address and fulfill, and how, should be the key metric if it is to be "sold" outside a community motivated by the intellectual challenge alone.

So IMHO if you want to sell AGI to investors, you had better start by replacing
travel agents, brokers, receptionists, personal assistants, etc., rather than
researchers.

I'm sorry, but this makes no sense at all: this is a complete negation of what "AGI" means.

If you could build a (completely safe, I am assuming) system that could think in *every* way as powerfully as a human being, what would you teach it to become:

1) A travel agent.

2) A medical researcher who could learn to be the world's leading specialist in a particular field, and then be duplicated so that you instantly had 1,000 world-class specialists in that field.

3) An expert in AGI system design, who could then design a faster generation of AGI systems, so that these second-generation systems, working as researchers in any scientific field, could generate new knowledge faster than all the human scientists and engineers on the planet.

?

To say to an investor that AGI would be useful because we could use it to build travel agents and receptionists is to utter something completely incoherent.

This is the "Everything Just The Same, But With Robots" fallacy.



Richard Loosemore

