Well, I haven't seen any intelligent responses to this, so I'll answer it 
myself:

On Thursday 17 April 2008 06:29:20 am, J Storrs Hall, PhD wrote:
> On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote:
> > If you could build a (completely safe, I am assuming) system that could 
> > think in *every* way as powerfully as a human being, what would you 
> > teach it to become:
> > 
> > 1) A travel Agent.
> > 
> > 2) A medical researcher who could learn to be the world's leading 
> > specialist in a particular field,...
> 
> Travel agent. Better yet, housemaid. I can teach it to become these things 
> because I know how to do them. Early AGIs will be more likely to be 
> successful at these things because they're easier to learn. 
> 
> This is sort of like Orville Wright asking, "If I build a flying machine, 
> what's the first use I'll put it to: 
> 1) Carrying mail.
> 2) A manned moon landing."

Q: You've got to be kidding. There's a huge difference between a mail-carrying, 
fabric-covered, open-cockpit biplane and the Apollo spacecraft. They're not 
comparable at all.

A: It's only about 50 years' development. More time elapsed between railroads 
and biplanes. 

Q: Do you think it'll take 50 years to get from travel agents to medical 
researchers?

A: No, the pace of development has sped up, and will speed up further with 
AGI. But as in the mail/moon example, the big jump is getting off the 
ground in the first place.

Q: So why not just go for the researcher? 

A: Same reason Orville didn't go for the moon rocket. We build Rosie the 
maidbot first because:
1) We know very well what it's actually supposed to do, so we can tell whether 
it's learning it right.
2) We even know a bit about how its internal processing -- vision, motion 
control, recognition, navigation, etc. -- works or could work, so we'll have 
some chance of writing programs that can learn that kind of thing.
3) It's easier to learn to be a housemaid. There are lots of good examples, 
and the essential elements of the task are observable or low-level abstractions. 
While the robot is learning to wash windows, we, the AGI researchers, will 
learn how to write better learning algorithms by watching how it learns.
4) When (not if) it screws up -- a natural part of the learning process -- 
there'll be broken dishes, not a thalidomide disaster.

The other issue is that the hard part of this is the learning. Say it takes a 
teraop to run a maidbot well, but a petaop to learn to be one. We run the 
learning on our one big machine and sell the maidbots cheap with 0.1% of the 
CPU. But being a researcher is all learning -- so each copy would need the 
whole shebang. That's a decade of Moore's Law ... and at least that much 
AGI research.
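
To make that arithmetic concrete, here's a rough back-of-the-envelope sketch 
in Python. The teraop/petaop figures and the 18-month doubling period are 
illustrative assumptions taken from the paragraph above, not measurements:

import math

# Back-of-the-envelope sketch of the run-vs-learn compute gap.
# All figures are illustrative assumptions, not measurements.
RUN_OPS = 1e12        # ~1 teraop/s to run a trained maidbot
LEARN_OPS = 1e15      # ~1 petaop/s to learn to be a maidbot
DOUBLING_YEARS = 1.5  # assumed Moore's Law doubling period

# The deployed maidbot needs only a small fraction of the learning machine:
fraction = RUN_OPS / LEARN_OPS
print("run/learn ratio: %.1f%%" % (fraction * 100))    # 0.1%

# Years for run-sized (cheap) hardware to catch up to the learning-sized
# machine, if capacity doubles every DOUBLING_YEARS:
years = DOUBLING_YEARS * math.log2(LEARN_OPS / RUN_OPS)
print("years to close the gap: %.0f" % years)           # ~15 at 18-month doubling

With a faster, 12-month doubling you get the "decade"; either way the point 
stands that the learning machine, not the deployed robot, is where the cost 
lives.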

Josh
