Derek Zahn wrote:
> Matt Mahoney writes:
> > Just what do you want out of AGI? Something that thinks like a person or
> > something that does what you ask it to?
>
> I think this is an excellent question, one I do not have a clear answer
> to myself, even for my own use.
>
> Imagine we have an "AGI". What exactly does it do? What *should* it do?
> "It does whatever we tell it" is not good enough. What would we tell it
> to do? And no wigged-out scifi allowed; you can't say "invent molecular
> nanotechnology and build me a Dyson sphere" -- first, because such a
> vision is completely unhelpful in guiding how to get there, and second
> because there's no reason to think a currently-envisionable AGI would be
> millions of times "smarter" than all of humanity put together.
I am not sure I understand.
There is every reason to think that "a currently-envisionable AGI would
be millions of times 'smarter' than all of humanity put together."
Simply build a human-level AGI, then get it to bootstrap to a level of,
say, a thousand times human speed (easy enough: we are not asking for
better thinking processes, just a faster implementation). Then ask it to
compact itself enough that we can afford to build and run a few billion
of these systems in parallel, and then ask it to build the Dyson Sphere
(if that is what is considered a sensible thing to do).
Assuming there are no scaling problems in any of this (and at the moment
we have no reason to believe there will be), this is straightforward.
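As a back-of-envelope sketch of the arithmetic implied above (the exact
figures and the assumption that speed-ups and copies multiply cleanly are
mine, not claims from this thread):

```python
# Back-of-envelope sketch of the bootstrap arithmetic above.
# All figures are illustrative assumptions: the AGI starts at exactly
# human-level capability, and speed-ups and parallel copies are taken
# to multiply cleanly, with no coordination overhead.

speedup_per_system = 1_000        # "a thousand times human speed"
parallel_copies = 3_000_000_000   # "a few billion of these systems"

# Aggregate thinking throughput, measured in human-equivalents.
human_equivalents = speedup_per_system * parallel_copies

print(f"{human_equivalents:.1e} human-equivalents")  # prints "3.0e+12 human-equivalents"
```

Whether that aggregate counts as "smarter" in any qualitative sense is,
of course, exactly the point under dispute in this thread.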
The question of what it should do is something I have been thinking
about since I read the first edition of "Machines Who Think" back in the
mid-80s (cf http://www.pamelamc.com/html/machines_who_think.htm).
Richard Loosemore
-------------------------------------------
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/