On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:
Matt Mahoney writes:
> Just what do you want out of AGI? Something that thinks like a
person or
> something that does what you ask it to?
The "or" is interesting. If it really "thinks like a person" and at
at least human level then I doubt very much it will "do what you ask"
any more often than people do.
What I want out of AGI is something that thinks better, deeper,
faster, and more richly than human beings do. I would prefer it to be
a colleague, but I doubt it would find me very interesting for long.
I think this is an excellent question, one I do not have a clear
answer to myself, even for my own use.
Imagine we have an "AGI". What exactly does it do? What *should*
it do?
"It does whatever we tell it" is not good enough. What would we
tell it to do?
Beware the wish-granting genie conundrum.
And no wigged-out sci-fi allowed; you can't say "invent molecular
nanotechnology and build me a Dyson sphere" -- first, because such a
vision is completely unhelpful in guiding how to get there, and
second, because there's no reason to think a currently envisionable
AGI would be millions of times "smarter" than all of humanity put
together.
It doesn't need to be. If it could simply pull together all relevant
research more efficiently and had the capacity to consider more
facets at once, it could suggest new directions and form new
integrations that humans would not see, and thus would be more likely
to arrive at solutions than all current researchers.