Matt Mahoney wrote:
--- rg <[EMAIL PROTECTED]> wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
Wrong.
*You* cannot define friendliness, for reasons of your own. Others may
well be able to do so.
It would be fine to state "I cannot see a way to define friendliness,"
but it is not correct to assert that impossibility as a general fact.
Friendliness, briefly, is a situation in which the motivations of the
AGI are locked into a state of empathy with the human race as a whole.
There are possible mechanisms to do this: those mechanisms are being
studied right now (by me, at the very least, and possibly by others too).
[For anyone reading this who is not familiar with Matt's style: he has
a preference for stating his opinions as if they are established fact,
when in fact the POV that he sets out is not broadly accepted by the
community as a whole. I, in particular, strongly disagree with his
position on these matters, so I feel obliged to step in when he makes
these declarations.]
Richard Loosemore
Initially, I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
usefulness. If by "friendly" you mean that it does what you want it to do,
then it should be friendly as long as humans are the dominant source of
knowledge. This should be true until just before the singularity.
The question is more complicated when the technology to simulate and reprogram
your brain is developed. With a simple code change, you could be put in an
eternal state of bliss and you wouldn't care about anything else. Would you
want this? If so, would an AGI be friendly if it granted or denied your
request? Alternatively you could be inserted into a simulated fantasy world,
disconnected from reality, where you could have anything you want. Would this
be friendly? Or you could alter your memories so that you had a happy
childhood, or you had to overcome great obstacles to achieve your current
position, or you lived the lives of everyone on earth (with real or made-up
histories). Would this be friendly?
Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't seem to
work when brains are altered. I prefer to investigate the question of what
we will do, not what we should do. In that context, I don't believe CEV will
be implemented, because it predicts what we would want in the future if we
knew more, but people want what they want right now.
-- Matt Mahoney, [EMAIL PROTECTED]