No it won't, because people are free to decide what makes pain "real".
What? You've got to be kidding . . . . What makes pain real is how the
sufferer reacts to it -- not some abstract wishful thinking that we use to
justify our decisions about how we wish to behave.
I'm sorry that it's taken me this long to realize exactly how morally
bankrupt you are.
What about autobliss?
Autobliss is a toy that proves absolutely nothing.
It learns to avoid negative reinforcement and it says "ouch".
Not the version that you posted . . . . more hot air and hyperbole.
Do you really think that if we build AGI in the likeness of a human mind,
and stick it with a pin and it says "ouch", that we will finally have an
answer to the question of whether machines have a consciousness?
I think that we have the answer now but that people like you won't be
convinced even if given overwhelming proof.
100 years ago there was little controversy over animal rights,
euthanasia, abortion, or capital punishment.
Because our standard of living wasn't high enough to support it . . . .
Do you think that the addition of intelligent robots will make the
boundary between human and non-human any sharper?
No, I think that it will make it much fuzzier . . . . but since the boundary
is just a strawman for lazy thinkers, removing it will actually make our
ethics much sharper.
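
For what it's worth, the kind of learner being argued about here can be
written in a few lines. This is only an illustrative toy under assumed
mechanics (a value-per-action learner with a print on punishment) -- it is
not Matt's actual autobliss program:

```python
import random

class ToyAgent:
    """Minimal reinforcement learner in the spirit of the 'autobliss'
    toy under discussion: it keeps a value estimate per action, says
    'ouch' on negative reinforcement, and learns to avoid the punished
    action. Illustrative sketch only."""

    def __init__(self, n_actions=2, lr=0.5):
        self.values = [0.0] * n_actions  # estimated value of each action
        self.lr = lr                     # learning rate

    def act(self, explore=0.1):
        # Epsilon-greedy: usually pick the highest-valued action.
        if random.random() < explore:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def reinforce(self, action, reward):
        if reward < 0:
            print("ouch")  # the behavior whose moral weight is disputed
        # Move the value estimate toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])

# Punish action 0, reward action 1; the agent learns to choose 1.
agent = ToyAgent()
for _ in range(200):
    a = agent.act()
    agent.reinforce(a, -1.0 if a == 0 else +1.0)
```

Whether a program like this feels "real" pain is exactly the question that
stays open no matter how completely the mechanism is understood.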
----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Monday, November 17, 2008 12:44 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
For example, in fifty years, I think it is quite possible we will be able
to say with some confidence whether certain machine intelligences we design
are conscious or not, and whether their pain is as real as the pain of
another type of animal, such as a chimpanzee, dog, bird, reptile, fly, or
amoeba.
No it won't, because people are free to decide what makes pain "real". The
question is not resolved for simple systems which are completely understood,
for example, the 302 neuron nervous system of C. elegans. If it can be
trained by reinforcement learning, is that "real" pain? What about
autobliss? It learns to avoid negative reinforcement and it says "ouch". Do
you really think that if we build AGI in the likeness of a human mind, and
stick it with a pin and it says "ouch", that we will finally have an answer
to the question of whether machines have a consciousness?
And there is no reason to believe the question will be easier in the future.
100 years ago there was little controversy over animal rights, euthanasia,
abortion, or capital punishment. Do you think that the addition of
intelligent robots will make the boundary between human and non-human any
sharper?
-- Matt Mahoney, [EMAIL PROTECTED]
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com