silky wrote:
On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker <meeke...@dslextreme.com> wrote:
silky wrote:
On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker <meeke...@dslextreme.com> wrote:
silky wrote:
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com> wrote:
2010/1/18 silky <michaelsli...@gmail.com>:
It would be my (naive) assumption that this is arguably trivial to do. We can design a program that has a desire to "live", a desire to find mates, and otherwise entertain itself. In this way, with some other properties, we can easily model simple pets.
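For concreteness, a minimal sketch of the kind of "pet" program this might mean - assuming made-up drive variables (hunger, boredom) and a toy step() update; none of the names come from any real library:

    import random

    class ToyPet:
        """A toy 'pet' driven by a couple of appetites it tries to satisfy."""

        def __init__(self, seed=None):
            self.rng = random.Random(seed)  # seeded so a run can be replayed exactly
            self.hunger = 0.5               # drives kept in [0, 1]
            self.boredom = 0.5

        def step(self):
            # Drives creep upward each tick; the pet acts on the most urgent one.
            self.hunger = min(1.0, self.hunger + 0.05)
            self.boredom = min(1.0, self.boredom + 0.03)
            if self.hunger > 0.8:
                self.hunger = 0.1
                return "eat"
            if self.boredom > 0.7:
                self.boredom = self.rng.uniform(0.0, 0.3)
                return "play"
            return "wander"

    pet = ToyPet(seed=42)
    print([pet.step() for _ in range(10)])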
Brent's reasons are valid,
Where it falls down for me is the suggestion that the programmer should ever feel guilt. I don't see how I could feel guilty for ending a program when I know exactly how it will operate (what paths it will take), even if I can't be completely sure of the specific decisions (due to some randomisation or whatever).
It's not just randomisation, it's experience. If you create an AI at a fairly high level (cat, dog, rat, human) it will necessarily have the ability to learn, and after interacting with its environment for a while it will become a unique individual. That's why you would feel sad to "kill" it - all that experience and knowledge that you don't know how to replace. Of course it might learn to be "evil" or at least annoying, which would make you feel less guilty.
Nevertheless, though, I know its exact environment,
Not if it interacts with the world. You must be thinking of a virtual cat AI in a virtual world - but even there the program, if at all realistic, is likely to be too complex for you to really comprehend. Of course *in principle* you could spend years going over a few terabytes of data and you could understand, "Oh, that's why the AI cat did that on day 2118 at 10:22:35, it was because of the interaction of memories of day 1425 at 07:54:28 and ... (long string of stuff)." But you'd be in almost the same position as the neuroscientist who understands what a clump of neurons does but can't get a holistic view of what the organism will do.
Surely you've had the experience of trying to debug a large program you wrote some years ago that now seems to fail on some input you never tried before. Now think how much harder that would be if it were an AI that had been learning and modifying itself for all those years.
I don't disagree with you that it would be significantly complicated; I suppose my argument is only that, unlike with a real cat, I - the programmer - know all there is to know about this computer cat.
But you *don't* know all there is to know about it. You don't know what it has learned - and there's no practical way to find out.
I'm wondering to what degree that adds to or removes from my moral obligations.
Destroying something can be good or bad. Not knowing what you're destroying usually counts on the "bad" side.
so I can recreate the things that it learned (I can recreate it all; it's all deterministic: I programmed it). The only thing I can't recreate is the randomness, assuming I introduced that (but as we know, I can recreate that anyway, because I'd just use the same "seed" state; unless the source of randomness is "true").
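A small illustration of the "same seed" point, assuming the only nondeterminism is a pseudo-random generator: replay with the same seed and you get back exactly the same "decisions"; only a source of "true" randomness would break this.

    import random

    def run(seed, steps=5):
        rng = random.Random(seed)   # pseudo-random source, fully determined by the seed
        return [rng.randint(0, 9) for _ in range(steps)]

    assert run(1234) == run(1234)   # same seed -> the run is exactly reproducible
    print(run(1234), run(5678))     # a different seed generally gives a different trajectory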
I don't see how I could ever think "No, you can't harm X". But what I find very interesting is that even if I knew *exactly* how a cat operated, I could never kill one.
but I don't think making an artificial animal is as simple as you say.
So is it a complexity issue? That you only start to care about the entity when it's significantly complex? But exactly how complex? Or is it about the unknowingness; that the project is so large you only work on a small part, and thus you don't fully know its workings, and then that is where the guilt comes in?
I think unknowingness plays a big part, but it's because of our experience with people and animals: we project our own experience of consciousness onto them, so that when we see them behave in certain ways we impute an inner life to them that includes pleasure and suffering.
Yes, I agree. So does that mean that, over time, if we continue using these computer-based cats, we would become attached to them (i.e. your Sony toys example)?
Hell, I even become attached to my motorcycles.
Does it follow, then, that we'll start to have laws relating to "ending" motorcycles humanely? Probably not. So there must be more to it than just attachment.
We don't try to pass laws to control everything. There's no law against killing your cat or dog even though almost everyone would say it was wrong. Anyway my sentimental attachment to my motorcycles doesn't have any implications for society, so it's just a personal matter.
Indeed, this is something that concerns me as well. If we do create an AI, and force it to do our bidding, are we acting immorally? Or perhaps we just withhold the desire for the program to do its "own thing", but is that in itself wrong?
I don't think so. We don't worry about the internet's feelings, or the air traffic control system. John McCarthy has written essays on this subject and he cautions against creating AI with human-like emotions precisely because of the ethical implications. But that means we need to understand consciousness and emotions lest we accidentally do something unethical.
Fair enough. But by the same token, what if we discover a way to remove emotions from real-born children? Would it be wrong to do that? Is "emotion" an inherent property that we should never be allowed to remove, once created?
Certainly it would be fruitless to remove all emotions because that would be the same as removing all discrimination and motivation - they'd be dumb as tape recorders. So I suppose you're asking about removing, or providing, specific emotions. Removing, for example, empathy would certainly be a bad idea - that's how you get sociopathic killers. Suppose we could remove all selfishness and create an altruistic being who only wanted to help and serve others (as some religions hold up as an ideal). I think you can immediately see that would be a disaster.

Suppose we could add an emotion that put a positive value on running backwards. Would that add to their overall pleasure in life - being able to enjoy something in addition to all the other things they would have naturally enjoyed? I'd say yes. In which case it would then be wrong to later remove that emotion and deny them the potential pleasure - assuming of course there are no contrary ethical considerations.
So the only problem you see is if we ever add emotion, and then remove it. The problem doesn't lie in not adding it at all? Practically, the result is the same.
No, because if we add it and then remove it after the emotion is experienced, there will be a memory of it. Unfortunately nature already plays this trick on us. I can remember that I felt a strong emotion the first time I kissed a girl - but I can't experience it now.
If a baby is born without the "emotion" for feeling overworked, or adjusted so that it enjoys this overworked state, and then we take advantage of that, are we wrong? If the AI we create is modelled on humans anyway, isn't it somewhat "cheating" to not re-implement everything, and instead only implement the parts that we selfishly consider useful?

I suppose there is no real obligation to recreate an entire human consciousness (after all, if we did, we'd have no more control over it than we do over other "real" humans), but it's interesting that we're able to pick and choose what to create, and yet not able to remove from real children what we determine is inappropriate, to make *them* more "effective" workers.
We do try to remove emotions that we consider damaging, even though they may diminish the life of the subject. After all, serial killers probably get a lot of pleasure from killing people. This is the plot of the play "Equus"; ever seen it?

The argument against that sort of thing would be that we are depriving the child of a different life; but would it ever know? What would it care? And who is competent to say which life is better? We wouldn't hesitate to deprive a serial killer of his pleasure in killing because societal concerns outweigh his pleasure. But what about extreme feelings of physical aggressiveness? ...we just draft the guy into the NFL as a linebacker.
Brent
And regardless, doesn't the program we've written deserve the same rights? Why not?
Brent
--
silky
http://www.mirios.com.au/
http://island.mirios.com.au/t/rigby+random+20
RAMIE bloated double-knit hearten fleetness.
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.