People seem to be predisposed to accept AI programs as human(oid), if one can judge by reactions to HAL, Colossus, Robby, Marvin, etc.   m.a.


----- Original Message ----- From: "Brent Meeker" <meeke...@dslextreme.com>
To: <everything-list@googlegroups.com>
Sent: Monday, January 18, 2010 6:09 PM
Subject: Re: on consciousness levels and ai


silky wrote:
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com> wrote:

2010/1/18 silky <michaelsli...@gmail.com>:

It would be my (naive) assumption that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and a desire to otherwise entertain itself. In this way,
with some other properties, we can easily model simple pets.
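
A minimal sketch of such a program (the Pet class and its three
drives below are purely illustrative assumptions, not any real
system):

import random

class Pet:
    """Toy agent driven by a few hard-coded desires."""

    def __init__(self, name):
        self.name = name
        # Drive levels in [0, 1]; higher means more urgent.
        self.drives = {"live": 0.5, "find_mate": 0.3, "play": 0.2}

    def step(self):
        # Unmet drives drift upward over time.
        for d in self.drives:
            self.drives[d] = min(1.0, self.drives[d] + random.uniform(0.0, 0.1))
        # Act on the most urgent drive, which satisfies it for now.
        urgent = max(self.drives, key=self.drives.get)
        self.drives[urgent] = 0.0
        return urgent

pet = Pet("rover")
print([pet.step() for _ in range(5)])   # e.g. ['live', 'play', 'find_mate', ...]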

Brent's reasons are valid,


Where it falls down for me is the suggestion that the programmer
should ever feel guilt. I don't see how I could feel guilty for
ending a program when I know exactly how it will operate (what paths
it will take), even if I can't be completely sure of the specific
decisions (due to some randomisation or whatever).

It's not just randomisation, it's experience.  If you create an AI at a
fairly high level (cat, dog, rat, human) it will necessarily have the
ability to learn, and after interacting with its environment for a while
it will become a unique individual.  That's why you would feel sad to
"kill" it: all that experience and knowledge that you don't know how to
replace.  Of course it might learn to be "evil", or at least annoying,
which would make you feel less guilty.
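
A toy illustration of that point (standard Python only; the Learner
class is an assumption made up for this sketch): two copies of the
same program, fed different experiences, end up with different and
irreplaceable internal state.

import random

class Learner:
    """Identical code in every copy; individuality comes from experience."""

    def __init__(self, seed):
        self.memory = {}                # learned associations
        self.env = random.Random(seed)  # stands in for a unique environment

    def live(self, steps=1000):
        for _ in range(steps):
            stimulus = self.env.choice("abcd")
            # Reinforce whatever the environment happens to present.
            self.memory[stimulus] = self.memory.get(stimulus, 0) + 1

a, b = Learner(seed=1), Learner(seed=2)
a.live(); b.live()
print(a.memory == b.memory)   # almost certainly False: one program, two individuals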

I don't see how I could ever think "No, you
can't harm X". But what I find very interesting is that even if I
knew *exactly* how a cat operated, I could never kill one.



but I don't think making an artificial
animal is as simple as you say.


So is it a complexity issue? That you only start to care about the
entity when it's significantly complex? But exactly how complex? Or is
it about the unknowingness: that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
that is where the guilt comes in?


I think unknowingness plays a big part, but it's because of our
experience with people and animals: we project our own experience of
consciousness onto them, so that when we see them behave in certain ways
we impute an inner life to them that includes pleasure and suffering.


Henry Markram's group is presently
trying to simulate a rat brain, and so far they have done 10,000
neurons, which they are hopeful are behaving in a physiological way.
This is at huge computational expense, and they have a long way to go
before simulating a whole rat brain, with no guarantee that it will
start behaving like a rat. If it does, then they are only a few years
away from simulating a human; soon after that will come a superhuman
AI, and soon after that it's we who will have to argue that we have
feelings and are worth preserving.
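
A back-of-the-envelope sketch of why the cost is so large (this uses
the crudest point-neuron model, leaky integrate-and-fire, purely as
an assumption for illustration; Markram's group simulates far more
detailed neuron models): even 10,000 toy neurons must each be
updated at every sub-millisecond time step.

import numpy as np

N, dt, steps = 10_000, 1e-4, 10_000      # 10,000 neurons, 1 s at 0.1 ms steps
tau, v_thresh, v_reset = 0.02, 1.0, 0.0  # membrane time constant, thresholds

v = np.zeros(N)                          # membrane potentials
for _ in range(steps):
    i_in = 2.0 * np.random.rand(N)       # stand-in for synaptic input
    v += (dt / tau) * (i_in - v)         # leaky integration toward the input
    spiked = v >= v_thresh
    v[spiked] = v_reset                  # spiking neurons fire and reset

That is already 10^8 neuron updates for one second of simulated
time, and a biologically detailed model multiplies that by orders of
magnitude.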


Indeed, this is something that concerns me as well. If we do create an
AI and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its "own
thing", but is that in itself wrong?


I don't think so.  We don't worry about the internet's feelings, or the
air traffic control system's.  John McCarthy has written essays on this
subject, and he cautions against creating AI with human-like emotions
precisely because of the ethical implications.  But that means we need
to understand consciousness and emotions, lest we accidentally do
something unethical.

Brent


--
Stathis Papaioannou








--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.



