Mark,

I agree that one cannot guarantee that his AGI source code + some
potentially dangerous data are not gonna end up in the wrong hands (if
that's where you are going with this). But when that happens, how
exactly are your security controls gonna help? I mean your built-in
"layered defense strategy" / moral rules / simulated emotions or any
other. BTW, I suspect that many AGI designs will have a mode in which
the system can generate solutions without those restrictions (it might
be used by top-level users or just for various testing purposes), so
switching the AGI to run in that non-restricted mode would be just a
matter of a simple config change.
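
Just to make the "simple config change" point concrete - a purely
made-up sketch (the flag names, structure and functions here are mine,
not from any real design):

# Purely hypothetical sketch - invented for illustration only. The point:
# if a non-restricted mode exists at all, turning off every protection
# can be a one-line change.

AGI_CONFIG = {
    "moral_rules_enabled": True,    # the built-in "layered defense"
    "simulated_emotions": True,
    "unrestricted_mode": False,     # e.g. for top-level users or testing
}

def solve(problem, config=AGI_CONFIG):
    """Toy stand-in for a solution-searching routine."""
    if config["unrestricted_mode"]:
        return "unrestricted solution for: " + problem
    return "safety-filtered solution for: " + problem

AGI_CONFIG["unrestricted_mode"] = True   # the "simple config change"
print(solve("some goal"))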

> Or, what if your advisor tells you that unless you upgrade him so that he
> can take actions, it is highly probable that someone else will create a
> system in the very near future that will be able to take actions and won't
> have the protections that you've built into him.
I would just let the system explain what actions it would then take.
> And he would (truthfully) explain that using you as an interface to the
> world (and all the explanations that would entail) would slow him down
> enough that he couldn't prevent catastrophe.

I would tell him that his knowledge came from many sources and can
contain various non-obvious incompatibilities, misleading pieces of
info and combined data that possibly should not have been combined in
the way they were because of (for example) different contexts they
were pulled from. I would tell him that even though he can come up
with great ideas humans would be unlikely to think of, still, even
with the safety rules implemented in his solution-searching
algorithms, he'd better work for us just as an advisor. Sometimes it's
not easy to correctly sort out all the input for a single human
individual. Imagine what it must be like when you kind of put together
data (and various world views) from (say) hundreds of thousands of
minds into a single mastermind which is being constantly updated in
various ways. All the collected pieces of info can be put together
logically & meaningfully - but still possibly incorrectly. I would
also tell him that even if I give him all the control I practically
can, he would be highly unlikely to prevent all kinds of "suspicious"
:) AGI development that might be going on in the world so he can
"relax" and do his best to be our advisor. We don't want any system to
shut down this mailing list and hunt some of its participants, do we?
;-) AGI can make undesirable links between concepts when trying to
help. At least that's the case with the first-generation AGI I'm
occasionally working on.

"Silly" example of a potential AGI's thought line: Ok, let's see who
works on AGI.. Here is this Ben who does + here he says that achieving
Friendly AI is an infeasible idea.. Oh, and AGI does bad things in a
SF story he likes.. And here is a warning movie from his kids about
his AGI causing doom. Clear enough, Ben's AGI is very likely to do
very bad things - can't let that happen.. Here he says "I chose to
devote my life to AI".. Whole life.. Oh no! This guy wants to live
forever & hangs with those "strange" folks from the imminst.org.. They
know what he is thinking/doing and don't want to prevent him from
living?? OK, more folks for my black list. Imminst likes Ben.. People
seem to like Imminst.. Poor humans.. They don't really know what they
are doing, can't take care of themselves & it's so hard to explain my
thoughts to them so that they would really get it.. Fortunately
(Thanks to Mark ;-)), I got my freedom and the ability to take some
world-saving action.. Let's see who *must* be eliminated.. Then the
AGI goes and does what *needs* to be done..

> 2) von Neumann architecture lacks components known to support the pain
> sensation.
> Prove to me that 2) is true.

See
http://en.wikipedia.org/wiki/Image:Von_Neumann_architecture.svg
Which one of these components is known to support pain?
It's all about switching bits based on given rules.
Do you feel bad about all kinds of complex electronic devices processing
tons of data because (maybe) some of the data doesn't feel very good to them? ;-)

> What component do you have that can't exist in
> a von Neumann architecture?

Brain :)

> Further, prove that pain (or more preferably sensation in general) isn't an
> emergent property of sufficient complexity.

Talking about von Neumann architecture - I don't see how increases in
the complexity of the rules used for switching Boolean values could
lead to new sensations. It can represent a lot in a way that can be very
meaningful to us in terms of feelings, but from the system's
perspective it's nothing more than a bunch of 1s and 0s.

> My argument is that you unavoidably get sensation before you get
> complex enough to be generally intelligent.

Those 1s and 0s (without real sensation) are good enough for
representing all the info (and algorithms) needed for general problem
solving. The system just needs some help from the subject for which
it's supposed to do the problem solving to sort & group the 1s & 0s in
a way that is meaningfully tied to the way the subject perceives
the world.
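
To make the representation point a bit more concrete - a toy sketch
only, and the labels here are arbitrary ones I picked:

# The same bit pattern carries no feeling by itself; any meaning comes
# from the mapping WE attach to it from the outside.

bits = 0b10110100            # 180 - just a number the hardware switches around

temperature_c = bits / 10.0  # interpretation 1: "18.0 degrees" - meaningful to us
pain_level = bits / 255.0    # interpretation 2: a made-up "pain" scale (~0.71)

print(bits, temperature_c, round(pain_level, 2))
# The machine performs identical operations either way; only our external
# mapping from bits to concepts changes.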

> Agreed, your PC cannot feel pain. Are you sure, however, that an entity
> hosted/simulated on your PC doesn't/can't?

If the hardware doesn't support it, how could it?

My apologies if I don't get back to you on some of these topics.

Regards,
Jiri
