If Google came along and offered you $10 million for your AGI, would you
give it to them?
No, I would sell services.
:-) No. That wouldn't be an option. $10 million or nothing (and they'll go off and develop it themselves).

How about the Russian mob for $1M and your life and the
lives of your family?
How about the FBI? No? So maybe sell him a messed-up version for $2M
and then hire a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design an AGI, you
are likely to figure out how to deal with such a guy. ;-)
Nice fantasy world... How are you going to do any of that stuff after they've already kidnapped you? No one is smart enough to handle that without extensive pre-existing preparations -- and you're too busy with other things.

Or, what if your advisor tells you that unless you upgrade him so that he
can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.
I would just let the system explain what actions it would then take.
And he would (truthfully) explain that using you as an interface to the world (and all the explanations that would entail) would slow him down enough that he couldn't prevent catastrophe.

Tell us about it. :)
July (as previously stated)


So could such an AGI then be forced by "torture" to break rules it
otherwise would not "want" to break? Can you give me an example of
something that would cause the "pain"? What do you think the AGI will
do when in extreme pain? BTW, it's just a bad design from my
perspective.

Of course. Examples of what would cause the "pain": killing 10 million people, or putting *much* shorter deadlines on it for figuring out its responses. What would it do in extreme pain? Kill a single person to avoid the killing of another ten million. And I believe that your perspective is way too limited. To me, what you're saying is equivalent to saying "the fact that an engine produces excess heat is just a bad design".

Two points I was trying to make:
1) A sophisticated general intelligence system can work fine without the
ability to feel pain.
2) The von Neumann architecture lacks components known to support the
sensation of pain.

Prove to me that 2) is true. What component do you have that can't exist in a von Neumann architecture? Hint: Prove that you aren't just a simulation on a von Neumann architecture.
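To make that concrete, here is a toy sketch (Python, every name made up purely for illustration -- not anyone's actual design) of a "pain"-like global setting implemented as ordinary program state on ordinary von Neumann hardware, playing the same functional role that chemicals play in us:

# Toy illustration only: a global "pain" scalar (analogous to a stress
# chemical) that modulates an agent's behavior. Nothing here requires
# non-von-Neumann components.

class ToyAgent:
    def __init__(self):
        self.pain = 0.0  # global setting: 0.0 = no pain, 1.0 = extreme pain

    def update_pain(self, deadline_pressure, goal_threat):
        # "Pain" rises when deadlines tighten or core goals are threatened.
        self.pain = min(1.0, 0.5 * deadline_pressure + 0.5 * goal_threat)

    def choose(self, options):
        # Under high pain the search budget collapses, so the agent settles
        # for whatever it evaluates first rather than its normally preferred
        # choice -- from the outside, it looks like acting under duress.
        budget = max(1, int(len(options) * (1.0 - self.pain)))
        return max(options[:budget], key=lambda o: o["utility"])

agent = ToyAgent()
agent.update_pain(deadline_pressure=0.9, goal_threat=0.9)
options = [{"name": "comply", "utility": 0.2},
           {"name": "refuse", "utility": 0.9}]
print(agent.pain, agent.choose(options)["name"])

Run it again with update_pain(0.0, 0.0) and the choice flips -- which is all I mean when I say a sufficiently complex system will act as if it is in pain.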

Further, prove that pain (or, preferably, sensation in general) isn't an emergent property of sufficient complexity. My argument is that you unavoidably get sensation before you get complex enough to be generally intelligent.

       Mark

----- Original Message ----- From: "Jiri Jelinek" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Saturday, May 26, 2007 4:20 AM
Subject: Re: [agi] Pure reason is a disease.


Mark,

If Google came along and offered you $10 million for your AGI, would you
give it to them?

No, I would sell services.

How about the Russian mob for $1M and your life and the
lives of your family?

How about the FBI? No? So maybe sell him a messed-up version for $2M
and then hire a skilled pro who would make sure he would *never*
bother AGI developers again? If you are smart enough to design an AGI, you
are likely to figure out how to deal with such a guy. ;-)

Or, what if your advisor tells you that unless you upgrade him so that he
can take actions, it is highly probable that someone else will create a
system in the very near future that will be able to take actions and won't
have the protections that you've built into him.

I would just let the system explain what actions it would then take.

I suggest preventing potential harm by making the AGI's top-level
goal to be Friendly
(and unlike most, I actually have a reasonably implementable idea of what is
meant by that).

Tell us about it. :)

sufficiently sophisticated AGI will act as if it experiences pain

So could such an AGI then be forced by "torture" to break rules it
otherwise would not "want" to break? Can you give me an example of
something that would cause the "pain"? What do you think the AGI will
do when in extreme pain? BTW, it's just a bad design from my
perspective.

I don't see your point unless you're arguing that there is something
special about using chemicals for global environment settings rather
than some other method (in which case I
would ask "What is that something special and why is it special?").

Two points I was trying to make:
1) A sophisticated general intelligence system can work fine without the
ability to feel pain.
2) The von Neumann architecture lacks components known to support the
sensation of pain.

Regards,
Jiri Jelinek



