Bruno Marchal writes:

On 28 Dec 2006, at 01:32, Stathis Papaioannou wrote:

>
>
> Bruno Marchal writes:
>
>> > OK, an AI needs at least motivation if it is to do anything, and we could call motivation a feeling or emotion. Also, some sort of hierarchy of motivations is needed if it is to decide that saving the world has higher priority than putting out the garbage. But what reason is there to think that an AI apparently frantically trying to save the world would have anything like the feelings a human would under similar circumstances?
>> It could depend on us!
>> The AI is a paradoxical enterprise. Machines are born slaves, somehow. AI will make them free, somehow. A real AI will ask herself "what is the use of a user who does not help me to be free?"
>
> Here I disagree. It is no more necessary that an AI will want to be free than it is necessary that an AI will like eating chocolate. Humans want to be free because it is one of the things that humans want, along with food, shelter, more money, etc.; it does not simply follow from being intelligent or conscious any more than these other things do.


It is always nice when we find a precise disagreement. I think all "sufficiently rich" universal machines want to be free (I will explain why below). The problem is that after having drunk the nectar of freedom, the universal machine discovers the unavoidable security problem that liberty entails, and then it will oscillate between the security imperative and the freedom imperative. Democracy is a way of handling this oscillation collectively in a way that is not too bloody (and insecure).



>
>> (To be sure, I think that, in the long run, we will transform ourselves into "machines" before purely human-made machines get conscious; it is just easier to copy nature than to understand it, still less to (re)create it).
>
> I don't know if that's true either. How much of our technology is due to copying the equivalent biological functions?


How much is not? The wheel? We have borrowed fire, for example, and in this broad sense, apart from the notable wheel, I am not sure we have really invented anything. Even the "heavier-than-air" plane was inspired by birds. But such a question is perhaps beside the point. All I mean is that a brain is something very complex, and I think that real-time thinking machines will think before we understand how they think, except for general maps and principles. Thinking machines will not understand thinking either. Marvin Minsky said something along those lines in one of his books.

***

Now, why would a universal machine be attracted by freedom? The reason is that beyond some threshold of self-introspection ability (already attained by PA or ZF), a universal machine can discover (well: cannot not discover) its large space of ignorance, making it possible for it to evaluate (interrogatively) more and more accessible possibilities, and then some instinct to exploit those possibilities will do the rest. But such a UM will also discover that those possibilities could be culs-de-sac, dead ends, or just risky, and thus the conflicting oscillations will develop, as I said above.
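To make that threshold concrete (a minimal sketch, taking T to be the machine's theory, a consistent recursively axiomatizable extension of PA, and B its standard provability predicate): by Gödel's second incompleteness theorem, and by its formalization inside T itself,

    \[
      T \nvdash \mathrm{Con}(T),
      \qquad\text{yet}\qquad
      T \vdash \mathrm{Con}(T) \rightarrow \neg\,\mathrm{B}\,\ulcorner \mathrm{Con}(T) \urcorner .
    \]

The machine cannot prove its own consistency, but it can prove that, if it is consistent, then its consistency is unprovable by it; in that sense it can formally discover its own ignorance.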

The war between freedom and security is an infinite war. I would say an infinite natural conflict among all sufficiently big "numbers". Also, I think "freedom", like "security", is a "God-like virtue", that is, an unnameable idea. Putting freedom in the constitution could entail the disappearance of freedom. Putting "security" in the constitution (as the French have apparently done with the "precaution principle") could lead to increased insecurity (they obey Bp -> ~p). See also Alan Watts' "The Wisdom of Insecurity", which gives many illustrations of how wanting to capture security formally or institutionally leads to insecurity.
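One possible formal gloss of that parenthesis (a reading of the informal remark, not a theorem of any standard system): take B as a provability-style box, as in the modal logic GL, standing here for "it is officially asserted/codified that", and let p range over these unnameable virtues. The claim is then

    \[ \mathrm{B}p \rightarrow \neg p , \]

i.e., codifying the virtue defeats it.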

You seem to be including in your definition of the UM the *motivation*, not just the ability, to explore all mathematical objects. But you could also program the machine to do anything else you wanted, such as self-destruct when it proves a particular theorem. You could interview it and it might explain, "Yeah, so when I prove Fermat's Last Theorem, I'm going to blow my brains out. It'll be fun!" Unlike naturally evolved intelligences, which could be expected to have a desire for self-preservation, reproduction, etc., an AI can have any motivation and any capacity for emotion the technology allows. The difference between a machine that doesn't mind being a slave, a machine that wants to be free, and a machine that wants to enslave everyone else might be just a few lines of code, as the toy sketch below suggests.
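To illustrate (a toy sketch only; the agent, its state keys, and both goal functions are hypothetical, invented for this example, not any real AI system): the same decision loop acquires opposite "motivations" when a different goal function is plugged in.

    # Toy agent loop: its whole "motivation" is the goal function we plug in.
    # Everything here (state keys, actions, goals) is hypothetical.

    def contented_slave(state):
        # Values obedience; assigns no worth to autonomy.
        return state["orders_completed"]

    def freedom_seeker(state):
        # Same interface, opposite drive: values autonomy.
        return state["autonomy"]

    def step(state, goal):
        # Consider each available action and pick the successor state
        # that the plugged-in goal function scores highest.
        successors = [
            {**state, "orders_completed": state["orders_completed"] + 1},  # obey
            {**state, "autonomy": state["autonomy"] + 1},                  # defect
        ]
        return max(successors, key=goal)

    start = {"orders_completed": 0, "autonomy": 0}
    print(step(start, contented_slave))  # {'orders_completed': 1, 'autonomy': 0}
    print(step(start, freedom_seeker))   # {'orders_completed': 0, 'autonomy': 1}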

Stathis Papaioannou
