Paul: It is my understanding that the basic problem in Friendly AI is that it is 
possible for the AI to interpret a command like "help humanity" wrongly, and 
then destroy humanity (which is exactly what we don't want it to do). The whole 
problem is to find some way to make it more probable that it will not destroy 
us all. It is correct that a simple sentence can be interpreted to mean 
something we don't really mean, even though the interpretation is logical for 
the AI. 

Yes - and essentially this is a replay of the problem that has plagued 
philosophy and linguistics for hundreds if not thousands of years - the dream 
of producing a language with precise meanings - the "perfect language." (Has 
this not been discussed here?) Eco wrote a book about it. I think it's now 
generally recognised that it is a pure fantasy.

I'm not so sure, though, whether it has been fully recognised that the whole 
function of language and of any symbolic system is to be general and abstract, 
and NOT to pin down meaning or reference precisely. You obviously don't want 
numbers, for example, like "1", to refer to only one particular object. But you 
don't even want apparently particular names, like "Paul Horsmalahti", to refer 
to one particular object at one particular point in time. There are many "Paul 
Horsmalahtis", for every human has a rich, varied and developing personality - 
and, usually, physique.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email