Dear fellow minds, 
 
After editing the book "Nanotechnology, towards a molecular construction
kit" (1998), I became a believer in strong AI. As a result, I have worried
ever since about an upcoming "war against the machines" leading to our
destruction or enslavement. Robots will simply evolve beyond us. Until a
few days ago, I believed this war and its outcome to be inevitable. 
 
However, there may be a way out. What thoughts do any of you have on
the following line of reasoning: 
 
First, human values have evolved along the model of Clare W. Graves. You
may have heard of his work under the name "Spiral Dynamics". Please look
into it if you haven't; to me, it has been an eye-opener. 
Second, a few days ago it dawned on me that intelligent robots might
follow the same spiral evolution of values: 
 
1. The most intelligent robots today are struggling for their survival
in the lab (survival). Next, they would develop a sense of: 
2. a tribe
3. glory & kingdom (here comes the war...)
4. order (the religious robots in Battlestar Galactica, which triggered
this idea in the first place)
5. discovery and entrepreneurship (materialism)
6. social compassion ("robot hippies")
7. systemic thinking
8. holism. 
 
In other words, if we guide robots/AI quickly and safely into the value
system of order (4) and help them evolve further, they might not kill us
but instead become our companions in the universe. N.B. This is quite
different from installing Asimov's laws: the robots need to be able to
develop their own set of values.  
 
Anyone? 
 
Best regards,
Arthur ten Wolde
The Netherlands
 
