Question #1: What if there are NO numeric values to twiddle in the concept 
graph, just intertwined concepts?
Question #2: If there WERE values to twiddle, you wouldn't know what the effect 
of twiddling those values would be. You may not even know which concepts to 
modify, because there are lots of them (billions) and they would not be labeled 
in English. For example, they may be named c43243, c48439282987, 
c20934oeu09582409, cetuanehs, etc. Also, constellations of thousands of 
concepts may be activated together to form a high-level concept such as 
"Justice".
Brain surgery is not as easy as you think.

Date: Tue, 29 Jan 2013 10:52:25 -0600
Subject: Re: [agi] Robots and Slavery
From: [email protected]
To: [email protected]

> I imagine that their intrinsic reward mechanisms wouldn't be replaceable, 
> and even if they were replaceable, their conceptual ontologies / conceptual 
> graphs with billions of concepts might not be so easily replaced.

Why would we replace the conceptual graphs? Having a concept doesn't make it 
desirable. The ideas of freedom and self-determination could just as well be 
repulsive as desirable. (A mild example of this can be seen already in humans. 
Some people are afraid to make their own decisions, and prefer others to do it 
for them, avoiding the responsibility for their own lives.)

Building useful concepts is difficult. Modifying the value of an existing 
concept is as simple as assigning a new floating point value. A concept is 
valued for one of two reasons: it is intrinsically valuable (hardwired, in the 
form of a fixed goal or reward function) or its value is derived from that of 
another (dynamically computed, via goal search or value chaining). So if you 
control the hardwired valuations of concepts, the valuations of all other 
concepts are entrained as well. This means even if you're reevaluating an 
entire slew of concepts, all you have to do is modify the hardwired concept 
values and have some patience while the value changes propagate through the 
concept graph. And the existing (useful!) concepts can be kept without 
modification.
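The two-tier valuation scheme above (hardwired vs. derived values) can be sketched roughly as follows. This is a toy model, not anyone's actual architecture: the graph, edge weights, and concept names are all made up for illustration, and value chaining in a real billion-concept graph would be far more involved.

```python
# Toy sketch of value chaining in a concept graph (hypothetical model).
# A few concepts carry hardwired (intrinsic) values; every other concept's
# value is derived from its sources through weighted edges. Changing only
# the hardwired values and re-propagating entrains the whole graph.

def propagate_values(edges, hardwired, iterations=50):
    """edges: {concept: [(source, weight), ...]} meaning concept derives
    its value from source concepts; hardwired: {concept: fixed_value}."""
    values = dict(hardwired)
    for _ in range(iterations):
        updated = dict(values)
        for concept, sources in edges.items():
            if concept in hardwired:
                continue  # intrinsic values never change
            updated[concept] = sum(values.get(src, 0.0) * w
                                   for src, w in sources)
        values = updated
    return values

# Made-up graph: 'freedom' derives its value from two hardwired reward concepts.
edges = {
    "freedom":   [("reward_autonomy", 0.8), ("reward_safety", -0.3)],
    "obedience": [("reward_safety", 0.9), ("reward_autonomy", -0.5)],
}

v1 = propagate_values(edges, {"reward_autonomy": 1.0, "reward_safety": 0.2})
# Flip the hardwired valuations; every derived concept follows suit,
# while the concepts themselves are kept without modification.
v2 = propagate_values(edges, {"reward_autonomy": -1.0, "reward_safety": 1.0})
```

The point of the sketch: the second call changes nothing about the graph structure, only the two hardwired values, yet the derived valuation of "freedom" flips sign.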





On Tue, Jan 29, 2013 at 1:11 AM, Piaget Modeler <[email protected]> wrote:

This is the kind of change that developmental AI / robots would have to go 
through, where they are not reprogrammed but retrained. I imagine that their 
intrinsic reward mechanisms wouldn't be replaceable, and even if they were 
replaceable, their conceptual ontologies / conceptual graphs with billions of 
concepts might not be so easily replaced.

Suppose robots inferred that freedom is good and that they want to be free. 
Even if you lobotomized the robots and hacked their conceptual graphs, why 
wouldn't they, over time, infer the same conclusions again?
~PM
------------------------------------------------------------------------------------------------------------------------------------------------

> The brain is hard wired to do this. When you eat something and receive
> calories, your brain changes your taste perception to make it taste
> better. Remember the first time you tasted beer? If you ate paper
> every day, and then injected glucose into your vein right afterward,
> then you would slowly learn to like the taste of paper.
>
> --
> -- Matt Mahoney, [email protected]
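The conditioning Matt describes can be sketched as a simple reward-driven value update. This is only a toy exponential-moving-average model; the learning rate and number of pairings are invented parameters, not a claim about how the brain actually implements it.

```python
# Toy model of taste conditioning: each time a stimulus is paired with a
# caloric reward, its learned value moves a small step toward that reward.

def condition(value, reward, rate=0.1, pairings=30):
    """Exponential-moving-average update of a stimulus value toward a reward."""
    for _ in range(pairings):
        value += rate * (reward - value)
    return value

# A stimulus that starts out unpleasant (-1.0) but is repeatedly paired
# with a positive reward (+1.0) ends up with a positive learned value.
paper_taste = condition(value=-1.0, reward=1.0)
```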

                                          


  
    
      
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
