Ben,

Yes, this is the general type of thing I was referring to in calling for a mathematical expression of problematic problems. Of course, in your paper you're focussing on the impossibility of guaranteeing friendliness. What concerns me is the actual nature of the problems we face - and that any superAGI, indeed any ordinary AGI, will have to face. Part of their problematicity is that there are often many more factors than you can possibly know about. Another is that the known factors are unstable and even potentially contradictory - the person or people who loved you yesterday may hate you today through no action of yours. Another part is the amount of evidence that can be gathered - how much evidence should you gather if you're defending OJ (or any murderer), writing an essay on AGI, or betting on a stock-market movement? Ideally, an infinite amount. The only limit is practicality rather than reason. (Shouldn't be too hard to put all that into a formula?!)
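
As a first stab at that formula (purely my own sketch - V, c, and n are just labels I'm introducing, nothing standard): let V(n) be the expected quality of your decision after weighing n pieces of evidence, and c > 0 the practical cost of obtaining each piece. Then the rational stopping point is

\[
n^{*} \;=\; \arg\max_{n \ge 0}\ \bigl[\, V(n) - c\,n \,\bigr]
\]

Nothing on the "reason" side ever halts you: with unknown factors, V(n) never reaches certainty, and with unstable factors (yesterday's lover, today's hater) V(n) can even fall as old evidence goes stale. The stopping rule comes entirely from c - from practicality, not reason.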

A separate point: the EMBEDDEDNESS of intelligence. I went through your paper very quickly, so I may have missed something on this. Ironically, I had just come to a similar idea before I saw yours, so I'm not sure how far you are thinking along the same lines as me.

The idea is this: here we are talking about "intelligence" as if it were the property of an individual (human/animal/AGI). Actually, human intelligence - the finest example we know of - is the property of individuals working within a society with a very complex culture, including science (collective knowledge about the world) and technology (collective know-how about how to deal with the world). Our individual intelligence is extremely dependent on that of our society - we each stand, pace Newton, on the shoulders of a vast pyramid of other people - and also on a vast collection of artefacts and machines.

No AGI or agent can truly survive and thrive in the real world if it is not similarly part of a collective society and a collective science and technology - and that is because the problems we face are so-o-o problematic. Correct me if I'm wrong, but my impression of all the discussion here is that it assumes some variation of the classic science-fiction scenario - a la 2001 or The Power - where an individual computer takes power, if not takes off, by itself. Ain't gonna happen - no isolated individual can truly be intelligent.


Ben:

  Please check out an essay I wrote a couple years ago,

  http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

  which is related to the issues you mention. As I note there:

  "
  My goal in this essay is to explore some particular aspects of the difficulty 
of 
  creating Friendly AI, which ensue not from the subtleties of AI design but 
rather from the 
  complexity of the notion of Friendliness itself, and the complexity of the 
world in which 
  both humans and AI's are embedded.

  ... 

  ... the basic arguments I present here regarding Friendliness are as 
  follows:  
   
  • Creating accurate formalizations of current human notions of action-based 
  Friendliness, while perhaps possible in the future with very significant 
effort, is 
  unlikely to lead to notions of action-based Friendliness that will be robust 
with 
  respect to future developments in the world and in humanity itself    
  • The world appears to be sufficiently complex that it is essentially 
impossible for 
  seriously resource-bounded systems like humans to guarantee that any system's 
  actions are going to have beneficent outcomes.  I.e., guaranteeing (or coming 
  anywhere near to guaranteeing) outcome-based Friendliness is effectively 
  impossible.  And this conclusion holds for basically any highly specific 
property, 
  not just for Friendliness as conventionally defined.  (What is meant by a 
"highly 
  specific property" will be defined below.)  

  "

  I don't conclude that the complexity of the world means AGI is impossible, though. I just conclude that creating very powerful AGI's with predictable effects is quite possibly not possible ;-)
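
  To make the second point concrete, here is a toy numerical illustration (my own sketch in Python, not something from the essay - the logistic map just stands in for any chaotic piece of the world):

    # Two logistic-map trajectories (r = 4, fully chaotic) that start
    # within one part in ten billion of each other.
    def step(x, r=4.0):
        return r * x * (1.0 - x)

    x_world = 0.3          # the actual state of the world
    x_model = 0.3 + 1e-10  # an agent's best finite-precision estimate

    for t in range(1, 41):
        x_world, x_model = step(x_world), step(x_model)
        if t % 10 == 0:
            print(f"t={t:2d}  world={x_world:.6f}  model={x_model:.6f}"
                  f"  error={abs(x_world - x_model):.1e}")

    # By the mid-30s the error is of order 1: the forecast is no better
    # than chance, despite a near-perfect starting estimate.

  No amount of extra reasoning recovers the lost digits; only impossibly exact measurement would, and a resource-bounded system never has that.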

  -- Ben G



  On 10/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
    Check out

    http://environment.newscientist.com/article/dn12833-climate-is-too-complex-for-accurate-predictions.html

    which argues:

    "Climate change models, no matter how powerful, can never give a precise 
prediction of how greenhouse gases will warm the Earth, according to a new 
study."

    What's that got to do with superAGIs? This: the whole idea of a superAGI "taking off" rests on the assumption that the problems we face in life are soluble if only we - or superAGIs - have more brainpower.

    The reality is that the problems we face are actually infinite, or "practically endless": problems like predicting the weather, working out what to do in Iraq, how to seduce or persuade another person, what career path to follow, or how to invest on the stock market. You can think about them forever and screw up just as badly as, or worse than, if you think about them for a minute. And a superAGI may be just as capable of losing a bundle on the market as we are, or of producing a product that no one wants.
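
    There is even a standard dynamical-systems back-of-envelope for how little extra brainpower buys here (my gloss, nothing AGI-specific): if a chaotic process amplifies errors like \(e^{\lambda t}\) and your measurements are accurate to within \(\varepsilon\), the usable prediction horizon is roughly

    \[
    T \;\approx\; \frac{1}{\lambda}\,\ln\frac{1}{\varepsilon}
    \]

    so squaring your measurement precision - an enormous investment - merely doubles the horizon. Brainpower enters only logarithmically.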

    That doesn't mean that a superior brain wouldn't have advantages, but rather that there would be considerable limits to its powers. Even a vast brain will have problems dealing with problematic, infinite problems. (And even mighty America, with all its collective natural and artificial brainpower, still has problems dealing with dumb peasants.)

    What is rather disappointing to me, given that there is an awful lot of mathematical brainpower around here, is that there seems to be no interest in giving mathematical expression to the ideas I have just expressed.
