Richard Loosemore is at it again, acting as if he knows so much more about complex system issues than most everybody else on this list, by dumping on Novamente and OpenCog because they do not share his "RL" view of complex system issues.
But what is the evidence that Richard, in fact, knows more than the rest of us on these issues? In fact, it is very scant. His writings on the subject that I have read either (a) describe things most of us know about, such as the "game of life" or Wolfram's concept of computational irreducibility, or (b) make statements that are totally unsupported, or, in some cases, obviously wrong.

The biggest piece of evidence of just how wrong Richard can be on the subject relates to "RICHARD'S FOUR FEATURES OF DESIGN DOOM" (my nomenclature), a combination of features which Richard wrote in April of this year, on his blog www.susaro.com, made it impossible to design any sort of system, AGI or otherwise. Richard wrote:

"- Memory. Does the mechanism use stored information about what it was doing fifteen minutes ago, when it is making a decision about what to do now? An hour ago? A million years ago? Whatever: if it remembers, then it has memory.

"- Development. Does the mechanism change its character in some way over time? Does it adapt?

"- Identity. Do individuals of a certain type have their own unique identities, so that the result of an interaction depends on more than the type of the object, but also the particular individuals involved?

"- Nonlinearity. Are the functions describing the behavior deeply nonlinear?

"These four characteristics are enough. Go take a look at a natural system in physics, or an engineering system, and find one in which the components of the system interact with memory, development, identity and nonlinearity. You will not find any that are understood."

"Notice, above all, that no engineer has ever tried to persuade one of these artificial systems to conform to a pre-chosen overall behavior."
In response to my email copied below, I received multiple emails showing that systems having these four features have, in fact, been designed and built for years, and have, in fact, worked generally as designed. Finally Richard substantially retracted his statement by restating it to say, in effect, that the above FOUR FEATURES OF DESIGN DOOM would make it impossible to design a powerful AGI --- without any clear standard for determining at what scale design doom would set in.

But even Richard's modified statement concerning the FOUR FEATURES OF DESIGN DOOM appears to be based on little more than Richard's hunch. In fact, partial evidence of its falsehood is presented by the Googleplex. The Googleplex very arguably has each of the above features, as defined in the article, in vast quantity; it functions generally as designed; and it is a type of intelligence. So the issue of which types of systems having these four features can be largely designed --- and which cannot --- is much more complex than Richard's statements have indicated --- at least, in the relatively small percentage of his posts I have read since.

Obviously, in a Novamente or OpenCog AGI system, the FOUR FEATURES OF DESIGN DOOM --- and, more importantly, the much larger role self-organization would play, not only for representation, but also for behavior, including behaviors that control operation of the system itself --- are likely to increase the gnarliness of the system. But it is far from clear, as Richard contends, that such gnarliness cannot be controlled sufficiently to get an AGI that works generally as planned (at least to the extent that most human babies work generally as planned). Such self-organized gnarliness is reasonably controlled in the human brain. We understand many of the mechanisms the brain uses to accomplish such control, and, if you read Ben's work, you will note that a lot of attention has been paid to how to deal with some of these control issues.
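To make the point concrete, here is a minimal toy sketch (my own illustration, not taken from any of the systems discussed in this thread) of a system whose components have memory, development, identity, and nonlinearity, yet whose overall behavior is designed in advance and can be verified against a pre-chosen invariant:

```python
import math

class Agent:
    """Toy component exhibiting all four 'features of doom'."""

    def __init__(self, agent_id, bias):
        self.agent_id = agent_id  # IDENTITY: each instance is a distinct individual
        self.bias = bias          # ...and interactions depend on which individuals meet
        self.memory = []          # MEMORY: stored record of past inputs
        self.weight = 0.0         # DEVELOPMENT: changes character with experience

    def step(self, x, other):
        self.memory.append(x)              # remember what it saw
        self.weight += 0.1 * x             # adapt over time
        pair = self.bias * other.bias      # result depends on the particular individuals
        signal = self.weight * x + pair
        return 1.0 / (1.0 + math.exp(-signal))  # NONLINEARITY: sigmoid squashing

a = Agent("a", 1.0)
b = Agent("b", -1.0)
out = a.step(2.0, b)

# Pre-chosen overall behavior, guaranteed by design despite all four features:
# the output always lies strictly between 0 and 1.
assert 0.0 < out < 1.0
```

This is of course trivially small; the real question in dispute is at what scale such design guarantees break down, which is exactly the standard Richard's retracted statement never supplied.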
SO THE GRAND PUBA WAS WRONG, on one of the few occasions (that I have read) when he has ever tried to clarify his grand puba thoughts on RL complexity.

I do not think Richard lacks intelligence. Some of his posts have been very insightful and well reasoned. And the problem of getting complex systems that rely heavily on self-organization to function as desired could prove very significant, as Ben has agreed. But since Richard so insanely overstated the problems of complexity issues in his FOUR FEATURES OF DESIGN DOOM blog article quoted above, and since he was relatively slow to retract such overstatement when first questioned, and since his retracted version of the statement had no proof or solid reasoning behind it, we have strong reason to believe he is still grossly overestimating the problem.

I don't know why Richard is so irrational on this subject. I think it has to do with the fact that RL complexity issues are where his ego flag is planted. And since his sense of self-importance is so invested in it, emotions prevent him from thinking about it objectively. If Richard were motivated more by trying to understand the truth, and less by wanting to feel smarter than everyone else, I think he could contribute much more to this list.

Ed Porter

P.S. To be fair, I have read far fewer of Richard's posts since the FOUR FEATURES OF DESIGN DOOM flap, because I came to the conclusion that Richard, although occasionally insightful, is often full of hot air. It is possible that he has made much more intelligent and well-justified statements on the subject of RL complexity since then. But from my quick skimming of roughly a third of his posts since then --- I have no reason to think so.
EWP

-----Original Message-----
From: Ed Porter [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 24, 2008 10:48 AM
To: agi@v2.listbox.com
Subject: DO RICHARD'S FOUR FEATURES OF DESIGN DOOM ACTUALLY PREVENT DESIGNABILITY

As I have quoted below, in his susaro.com blog, Richard Loosemore states that any system with MEMORY, ADAPTATION, IDENTITY (individuals within a type), and NON-LINEARITY cannot be understood, nor can it be designed to have a desired overall behavior.

I WOULD APPRECIATE IT IF OTHERS ON THIS LIST WOULD CHIP IN WITH THEIR EVIDENCE ONE WAY OR THE OTHER ON THIS IMPORTANT TOPIC --- because it is a key issue in determining whether or not we should believe much of the FUD (Fear, Uncertainty, and Doubt --- an old IBM sales term for denigration of competitive products) Richard has been spreading to say that traditional approaches to AGI design, including those used by Ben et al. for Novamente, are dead meat because of unsolvable problems with the type of complexity he defines (i.e., RL-complexity).

It is my strong hunch that Richard's statement about these four features of design doom is provably false. It is my hunch that many AI systems with these four features have been built and have worked roughly as designed --- but in my below copied post I said that off the top of my head I could not think of any, and by that I meant any that I knew had been built, had worked roughly as planned, and knew for sure had all four features of doom.

I believe that Novamente, if it were built, would have all four features of design doom, as apparently does Richard, judging from his many anti-Novamente statements. So, I am guessing, would Joscha Bach's MicroPsi, Stan Franklin's LIDA, and Laird et al.'s SOAR --- all of which have been built and, as I understand it, work --- presumably with a fair amount of experimentation thrown in --- somewhat as designed. I would not even be surprised if the fluid grammar Stephen Reed is working on has all four of these features of doom.
(Stephen, please tell me if this is true or not.) It appears from Stephen's Apr 21 2008 - 5:16pm post about fluid grammar that it has (1) MEMORY, because it records individual new words and phrases it sees occurring in text --- (2) DEVELOPMENT, because its ability to properly parse adapts over time, through learning from the text --- (3) IDENTITY, because I assume it classifies its individual word forms, words, and/or phrases within classes (here I am guessing; Stephen, please correct me if I am wrong) --- and (4) NON-LINEARITY, because it presumably performs many of the types of non-linear functions, such as thresholding and yes/no decision making, that would be used in almost any AGI such as Novamente.

Richard has been using notions of RL-complexity to spread "FUD" against many other people's approaches to AGI. After much asking, he has now tried to justify his denigration of others' work on his susaro.com blog. So far a significant part of his objection to such work is based on the above four features of design doom.

SO PLEASE SPEAK UP, THOSE OF YOU ON THIS LIST WITH ANY EVIDENCE OR SOUND ARGUMENTS --- PRO OR CON --- ABOUT WHETHER RICHARD'S "FOUR FEATURES OF DESIGN DOOM" ACTUALLY DO DOOM ENGINEERING OF AGI SYSTEMS, SUCH AS NOVAMENTE.

-----Original Message-----
From: Ed Porter [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 9:06 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Adding to the extended essay on the complex systems problem

Richard,

In your blog you said:

"- Memory. Does the mechanism use stored information about what it was doing fifteen minutes ago, when it is making a decision about what to do now? An hour ago? A million years ago? Whatever: if it remembers, then it has memory.

"- Development. Does the mechanism change its character in some way over time? Does it adapt?

"- Identity.
Do individuals of a certain type have their own unique identities, so that the result of an interaction depends on more than the type of the object, but also the particular individuals involved?

"- Nonlinearity. Are the functions describing the behavior deeply nonlinear?

"These four characteristics are enough. Go take a look at a natural system in physics, or an engineering system, and find one in which the components of the system interact with memory, development, identity and nonlinearity. You will not find any that are understood."

"Notice, above all, that no engineer has ever tried to persuade one of these artificial systems to conform to a pre-chosen overall behavior."

I am quite sure there have been many AI systems that have had all four of these features, that have worked pretty much as planned, whose behavior is reasonably well understood (although not totally understood --- as is nothing that is truly complex in the non-Richard sense), and whose overall behavior has been as chosen by design (with a little experimentation thrown in). To be fair, I can't remember any specific one off the top of my head, because I have read about so many AI systems over the years. But recording episodes is very common in many prior AI systems. So is adaptation. Nonlinearity is almost universal, and identity as you define it would be pretty common. So, please --- other people on this list, help me out --- but I am quite sure systems have been built that prove the above quoted statement to be false.

Ed Porter

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 4:11 PM
To: agi@v2.listbox.com
Subject: [agi] Adding to the extended essay on the complex systems problem

Yesterday and today I have added more posts (susaro.com) relating to the definition of complex systems and why this should be a problem for AGI research.
Richard Loosemore

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com