On Fri, Aug 22, 2008 at 7:30 AM, Valentina Poletti <[EMAIL PROTECTED]> wrote:
> Jim,
> I was wondering why no-one had brought up the information-theoretic aspect
> of this yet. Are you familiar at all with the mathematics behind such a
> description of AGI? I think it is key so I'm glad someone else is studying
> that as well.
I am not familiar with the mathematics behind an information-theoretic description of AGI, and I am not sure whether you were addressing me, because I do not feel that information theory is the right way to approach the problem. I think Shannon wrote in 1949 that semantics was not an engineering problem, and I would generalize that to say that the discovery of meaning from input is not an engineering problem: it cannot be solved through concise mathematical formulas alone. There is no doubt that skill in information theory would be useful in a complicated computer project (I wish I knew more), but I do not feel that it is the key to discovering the yet-to-be-discovered theories of AI. I know a little about the various concepts that are discussed in these AI groups, and I feel that it is useful to generalize and combine those different viewpoints.

But let me try to answer a little of your question. No, an airplane does not do much for birdom, but airplanes are not designed for that. Aircraft are also not designed to be intelligently adaptive except in controlled ways. We can use this idea, though, to begin to think about different degrees of freedom in intelligently adaptive learning. An autopilot might instruct an aircraft to fly level, given its input, in a fairly simple way. A more advanced design might use feedback from its flight control surfaces (controlled by output), so that it could recognize that certain actions have only limited effects under some conditions. At the next level of freedom, the aircraft might plot a course using radar and positional notes posted by other aircraft, so that it could avoid turbulence. And at a still higher level of freedom in learning, one might construct a program that ran on a simulator, so that the simulated aircraft program could learn for itself by trial and error.
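To make the two ends of that ladder concrete, here is a toy sketch (entirely hypothetical — not based on any real autopilot or simulator) contrasting the lowest level, a fixed proportional rule for holding level flight, with the highest, a program that tunes itself by trial and error against a simulator:

```python
import random

def simulate_flight(gain, steps=200):
    """Toy one-dimensional flight simulator (hypothetical): the pitch
    drifts randomly each step, and a fixed-rule autopilot applies a
    correction of -gain * pitch.  Returns mean absolute pitch error,
    so lower is better."""
    rng = random.Random(0)   # fixed disturbance sequence so runs compare fairly
    pitch, total_error = 0.0, 0.0
    for _ in range(steps):
        pitch += rng.uniform(-1.0, 1.0)   # turbulence disturbance
        pitch -= gain * pitch             # the fixed corrective rule
        total_error += abs(pitch)
    return total_error / steps

def tune_by_trial_and_error(trials=30):
    """Trial-and-error learning in the simulator: randomly perturb the
    gain and keep any change that lowers the error (hill climbing)."""
    gain, best = 0.1, simulate_flight(0.1)
    for _ in range(trials):
        candidate = min(max(gain + random.uniform(-0.2, 0.2), 0.0), 1.0)
        error = simulate_flight(candidate)
        if error < best:
            gain, best = candidate, error
    return gain, best

fixed_error = simulate_flight(0.1)   # the hand-written program's skill
learned_gain, learned_error = tune_by_trial_and_error()
print(fixed_error, learned_gain, learned_error)
```

The fixed rule reacts in only one way no matter what happens, while the tuner changes its own behavior based on the outcomes it observes — a crude stand-in for the higher degree of freedom described above.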
I think most of us would agree that such a system could be designed for bird flight, although that would be more challenging for a number of reasons. It could actually be used to help birds, if there were a will to experiment in that direction! But I hope I got the idea across that the easiest adaptive programs to design have just one level of reaction: they simply follow a program that was itself written to deal with as many circumstances as possible so that it could be effectively used in controlling a machine. Another level of reaction might include more detailed planning and operation under the circumstances the programmers expected to be encountered, using generalizations that differ only in their measures, or constrained groups of conditionals that can be detected with instruments of some kind. The next level is learning for itself through trial and error. I feel that this highest level of learning implies that such a system would have to be capable of intricate systems of representing knowledge, which is beyond our current theories.

I am interested in certain mathematical programs, especially as they apply to AI, but I do not know much about the information-theoretic approach to AI. I am currently working on my own theory about solving the Logical Satisfiability Problem in polynomial time, but I haven't done it yet.

Jim Bromer

-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now