Re: [agi] structure of the mind
I think the concept that many of you are struggling to voice is: "Credit attribution is a really hard problem in AGI. Market economies solve that problem (with various difficulties, but... :-)"

- This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?list_id=303
Re: [agi] structure of the mind
>> As has been pointed out in this thread (I believe by Goertzel and
>> Hall) Minsky's approach in Society of Mind et seq of adding large
>> numbers of systems then begs the question: how will these things
>> ever work together, and why should the system generalize?

rooftop> How does adding auditory modules to our brain generalize
rooftop> anything? How does adding a new inference algorithm generalize
rooftop> anything? Because you have extra ways to process information,
rooftop> you can extract new information and build new modules around
rooftop> it.

rooftop> I don't see how adding information and code can be a bad
rooftop> thing (if you have enough cpu power), it will just make it
rooftop> more likely for the right subset to be part of your system.

An AGI does something specific. Adding a new inference algorithm to it, aside from slowing it down, can also make it do the wrong thing. Especially when it's a program for something as complicated as cognition. It's not that I'm against modules.

>> I criticized it from this point of view in What is Thought? One way
>> to try to handle the organization then is an economic framework.

rooftop> I thought the obvious equivalent of {economy and money} is
rooftop> information spreading. If you are a big player, a lot of
rooftop> other modules will take your outputs (information) and
rooftop> process it, giving you more influence overall. Useless
rooftop> information won't be further processed and will be a dead end
rooftop> in the system.

Well, I'm not sure what particular algorithm you are referring to here. Something has to decide who is a big player and which information is useless. Minsky doesn't describe how that's done anywhere that I know. The Soviet Union tried to run the economy by central management, and it didn't work in part because the center doesn't have the information to decide what needs to be done.
That information is carried in a free market economy by prices, and it's basically unavailable to the central planners, who therefore can't readily get the economy right. In the mind you might imagine a central unit which effectively understands what the price structure should be because it was created by evolution, but it's by no means clear how to get it right in an AI unless you have something like prices.
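Baum's price argument can be made concrete with a toy version of the auction mechanism behind Hayek. The sketch below is my own illustrative reconstruction, not the actual Hayek implementation; the `Agent` class, the bids, and the numbers are all invented. Agents bid for control of the computation; the winner pays its bid to the previous owner, and the external world pays a reward only when the goal is reached, so money flows backward along chains of genuinely useful agents.

```python
class Agent:
    """Toy agent: one action (add `action` to the state) and a fixed bid."""
    def __init__(self, action, bid):
        self.action = action
        self.bid = bid
        self.wealth = 10.0

def run_episode(agents, start, goal, reward, max_steps=10):
    """Chain of auctions: each step the highest solvent bidder buys
    control from the previous owner and acts on the state. Internal
    payments only transfer money, so total wealth changes only by the
    external reward (conservation of money)."""
    state, owner = start, None
    for _ in range(max_steps):
        bidders = [a for a in agents if a is not owner and a.wealth >= a.bid]
        if not bidders:
            break
        winner = max(bidders, key=lambda a: a.bid)
        if owner is not None:
            winner.wealth -= winner.bid   # payment out...
            owner.wealth += winner.bid    # ...exactly equals payment in
        owner = winner
        state += winner.action
        if state == goal:
            owner.wealth += reward        # only the world mints money
            return True
    return False
```

With three such agents and a goal a couple of steps away, the agent that finishes the task pays the agent that set it up, so both end up wealthier than a bystander: the local earning-money signal is the toy analogue of a price.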
Re: [agi] structure of the mind
Russell> On 3/20/07, Eric Baum <[EMAIL PROTECTED]> wrote:

>> This is the problem with Wallace's complaints. You actually want
>> the "machine [to do] something unpredicted", namely the right thing
>> in unpredicted circumstances. It's true that it's hard and expensive
>> to engineer/find an underlying compact explanation, but it is
>> precisely the fact that this very constrained/compact underlying
>> program is so improbable that makes it work! The arguments for its
>> working in fact *rest exactly* on the fact that it is so
>> improbable, it wouldn't exist unless it generalized to new
>> experiences. So while it's hard to engineer this, which might be
>> called emergence, you will IMO be forced to if you want to
>> succeed. That is the reason why AGI is hard.

Russell> It's one reason why AGI is hard, and there is truth in what
Russell> you say.

Russell> However, ab initio search for compact explanation is so hard
Russell> that we humans mostly don't do it because we can't. When we
Russell> do have to bite the bullet and explicitly attempt it, it
Russell> often takes entire communities of geniuses working for
Russell> decades to produce a result that can be boiled down to a few
Russell> lines. Newton, Darwin, Einstein et al were by no means the
Russell> only ones working on their various problems. Koza has an
Russell> example of the invention of a simple circuit, I think it was
Russell> the negative feedback amplifier or somesuch, you could draw
Russell> it on the back of a cigarette pack, it took a very bright
Russell> engineer months or years of thinking before he cracked it,
Russell> and there were lots of others trying at the same time.

Russell> What we mostly do is use existing solutions and blends
Russell> thereof, that were developed by our predecessors over
Russell> millions of lifetimes.

Don't forget the investment of effort by evolution, which was far greater still.
Russell> Even when I'm programming, apparently writing new code, I'm
Russell> really mostly using concepts I learned from other people,
Russell> tweaking and blending them to fit the current context.

Russell> And an AGI will have to do the same. Yes, it will have to be
Russell> able to bite the bullet and run a full-blown search for a
Russell> compact solution when necessary. But that's just plain too
Russell> hard to be doing all the time, so an AGI will have to, like
Russell> humans, mostly rely on existing concepts developed by other
Russell> people.

Oh, absolutely. What's hard, and has to be faced, is designing the AGI.
Re: [agi] structure of the mind
On Tue, Mar 20, 2007 at 06:34:25PM +, Russell Wallace wrote:

> wouldn't exist unless it generalized to new experiences. So while
> it's hard to engineer this, which might be called emergence,

It's not emergence, but rather failing gracefully and doing the right thing.

> you will IMO be forced to if you want to succeed. That is the
> reason why AGI is hard.

There are many reasons why AGI is hard. This is only one of them.

Folks, please use the right quoting style. Not posting HTML-only is a good start. Levels of whitespace indentation don't cut the mustard. You have to use ">".

> It's one reason why AGI is hard, and there is truth in what you say.
> However, ab initio search for compact explanation is so hard that we
> humans mostly don't do it because we can't. When we do have to bite

Exhaustive searches are intractable, but if the fitness space has high diversity in a small ball at each given point of genotype space, and a neutral fitness network through which individuals can percolate without suffering dire consequences, then you can reach pretty good solutions without doing the impossible. And, of course, the systems reshaping their fitness landscape in the above way is the hardest trick they have to do, because they have to effectively (statistically) brute-force that initial threshold. It's pretty easy sailing afterwards.

> the bullet and explicitly attempt it, it often takes entire
> communities of geniuses working for decades to produce a result that
> can be boiled down to a few lines. Newton, Darwin, Einstein et al were
> by no means the only ones working on their various problems. Koza has
> an example of the invention of a simple circuit, I think it was the
> negative feedback amplifier or somesuch, you could draw it on the back
> of a cigarette pack, it took a very bright engineer months or years of
> thinking before he cracked it, and there were lots of others trying at
> the same time.
Evolutionary designs typically produce networks with both positive and negative feedback loops. Miraculously, these are not only stable, but rather robust. Notice that a mix of positive and negative feedback loops is an earmark of nonlinear dynamic systems. That evolutionary algorithms produce just these is not a coincidence. It indicates nonlinear systems are damn good solutions. Notice that human designers routinely miss these, and don't even have the analytical tools to understand these when plunked down in front of their very noses. What you described is not an isolated occurrence. It is a typical case.

> What we mostly do is use existing solutions and blends thereof, that
> were developed by our predecessors over millions of lifetimes. Even
> when I'm programming, apparently writing new code, I'm really mostly
> using concepts I learned from other people, tweaking and blending them
> to fit the current context.

I don't view programming as programming, but as state and state transformations. Everything else is just semantics and syntactic sugar. And once you realize that you're dealing with a lot of state, and quite nonlinear transformations, then immediately the source of the state (somebody typing it in? I don't think so) and the kind of transformations (written down explicitly? I don't think so) come in.

> And an AGI will have to do the same. Yes, it will have to be able to
> bite the bullet and run a full-blown search for a compact solution

Why "bite the bullet"? Optimisation is where it's all at.

> when necessary. But that's just plain too hard to be doing all the
> time, so an AGI will have to, like humans, mostly rely on existing
> concepts developed by other people.

People, as in not just bipedal primate people. And of course this assumes that everything is zero diversity, so you can just drop in modules and expect them to make sense. Just for the record of any future readers: not all of us are quite that silly.
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
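Eugen's percolation argument is easy to demonstrate in a few lines. The following is an illustrative sketch with an invented blocky fitness function, not anyone's actual system: credit is given only for complete blocks, creating wide neutral plateaus, and the (1+1) search accepts any mutant that is at least as fit, so it drifts along a plateau instead of stalling on it.

```python
import random

def fitness(bits, target, block=4):
    """Blocky fitness: credit only for fully matched blocks of `block`
    bits, so the landscape is mostly wide neutral plateaus."""
    return sum(
        1 for i in range(0, len(target), block)
        if bits[i:i + block] == target[i:i + block]
    )

def evolve(target, steps=20000, seed=0):
    """(1+1) evolutionary search that accepts neutral moves: a mutant
    replaces the parent if its fitness is >= the parent's, so the
    search percolates through the neutral network instead of stalling."""
    rng = random.Random(seed)
    n = len(target)
    bits = [rng.randint(0, 1) for _ in range(n)]
    f = fitness(bits, target)
    best = n // 4                       # number of blocks to complete
    for _ in range(steps):
        mutant = bits[:]
        mutant[rng.randrange(n)] ^= 1   # flip one random bit
        fm = fitness(mutant, target)
        if fm >= f:                     # '>=' is what permits neutral drift
            bits, f = mutant, fm
        if f == best:
            break
    return f, best
```

Replacing `>=` with `>` turns off neutral drift, and the same search routinely stalls with unfinished blocks; the percolation along the plateau is what gets it through.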
Re: [agi] structure of the mind
> As has been pointed out in this thread (I believe by Goertzel and Hall)
> Minsky's approach in Society of Mind et seq of adding large numbers
> of systems then begs the question: how will these things ever work
> together, and why should the system generalize?

How does adding auditory modules to our brain generalize anything? How does adding a new inference algorithm generalize anything? Because you have extra ways to process information, you can extract new information and build new modules around it. I don't see how adding information and code can be a bad thing (if you have enough cpu power); it will just make it more likely for the right subset to be part of your system.

> I criticized it from this point of view in What is Thought? One way
> to try to handle the organization then is an economic framework.

I thought the obvious equivalent of {economy and money} is information spreading. If you are a big player, a lot of other modules will take your outputs (information) and process it, giving you more influence overall. Useless information won't be further processed and will be a dead end in the system.
Re: [agi] structure of the mind
On 3/20/07, Eric Baum <[EMAIL PROTECTED]> wrote:

> This is the problem with Wallace's complaints. You actually want the
> "machine [to do] something unpredicted", namely the right thing in
> unpredicted circumstances. It's true that it's hard and expensive to
> engineer/find an underlying compact explanation, but it is precisely
> the fact that this very constrained/compact underlying program is so
> improbable that makes it work! The arguments for its working in fact
> *rest exactly* on the fact that it is so improbable, it wouldn't
> exist unless it generalized to new experiences. So while it's hard to
> engineer this, which might be called emergence, you will IMO be
> forced to if you want to succeed. That is the reason why AGI is hard.

It's one reason why AGI is hard, and there is truth in what you say.

However, ab initio search for compact explanation is so hard that we humans mostly don't do it because we can't. When we do have to bite the bullet and explicitly attempt it, it often takes entire communities of geniuses working for decades to produce a result that can be boiled down to a few lines. Newton, Darwin, Einstein et al were by no means the only ones working on their various problems. Koza has an example of the invention of a simple circuit, I think it was the negative feedback amplifier or somesuch, you could draw it on the back of a cigarette pack, it took a very bright engineer months or years of thinking before he cracked it, and there were lots of others trying at the same time.

What we mostly do is use existing solutions and blends thereof, that were developed by our predecessors over millions of lifetimes. Even when I'm programming, apparently writing new code, I'm really mostly using concepts I learned from other people, tweaking and blending them to fit the current context.

And an AGI will have to do the same. Yes, it will have to be able to bite the bullet and run a full-blown search for a compact solution when necessary.
But that's just plain too hard to be doing all the time, so an AGI will have to, like humans, mostly rely on existing concepts developed by other people.
Re: [agi] structure of the mind
Eric Baum wrote:

> Hayek doesn't directly scale from random start to an AGI architecture
> in as much as the learning is too slow. But the same is true of any
> other means of EC or learning that doesn't start with some huge head
> start. It seems entirely reasonable to merge a Hayek-like architecture
> with scaffolds and hand-coded chunks and other stuff (maybe whatever
> is in Novamente) to get it a head start.

This does seem reasonable in principle, and is something worth exploring. We use some economic ideas in the Novamente design, but those aspects of the design have not been implemented yet except in crude prototype form; and in the current version of the design they are more simplistic than (and much faster than) the sort of stuff in Hayek...

> An advantage of having the economic system then is to impose coherence
> and constrainedness -- parts that don't in fact work effectively with
> others will be seen to be dying, forcing you to fix the problems.
> Without the economic discipline, you are likely to have subsystems
> (and sub-subsystems) you think are positive but are failing in some
> way through interaction effects.

True. However, getting the economic system to work effectively enough to identify problems in a general and accurate way requires significant computational resources to be devoted to the economics aspect. So the system as a whole must make a tradeoff between more accurate economic regulation and having more processor time for things other than economic regulation...

-- Ben
Re: [agi] structure of the mind
This response will cover points raised by several previous posts in the emergence/agenda/structure of mind threads, by Goertzel, Hall, Wallace, etc.

What makes an intelligence "general", to the extent that is possible, is that it does the right thing on new tasks or new situations which it hadn't seen before. That's not going to happen unless the system is built in a very constrained way to respond to previous situations, say by being produced by very compact (or constrained) code. If you just keep adding new modules or features for each new task, you may solve that task, but you won't solve generalizations.

This is the problem with Wallace's complaints. You actually want the "machine [to do] something unpredicted", namely the right thing in unpredicted circumstances. It's true that it's hard and expensive to engineer/find an underlying compact explanation, but it is precisely the fact that this very constrained/compact underlying program is so improbable that makes it work! The arguments for its working in fact *rest exactly* on the fact that it is so improbable, it wouldn't exist unless it generalized to new experiences. So while it's hard to engineer this, which might be called emergence, you will IMO be forced to if you want to succeed. That is the reason why AGI is hard.

As has been pointed out in this thread (I believe by Goertzel and Hall), Minsky's approach in Society of Mind et seq of adding large numbers of systems then begs the question: how will these things ever work together, and why should the system generalize? I criticized it from this point of view in What is Thought? One way to try to handle the organization then is an economic framework.

Hayek doesn't directly scale from random start to an AGI architecture in as much as the learning is too slow. But the same is true of any other means of EC or learning that doesn't start with some huge head start.
It seems entirely reasonable to merge a Hayek-like architecture with scaffolds and hand-coded chunks and other stuff (maybe whatever is in Novamente) to get it a head start.

An advantage of having the economic system then is to impose coherence and constrainedness -- parts that don't in fact work effectively with others will be seen to be dying, forcing you to fix the problems. Without the economic discipline, you are likely to have subsystems (and sub-subsystems) you think are positive but are failing in some way through interaction effects.

The brain was not developed exactly through a Hayek system, but that doesn't mean it does not exploit one (for example, mediated by endorphins or whatever), nor that one might not be very useful to impose on an AGI.
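Baum's central claim, that it is the compactness of the program which forces generalization, can be seen in miniature. The sketch below is illustrative code of my own, not anything from What is Thought?: a "new module per case" learner memorizes five examples perfectly and knows nothing beyond them, while a two-parameter rule fitted to the same data is too constrained to do anything but capture the regularity, and so does the right thing on unseen input.

```python
def train_lookup(pairs):
    """'A module per task': memorize each training case outright."""
    table = dict(pairs)
    return lambda x: table.get(x)   # None on anything unseen

def train_compact(pairs):
    """A two-parameter line fit by ordinary least squares: far fewer
    degrees of freedom than cases, so it must find the regularity."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

# Five training examples of the hidden rule y = 2x + 1.
train = [(x, 2 * x + 1) for x in range(5)]
memorizer = train_lookup(train)
rule = train_compact(train)
```

Both fit the training set exactly; only the compact rule answers correctly at x = 10, which neither learner has seen.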
Re: [agi] structure of the mind
YKY> On 3/20/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:

>> There is one way you can form a coherent, working system from a
>> congeries of random agents: put them in a marketplace. This has a
>> fairly rigorous discipline of its own and most of them will not
>> survive... and of course the system has to have some way of coming
>> up with new ones that will. [...]

YKY> This is assuming that you have a *massive* number of agents who
YKY> participate in said market. In reality I don't think there are a
YKY> massive number of narrow AI projects wanting to plug into a large
YKY> AI ecosystem. There are "many", but not "massively many", narrow
YKY> AI projects out there.

YKY> For example, I can believe there are 100s of face-recognition
YKY> systems world-wide, but definitely not > 10,000.

YKY> Can you clarify: are those "agents" all engineered by one group
YKY> of programmers, or are they recruited externally, eg from the
YKY> internet?

I think what Josh had in mind was a system like my Hayek system, where the agents were actually evolved. But there is no reason why engineered agents couldn't participate also.

The point of the market is to provide feedback to the agents. If the economy is set up right, then agents that earn money are contributing to the performance of the overall system, and agents that lose money are harming it. Thus the designer of the agent, whether it be an evolutionary algorithm or a programmer or a team of programmers, can pay attention to a local signal: earning money.

YKY> In many ways, my rule-based production system cum truth
YKY> maintenance system can be viewed as a marketplace (of production
YKY> rules or beliefs). The beliefs in such a system depend on its
YKY> experience, are unpredictable, and are therefore emergent. In
YKY> this sense, *any* AGI would display emergent behavior.
YKY> It all goes back to my original analysis: everyone wants to start
YKY> their own "marketplace" and get other people to participate in
YKY> it.

The caveat is that this only works if your economy is set up correctly. It has to obey the principles of "conservation of money" and "property rights". Numerous AI systems that believed they were invoking economic-like organizations -- such as Eurisko and Classifier Systems -- ran into pathologies because they didn't construct the economy properly. (And related pathologies can be observed in the real economy and ecosystem, where these principles are violated.) I recommend Chapter 10 of What is Thought? for a more extended explanation.
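The two principles Baum names can be written down as a small invariant-enforcing ledger. This is an illustrative toy of my own, not code from Hayek, Eurisko, or Classifier Systems: every payment must come out of the payer's own balance (property rights) and can only transfer money, never create it (conservation).

```python
class Ledger:
    """Toy economy enforcing 'conservation of money' (payments only
    transfer, never mint) and 'property rights' (an agent's balance
    can only be spent by that agent, and never driven below zero)."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.total = sum(self.balances.values())

    def pay(self, payer, payee, amount):
        if amount < 0 or self.balances[payer] < amount:
            raise ValueError("property rights violated")
        self.balances[payer] -= amount
        self.balances[payee] += amount
        # Conservation: the transfer leaves total wealth unchanged.
        assert sum(self.balances.values()) == self.total
```

The Eurisko-style pathology of a rule crediting itself becomes harmless here: a self-payment is a net-zero transfer, and no agent can spend money it doesn't own.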
Re: [agi] structure of the mind
On 3/20/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:

> There is one way you can form a coherent, working system from a
> congeries of random agents: put them in a marketplace. This has a
> fairly rigorous discipline of its own and most of them will not
> survive... and of course the system has to have some way of coming up
> with new ones that will. [...]

This is assuming that you have a *massive* number of agents who participate in said market. In reality I don't think there are a massive number of narrow AI projects wanting to plug into a large AI ecosystem. There are "many", but not "massively many", narrow AI projects out there. For example, I can believe there are 100s of face-recognition systems world-wide, but definitely not > 10,000.

Can you clarify: are those "agents" all engineered by one group of programmers, or are they recruited externally, eg from the internet?

In many ways, my rule-based production system cum truth maintenance system can be viewed as a marketplace (of production rules or beliefs). The beliefs in such a system depend on its experience, are unpredictable, and are therefore emergent. In this sense, *any* AGI would display emergent behavior.

It all goes back to my original analysis: everyone wants to start their own "marketplace" and get other people to participate in it.

YKY
Re: [agi] structure of the mind
On Monday 19 March 2007 19:36, Ben Goertzel wrote:

> For instance, Baum's Hayek is an innovative and exciting use of
> economics in an AI learning context, yet the approach seems not to be
> scalable into anything resembling an AGI architecture.

Charles Smith (http://autogeny.org/chsmith.html) built somewhat bigger structures than Hayek, but the two are not directly comparable. I think CS is completely scalable. The key thing that price theory does is to bring global knowledge to bear in such a way that local decisions, made on price info alone, act toward a global optimization. (Nothing magic, it's n-dimensional hill-climbing not unlike backprop. Still, there's 200+ years of mathematical analysis for the taking to evaluate market mechanisms.)

> Novamente uses economic ideas in some aspects, but mainly just for
> allocation of attention (system resources) among different internal
> processes.

That can work...

> My strong intuitive feeling is that using a virtual marketplace to
> originate a coherent working system from a congeries of random agents
> would not be computationally feasible.

Yes and no. It definitely needs a head start and a push in my experiments, i.e. a reasonably well designed system which the market just keeps on track. But in personal experience, it takes the actual brain about 50 years to settle down till it really understands what it's doing :-)

> However, I really doubt the brain relies on emergent market dynamics
> to enable interoperation of its various components.

In the research I did before CS, we did some experiments comparing our rational design algos to the standard, cost-oblivious ones. The rational versions, i.e. algorithms that balanced the computational resources they expected to use against the expected value-added in any given search, typically used half the time to achieve comparable results.
They used some fairly brain-cracking incremental statistical modelling, tho, and I hoped to avoid the hard work with the market model :-)

Josh
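Josh's point that prices let purely local decisions perform a global hill-climb has a textbook one-dimensional illustration, Walrasian price adjustment (tatonnement). The linear supply and demand curves below are invented for the sketch and have nothing to do with Charles Smith's actual system: an auctioneer moves the price in proportion to excess demand, and no participant ever sees anything but the current price.

```python
def demand(p):
    return 10.0 - p     # buyers want less as the price rises

def supply(p):
    return p            # sellers offer more as the price rises

def tatonnement(p=0.0, eta=0.1, steps=200):
    """Walrasian price adjustment: each round the price moves in
    proportion to excess demand. Local information only, yet the
    process climbs toward the market-clearing point -- the
    'local decisions, global optimization' property of prices."""
    for _ in range(steps):
        p += eta * (demand(p) - supply(p))
    return p
```

Starting from any price, the iteration converges geometrically to the market-clearing price of 5, where demand equals supply; in higher dimensions the same local rule is the hill-climbing Josh compares to backprop.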
Re: [agi] structure of the mind
J. Storrs Hall, PhD. wrote:

> On Monday 19 March 2007 17:30, Ben Goertzel wrote:
>> ... My own view these days is that a wild combination of agents is
>> probably not the right approach, in terms of building AGI. Novamente
>> consists of a set of agents that have been very carefully sculpted
>> to work together in such a way as to (when fully implemented and
>> tuned) give rise to the right overall emergent structures.
>
> There is one way you can form a coherent, working system from a
> congeries of random agents: put them in a marketplace. This has a
> fairly rigorous discipline of its own and most of them will not
> survive... and of course the system has to have some way of coming up
> with new ones that will.

In principle, yeah, this can work. But we have to remember that the biggest problem of AGI is dealing with severe computational resource limitations (and the brain's resources are also to be considered severely limited, compared to what naive computational algorithms could easily consume, mathematically speaking). The question is whether a "virtual marketplace" is a viable approach to AGI, in terms of computational expense...

For instance, Baum's Hayek is an innovative and exciting use of economics in an AI learning context, yet the approach seems not to be scalable into anything resembling an AGI architecture. Novamente uses economic ideas in some aspects, but mainly just for allocation of attention (system resources) among different internal processes.

My strong intuitive feeling is that using a virtual marketplace to originate a coherent working system from a congeries of random agents would not be computationally feasible. This, to me, falls into the same general category as "build a primordial soup and let Alife and then AI evolve from it." Yes, these things can work given enough resources. But the resource requirements are way higher than for more direct engineering-oriented approaches.

The brain may well involve some economics-ish dynamics.
Energy minimization and energy conservation certainly share some common factors with profit maximization and money conservation. However, I really doubt the brain relies on emergent market dynamics to enable interoperation of its various components. The interoperation of the components was originated via evolution, and is merely tuned and slightly adjusted by brain dynamics during the life of the organism (quasi-"economic" or otherwise).

-- Ben G