Re: [agi] OpenCog
On Dec 28, 2007 4:17 AM, Ed Porter [EMAIL PROTECTED] wrote: Richard, You are entitled to your reservations about OpenCog, but others, like me, are entitled to our enthusiasms about it. You are correct that OpenCog starts with a certain approach, but I think it is an approach that has a lot of promise, and if it has fatal limitations, hopefully OpenCog will help us learn about them, so either the system can be improved or replaced by a better approach. If you have another approach, I wish you good luck with it. I can't be too enthusiastic about OpenCog yet because I know next to nothing about it, despite all these 'executive' publications and stray papers about Novamente. Let's wait and see. -- Vladimir Nesov [EMAIL PROTECTED] - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=8660244id_secret=79854254-d72e0c
Re: [agi] OpenCog
OpenCog is definitely a positive thing to happen in the AGI scene. It's been all vaporware so far. I wonder what the level of participation will be? Also I think it's going to increase the chance of a safe takeoff, by exposing users and developers gradually to AGI. But we also need to have some security measures. I look forward to seeing it! YKY
RE: [agi] AGI and Deity
But the traditional gods didn't represent the unknowns, but rather the knowns. A sun god rose every day and set every night in a regular pattern. Other things which also happened in this same regular pattern were adjunct characteristics of the sun god. Or look at some of their names, carefully: Aphrodite, she who fucks. I.e., the characteristic of all Woman that is embodied in eros. (Usually the name isn't quite that blatant.) Well yes, gods were (are) sort of like distributed knowledge bases. The distributed entity may or may not exist if you took the humans out of the equation. So if you nuked the earth when Aphrodite was popular, would she still exist? Maybe residual molecular and quantum permutations of some sort, distributed, but the majority of her existed in the social human substrate. She was added to and changed over time; some of the information was compressed and lossily extractable, but some of the knowledge was not extractable beyond compression, distorted and twisted. But she was composed of both known and unknown representations - but contained utility. Gods represent the regularities of nature, as embodied in our mental processes without the understanding of how those processes operated. (Once the processes started being understood, the gods became less significant.) Yes this is the pattern. I'm arguing that much of our individual and social knowledge has layers and layers directly related to deities and even more so things like taboos, myths, ceremonies, etc., even though many people today totally renounce any sort of religious belief. IOW it is so baked into us, but the question is how much of it is baked into knowledge and intelligence itself. Sometimes there were chance associations... and these could lead to strange transformations of myth when things became more understood. In Sumeria the goddess of love was associated with (identified with) the evening star and the god of war was associated with (identified with) the morning star.
When knowledge of astronomy advanced it was realized that those two were identical, and they ended up with Ishtar, the goddess of Love and War. Because lovers tend to meet in the early evening, and warriors tend to try to launch the attack as soon as they can see what's going on (to catch the victims by surprise). This is a small part of why I believe that human intelligence is largely a development from pattern matching. Certainly, and the whole pattern matching function that is in our brains may or may not be the most efficient mechanism available, due to the way it has evolved. Evolution can create extremely efficient mechanisms and also inefficient ones. John
Re: [agi] NL interface
On Dec 28, 2007 12:45 AM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: That's why I want to build an interface that lets users provide grammatical information and the like. The exact form of the GUI is still unknown -- maybe like a panel with a lot of templates to choose from, or like the autocomplete feature. I have previously recommended the interface used in the Alice programming environment. (www.Alice.org) The object browser can be directly acted upon, or the objects can be drag/dropped into the programming pane where each of the object's methods is exposed, then the parameters for each method are supplied. It quickly becomes an intuitive process. The resulting statement makes the syntax obvious, and each choice can be updated by reselecting from a picklist. Even if you have no interest in animation, the programming interface does a really good job of providing flexibility without being too complicated.
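The template-plus-autocomplete idea YKY describes could be prototyped very simply. The sketch below is purely illustrative (all names and the tiny vocabulary are my own assumptions, not from any actual system): sentence templates expose named slots, fillers come from a picklist vocabulary, and an autocomplete helper narrows the choices by typed prefix.

```python
# Hypothetical sketch of a template-driven NL input panel.
# Templates expose named slots; fillers are chosen from a vocabulary,
# with prefix-based autocomplete standing in for the GUI picklist.

TEMPLATES = {
    "property": "{entity} is {adjective}",
    "relation": "{entity} {verb} {entity2}",
}

VOCAB = {
    "entity": ["John", "Mary", "the cat"],
    "entity2": ["John", "Mary", "the cat"],
    "adjective": ["tall", "happy"],
    "verb": ["likes", "chases"],
}

def autocomplete(slot, prefix):
    """Return vocabulary entries for a slot that start with the typed prefix."""
    return [w for w in VOCAB[slot] if w.lower().startswith(prefix.lower())]

def fill(template_name, **slots):
    """Instantiate a template, validating every filler against the vocabulary."""
    for slot, word in slots.items():
        if word not in VOCAB[slot]:
            raise ValueError(f"{word!r} is not a known filler for {slot!r}")
    return TEMPLATES[template_name].format(**slots)

print(autocomplete("entity", "J"))  # -> ['John']
print(fill("relation", entity="Mary", verb="chases", entity2="the cat"))
# -> Mary chases the cat
```

A real panel would drive `autocomplete` from keystrokes and render the template slots as dropdowns, much like the Alice picklist interface described above.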
Re: [agi] OpenCog
Benjamin Goertzel wrote: I wish you much luck with your own approach And, I would imagine that if you create a software framework supporting your own approach in a convenient way, my own currently favored AI approaches will not be conveniently explorable within it. That's the nature of framework-building. Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. My purpose is to create a description language that allows us to talk about different types of AGI system, and then construct design variations automatically. Richard Loosemore
Re: [agi] OpenCog
On Dec 28, 2007 5:59 AM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: OpenCog is definitely a positive thing to happen in the AGI scene. It's been all vaporware so far. Yes, it's all vaporware so far ;-) On the other hand, the code we hope to release as part of OpenCog actually exists, but it's not yet ready for opening up, as some of it needs to be extracted from the overall Novamente code base, and other parts of it need to be cleaned up in various ways... Much of the reason for yakking about it months in advance of releasing it was a desire to assess the level of enthusiasm for it. There are a number of enthusiastic potential OpenCog developers on the OpenCog mail list, so in that regard, I feel the response has been enough to merit proceeding with the project... I wonder what the level of participation will be? Time will tell! -- Ben
RE: [agi] AGI and Deity
On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote: Dawkins trivializes religion from his comfortable first world perspective, ignoring the way of life of hundreds of millions of people, and offers little substitute for what religion does and has done for civilization and what has come out of it over the ages. He's a spoiled brat prude with a glaring self-righteous desire to prove to people with his copious superficial factoids that god doesn't exist by pandering to common frustrations. He has little common sense about the subject in general, just his Wow. Nice to see someone take that position on Dawkins. I'm ambivalent, but I haven't seen many rational comments against him and his views. Nice? Why? I thought you wanted rational comments. Rational by definition means comments giving reasons, which the above do not. Well I shouldn't berate the poor dude... The subject of rationality is pertinent though, as the way that humans deal with the unknown involves irrationality, especially in relation to the establishment of belief in deities. Before we had all the scientific instruments and methodologies, irrationality played an important role. How many AGIs have engineered irrationality as functional dependencies? Scientists and computer geeks sometimes overly apply rationality in irrational ways. The importance of irrationality is perhaps underplayed, as before science, going from primordial sludge to the age of reason was quite a large percentage of man's time spent in existence... and here we are. John
Re: [agi] OpenCog
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. ... considering the proliferation of AGI frameworks, it would appear that any other framework is pretty easy to build, no? ok, I'm being deliberately snarky - but if someone wrote about your own work the way you write about others, I imagine you would become increasingly defensive. My purpose is to create a description language that allows us to talk about different types of AGI system, and then construct design variations automatically. I do believe an academic formalism for discussing AGI would be valuable to allow different camps to identify their similarities/differences in approach and implementation. However, I do not believe that AGI will arise automatically from meta-discussion. My guess is that any system that is generalized enough to apply across design paradigms will lack the granular details required for actual implementation. I applaud the effort required to succeed at your task, but it does not seem to me that you are building AGI as much as inventing a lingua franca for AGI builders. I admit in advance that I may be wrong. This is (after all) just a friendly discussion list and nobody's livelihood is being threatened here, right?
Re: [agi] OpenCog
IMHO more important than working towards contributing clean code would be to *publish the (required) interfaces for the modules as well as give standards for/details on the knowledge representation format*. I am sure that you have those spread over various internal and published documents (indeed, developing a system like Novamente or proposing a framework is impossible without those) but a cut-and-paste of the relevant sections is essential documentation for the framework. Also a concrete example of how a third-party module would slot into this framework would be mightily useful. I am raising this because many would-be AGI developers have to decide on an interface and KR standard even if they develop their own proprietary system - lots of mileage would be gotten from not having to reinvent the wheel. =Jean-Paul -- Research Associate: CITANDA Post-Graduate Section Head Department of Information Systems Phone: (+27)-(0)21-6504256 Fax: (+27)-(0)21-6502280 Office: Leslie Commerce 4.21 On 2007/12/28 at 14:59, in message [EMAIL PROTECTED], Benjamin Goertzel [EMAIL PROTECTED] wrote:
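Jean-Paul's request for published module interfaces, a KR standard, and a concrete slot-in example might look something like the sketch below. This is purely illustrative: the class names, the `Atom` structure, and the `process` entry point are my own assumptions, not the actual Novamente/OpenCog API.

```python
# Hypothetical sketch of a pluggable-module contract for an AGI framework.
# A module declares the KR format it speaks and exposes one entry point;
# a third party only needs this published contract to slot a module in.

from abc import ABC, abstractmethod

class Atom:
    """Toy KR unit: a typed node with a name and a truth value."""
    def __init__(self, atom_type, name, truth=1.0):
        self.atom_type, self.name, self.truth = atom_type, name, truth

class CognitiveModule(ABC):
    kr_format = "atoms-v0"  # the published KR standard this module consumes

    @abstractmethod
    def process(self, atoms):
        """Consume a list of Atoms and return a (possibly new) list of Atoms."""

class AttentionDecay(CognitiveModule):
    """Example third-party module: decay every atom's truth value."""
    def __init__(self, rate=0.9):
        self.rate = rate

    def process(self, atoms):
        return [Atom(a.atom_type, a.name, a.truth * self.rate) for a in atoms]

# The host framework drives any conforming module the same way:
module = AttentionDecay(rate=0.5)
out = module.process([Atom("ConceptNode", "cat", truth=0.8)])
print(out[0].name, out[0].truth)  # -> cat 0.4
```

The point of publishing even a contract this small is exactly the one made above: a would-be developer can build against the interface and KR format without waiting for, or reading, the framework's internal code.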
Re: [agi] OpenCog
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: I wish you much luck with your own approach And, I would imagine that if you create a software framework supporting your own approach in a convenient way, my own currently favored AI approaches will not be conveniently explorable within it. That's the nature of framework-building. Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. My purpose is to create a description language that allows us to talk about different types of AGI system, and then construct design variations automatically. I don't believe it is possible to create a framework that both a) is unbiased regarding design type b) makes it easy to construct AGI designs Just as different programming languages are biased toward different types of apps, so with different AGI frameworks... -- Ben
Re : [agi] OpenCog
http://gbbopen.org/ - Original message From: Benjamin Goertzel [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Friday, 28 December 2007, 15:14:10 Subject: Re: [agi] OpenCog
RE: [agi] AGI and Deity
From: Samantha Atkins [mailto:[EMAIL PROTECTED] Indeed. Some form of instantaneous information transfer would be required for unlimited growth. If it also turned out that true time travel was possible then things would get really spooky. Alpha and Omega. Mind without end. I think that it is going to be constrained by the speed of light, especially in the very beginning and especially if it is software based. Nanotechnological AGI may be able to figure out a way, if it is not engineered initially, to transform itself from a super-atomic embodiment to a subatomic, quantum or sub-quantum embodiment and potentially thwart the speed of light and even communicate and/or transfer/replicate to other multiverses. I don't know if inter-multiverse communication is constrained by the speed of light; I think that it is not, depending on the multiverse instance and communication medium. But initially software AGI is most definitely constrained. If it's going to become more efficient intelligence-wise within physical and computational resource constraints it will need to come up with better stuff mathematically/algorithmically. And the mathematical constraint space is limited by other factors. The thing is definitely constrained if it cannot alter its physical medium (electronic - CPU, memory, etc.). How much intelligence and knowledge can be achieved with a particular amount of resources is up for debate, I believe. But if intelligence has units you could probably figure out how much intelligence maximally would fit into a finite resource set... John
Re: [agi] OpenCog
Benjamin Goertzel wrote: On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Benjamin Goertzel wrote: I wish you much luck with your own approach And, I would imagine that if you create a software framework supporting your own approach in a convenient way, my own currently favored AI approaches will not be conveniently explorable within it. That's the nature of framework-building. Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. My purpose is to create a description language that allows us to talk about different types of AGI system, and then construct design variations automatically. I don't believe it is possible to create a framework that both a) is unbiased regarding design type Nobody says unbiased. b) makes it easy to construct AGI designs Then you have not been paying attention :-) (because I know for a fact that I have said this to you in the past) I am specifically targeting the problem of making it easier. In my environment your Novamente system would be harder to implement than a system that is better suited to my framework, BUT the point of all the effort I am making is that your system would be (e.g.) ten times easier to build than it is now, whereas my type of AGI design would be (e.g.) a thousand times easier to build than it would be if I had to hand-craft it using the currently available tools. Either way, it would be easier. Richard Loosemore
Re: [agi] OpenCog
Mike Dougherty wrote: On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. ... considering the proliferation of AGI frameworks, it would appear that any other framework is pretty easy to build, no? ok, I'm being deliberately snarky - but if someone wrote about your own work the way you write about others, I imagine you would become increasingly defensive. You'll have to explain, because I am honestly puzzled as to what you mean here. I mean framework in a very particular sense (something that is a theory generator but not by itself a theory, and which is a complete account of the domain of interest). As such, there are few if any explicit frameworks in AI. Implicit ones, yes, but not explicit. I do not mean framework in the very loose sense of a bunch of tools or a bunch of mechanisms. And in my comment to Ben, I said any other in reference to a particular AI system, not referring to frameworks at all. As for the way I write about others' work, I don't understand. I have done a particular body of research in AI/cognitive science, and as a result I have published a paper in which I have explained that there is a very serious problem with the methodological foundations of all current approaches to AI. As a result I am obliged to point out that many things said about AI fall within the scope of that problem. This is not personal nastiness on my part, just a consequence of the research I have done. Should anyone become defensive or offended by that? Not at all. So I am confused. As for the comment above: because of that problem I mentioned, I have evolved a way to address it, and this approach means that I have to devise a framework that allows an extremely wide variety of AI systems to be constructed within the framework (this was all explained in my paper).
As a result, the framework can encompass Ben's systems as easily as any other. It could even encompass a system built on pure mathematical logic, if need be. This is not a particularly dramatic statement. My purpose is to create a description language that allows us to talk about different types of AGI system, and then construct design variations automatically. I do believe an academic formalism for discussing AGI would be valuable to allow different camps to identify their similarity/difference in approach and implementation. However, I do not believe that AGI will arise automatically from meta-discussion. Oh, nobody expects it to arise automatically - I just want the system-building process to become more automated and less hand-crafted. My guess is that any system that is generalized enough to apply across design paradigms will lack the granular details required for actual implementation. On the contrary, that is why I have spent (am still spending) such an incredible amount of effort on building the thing. It is entirely possible to envision a cross-paradigm framework. Give me about $10 million a year in funding for the next three years, and I will deliver that system to your desk on January 1st 2011. I applaud the effort required to succeed at your task, but it does not seem to me that you are building AGI as much as inventing a lingua franca for AGI builders. Not really. I don't want a lingua franca as such, I just need the LF as part of the process of addressing the complex systems problem. I admit in advance that I may be wrong. This is (after all) just a friendly discussion list and nobody's livelihood is being threatened here, right? No, especially since few people are being paid full time to work on AGI projects. There is, though, the possibility that a lot of effort could be wasted on yet another AI project that starts out with no clear idea of why it thinks that its approach is any better than anything that has gone before.
Given the sheer amount of wasted effort expended over the last fifty years, I would be pretty upset to see it happen yet again. Richard Loosemore
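Loosemore's stated goal of a description language for AGI systems, from which design variations are constructed automatically, can be illustrated with a toy sketch. This is entirely my own hypothetical construction with no relation to his actual framework: a design is a declarative spec listing the allowed alternatives for each component, and concrete variants are enumerated from the product of those choices.

```python
# Toy illustration of a declarative design-description language
# (hypothetical; not Loosemore's actual framework). Each component
# of a design lists its allowed alternatives; concrete design
# variants are the cartesian product of those choices.

from itertools import product

design_spec = {
    "memory": ["semantic-net", "vector-store"],
    "learning": ["hebbian", "bayesian", "evolutionary"],
    "control": ["blackboard"],
}

def generate_variants(spec):
    """Enumerate every concrete design permitted by the spec."""
    keys = list(spec)
    for combo in product(*(spec[k] for k in keys)):
        yield dict(zip(keys, combo))

variants = list(generate_variants(design_spec))
print(len(variants))  # -> 6 (2 memory x 3 learning x 1 control)
```

A real framework would of course attach implementations and constraints to each choice rather than bare strings, but the spec-then-enumerate shape is one plausible reading of "construct design variations automatically."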
Re: [agi] OpenCog
On Dec 28, 2007 1:55 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Mike Dougherty wrote: On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. ... considering the proliferation of AGI frameworks, it would appear that any other framework is pretty easy to build, no? ok, I'm being deliberately snarky - but if someone wrote about your own work the way you write about others, I imagine you would become increasingly defensive. You'll have to explain, because I am honestly puzzled as to what you mean here. I am not a published computer scientist. I recognize there are a lot of brains here working at a level beyond my experience. I was only pointing out that using language like just as easy to build to trivialize your system could be confrontational. It may not deliberately offend anyone, either because they are also not concerned about this nuance or they discount your attitude as a matter of course. I think with slightly different sentence constructions your ideas would be better received and sound less condescending. That's all I was saying on that. I mean framework in a very particular sense (something that is a theory generator but not by itself a theory, and which is a complete account of the domain of interest). As such, there are few if any explicit frameworks in AI. Implicit ones, yes, but not explicit. I do not mean framework in the very loose sense of a bunch of tools or a bunch of mechanisms. hmm... I never considered framework in that context. I thought framework referred to more of a scaffolding to enable work. As such, a given scaffolding suits a specific kind of building. Though I can see how it can be general enough to apply to multiple building designs.
As for the comment above: because of that problem I mentioned, I have evolved a way to address it, and this approach means that I have to devise a framework that allows an extremely wide variety of AI systems to be constructed within the framework (this was all explained in my paper). As a result, the framework can encompass Ben's systems as easily as any other. It could even encompass a system built on pure mathematical logic, if need be. I believe I misunderstood your original statement. This clarification makes more sense. Oh, nobody expects it to arise automatically - I just want the system-building process to become more automated and less hand-crafted. Again, I agree this is a good goal - but isn't it akin to optimizing too early in a development process? Sure, there are well-known solutions to certain classes of problem. Building a sloppy implementation of those solutions is foolish when there are existing 'best practice' methods. Is there currently a best practice way to achieve AI? Let me preemptively agree that we should all continuously strive to implement better practices than we may currently be comfortable with - we should be doing that anyway. (how can we build self-improving systems if we are not examples of such ourselves) My guess is that any system that is generalized enough to apply across design paradigms will lack the granular details required for actual implementation. On the contrary, that is why I have spent (am still spending) such an incredible amount of effort on building the thing. It is entirely possible to envision a cross-paradigm framework. With a different understanding of your use of framework I am less dubious of this position. Give me about $10 million a year in funding for the next three years, and I will deliver that system to your desk on January 1st 2011. Well, I'd love to have the cash on hand to prove you wrong. It would be a nice condition to have for both of us.
There is, though, the possibility that a lot of effort could be wasted on yet another AI project that starts out with no clear idea of why it thinks that its approach is any better than anything that has gone before. Given the sheer amount of wasted effort expended over the last fifty years, I would be pretty upset to see it happen yet again. Considering the amount of wasted effort in every other sector that I have experience with, I think you should keep your expectations low. Again, I would like to be wrong.
Re: [agi] AGI and Deity
On Dec 28, 2007, at 5:34 AM, John G. Rose wrote: Well I shouldn't berate the poor dude... The subject of rationality is pertinent though, as the way that humans deal with the unknown involves irrationality, especially in relation to the establishment of belief in deities. Before we had all the scientific instruments and methodologies, irrationality played an important role. How many AGIs have engineered irrationality as functional dependencies? Scientists and computer geeks sometimes overly apply rationality in irrational ways. The importance of irrationality is perhaps underplayed, as before science, going from primordial sludge to the age of reason was quite a large percentage of man's time spent in existence... and here we are. Methinks there is no clear notion of rationality or rational in the above paragraph. Thus I have no idea of what you are actually saying. Rational is not synonymous with science. What forms of irrationality do you think have a place in an AGI, and why? What does the percentage of time supposedly spent in some state have to do with the importance of such a state, especially with respect to an AGI? - samantha