Re: [agi] Relevance of SE in AGI
At the current time, almost all AGI projects are still working on conceptual design issues, and the systems developed are just prototypes, so software engineering is not that relevant. In the future, when most of the theoretical problems have been solved, and especially when it becomes clear that one approach is going to lead us to AGI, software engineering will become really relevant. The existing AI applications are not that different from ordinary computer applications, for which software engineering is necessary, but there isn't much intelligence in them.

BTW, in a sense software engineering is just the opposite of artificial intelligence: the latter tries to make machines work as flexibly as humans, while the former tries to make humans (programmers) work as rigidly as machines. ;-)

Pei

On Sat, Dec 20, 2008 at 8:28 PM, Valentina Poletti jamwa...@gmail.com wrote:

I have a question for you AGIers... from your experience as well as from your background, how relevant do you think software engineering is in developing AI software and, in particular, AGI software? Just wondering... does software verification as well as correctness proving serve any use in this field? Or is this something used just for NASA and critical applications?

Valentina

---
agi Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com
Re: [agi] Relevance of SE in AGI
2008/12/21 Valentina Poletti jamwa...@gmail.com:

I have a question for you AGIers... from your experience as well as from your background, how relevant do you think software engineering is in developing AI software and, in particular, AGI software?

If by software engineering you mean techniques for writing software better, then software engineering is relevant to all production of software, whether for AI or anything else. AI can be thought of as a particularly hard field of software development.

Just wondering... does software verification as well as correctness proving serve any use in this field?

I've never used formal proofs of correctness of software, so I can't comment. I use software testing (unit tests) on pretty much all non-trivial software that I write -- I find doing so makes things much easier.

--
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
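Philip's unit-testing point is easy to make concrete. The sketch below is hypothetical (the `tokenize` helper and its tests are invented for illustration, not taken from any poster's code): each small test pins down one behavior, so when the function is later changed, a regression surfaces immediately instead of propagating garbage downstream.

```python
def tokenize(text):
    """Toy helper: split text into lowercase word tokens."""
    return [w.lower() for w in text.split()]

def test_splits_on_whitespace():
    # One behavior per test: whitespace splitting plus lowercasing.
    assert tokenize("Hello World") == ["hello", "world"]

def test_empty_input():
    # Edge case pinned down explicitly.
    assert tokenize("") == []

# Run directly; a runner like pytest would discover these automatically.
test_splits_on_whitespace()
test_empty_input()
```

The value is less in any single assertion than in the habit: non-trivial code ships with tests, so refactoring stays cheap.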
Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience
I know Dharmendra Modha a bit, and I've corresponded with Eugene Izhikevich, who is Edelman's collaborator on large-scale brain simulations. I've read Tononi's stuff too. I think these are all smart people with deep understandings, and all in all this will be research money well spent.

However, there is no design for a thinking machine here. There is cool work on computer simulations of small portions of the brain. I find nothing to disrespect in the scientific work involved in this DARPA project. It may not be the absolute most valuable research path, but it's a good one. However, IMO the rhetoric associating it with thinking-machine building is premature and borderline dishonest. It's marketing rhetoric. It's more like interesting brain simulation research that could eventually play a role in some future thinking-machine-building project, whose nature remains largely unspecified.

Getting into the nitty-gritty a little more: until we understand way, way more about how brain dynamics and structures lead to thoughts, and/or have way, way better brain imaging data, we're not going to be able to build a thinking machine via brain simulation.

-- Ben G

On Sat, Dec 20, 2008 at 5:25 PM, Ed Porter ewpor...@msn.com wrote:

I don't think this AGI list should be so quick to dismiss a $4.9 million grant to create an AGI. It will not necessarily be vaporware. I think we should view it as a good sign. Even if it is a project that runs the risk, like many DARPA projects (and like most scientific funding in general), of not necessarily placing its money where it might do the most good, it is likely to at least produce some interesting results, and it just might make some very important advances in our field.

The article from http://www.physorg.com/news148754667.html said: "…a $4.9 million grant…for the first phase of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.
Tononi and scientists from Columbia University and IBM will work on the software for the thinking computer, while nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create the hardware. Dharmendra Modha of IBM is the principal investigator. The idea is to create a computer capable of sorting through multiple streams of changing data, to look for patterns and make logical decisions. There's another requirement: The finished cognitive computer should be as small as the brain of a small mammal and use as little power as a 100-watt light bulb. It's a major challenge. But it's what our brains do every day."

I have just spent several hours reading a Tononi paper, "An information integration theory of consciousness", and skimmed several parts of the book A Universe of Consciousness he wrote with Edelman, whom Ben has referred to often in his writings. (I have attached my markup of the article; if you read just the yellow-highlighted text, or (for more detail) the red, you can get a quick understanding of it. You can also view it in MS Word outline mode if you like.)

This paper largely agrees with my notion, stated multiple times on this list, that consciousness is an incredibly complex computation that interacts with itself in a very rich manner that makes it aware of itself. However, it is not clear to me --- from reading this paper and one full chapter of A Universe of Consciousness on Google Books, and spending about fifteen minutes skimming the rest of it --- that either he or Edelman has anything approaching Novamente's or OpenCog's detailed description of how to build an AGI. I did not hear enough discussion of the role of grounding, or of the need for proper selection in the spreading activation of a representational net so that the consciousness would be one of awareness of appropriate meaning.
But Tononi is going to work with Dharmendra Modha of IBM, who is a leader in brain simulation, so they may well produce something interesting. I personally think it would be more productive to spend the money on a more Novamente-like approach, where we already seem to have good ideas for how to solve most of the hard problems (other than staying within a computational budget, and parameter tuning) --- but whatever this project discovers should, at least, be relevant.

Furthermore, what little I have read about the hardware side of this project is very exciting, since it provides a much more brain-like platform, which, if it could be made to work using memristors or graphene-based technology, could enable artificial brains to be made for amazingly low prices, with energy costs 1/1000 to 1/30,000 those of CMOS machines with similar computational power. Its goal is to develop a technology that would enable AGIs to be built small enough that we could carry them around like an iPhone (albeit with large batteries, at least for a decade or so).

In any case, I think we
Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience
2008/12/21 Ben Goertzel b...@goertzel.org:

However, IMO the rhetoric associating it with thinking machine building is premature and borderline dishonest. It's marketing rhetoric. It's more like interesting brain simulation research that could eventually play a role in some future thinking-machine-building project, whose nature remains largely unspecified.

Yes, which would sound less dramatic. Some time ago there was a similarly borderline-dishonest report that a mouse brain had been simulated on a supercomputer. This sounded exciting, but it turned out that they had merely been able to simulate a number of neuron-like elements (the Izhikevich spiking model, I think) similar in quantity to a mouse-sized brain within some tractable amount of time, which is not quite as impressive.

This kind of research is eventually doomed to succeed, but at present we still don't know in detail how even a mouse brain is organized, beyond a fairly gross level of anatomy. Some of the newer techniques, such as the genetic modification that gives each neuron a unique colour, should be helpful in this regard.
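For concreteness, the Izhikevich spiking model mentioned above is a two-variable system, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with v reset to c (and u bumped by d) whenever v reaches +30 mV. A minimal single-neuron sketch follows; the a, b, c, d values are Izhikevich's published "regular spiking" defaults, while the step size, duration, and input current are illustrative choices of mine, not from the report discussed:

```python
def izhikevich_spikes(a=0.02, b=0.2, c=-65.0, d=8.0, current=10.0,
                      t_max_ms=500.0, dt=0.25):
    """Simulate one Izhikevich neuron with forward-Euler steps and
    return the list of spike times in milliseconds."""
    v, u = c, b * c               # start at the resting state
    spikes, t = [], 0.0
    while t < t_max_ms:
        if v >= 30.0:             # spike detected: record time, then reset
            spikes.append(t)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        t += dt
    return spikes
```

With these parameters and a constant input the neuron fires tonically. The "mouse-scale" claim amounts to running on the order of ten million such units (plus synaptic dynamics) in tractable time, which shows why simulating neuron counts is far easier than capturing how a real mouse brain is wired.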
RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience
Ben,

It would seem to me that a lot of the ideas in OpenCogPrime could be implemented in neuromorphic hardware, particularly if you were to intermix it with some traditional computing hardware. This is particularly true if such a system could efficiently use neural assemblies, because that would appear to allow it to allocate representational resources much more flexibly within a given amount of neuromorphic hardware. (This is one of the reasons I have asked so many questions about neural assemblies on this list.)

So if the researchers on this project have been learning some of your ideas, and some of the better speculative thinking and neural simulation that has been done in brain science --- either directly or indirectly --- it might be incorrect to say that there is no "design for a thinking machine" in SyNAPSE. But perhaps you know the thinking of the researchers involved well enough to know that they do, in fact, lack such a design, other than what they have yet to learn from progress yet to be made in their neural simulations. (It should be noted that neuromorphic hardware might be able to greatly reduce the cost of, and speed up, many types of neural simulations, increasing the rate at which they may be able to make progress with such an approach.)

ANYWAY, I THINK WE SHOULD, AT LEAST, INVITE THEM TO AGI 2009. I thought one of the goals of AGI 2009 is to increase the attention and respect our movement receives from the AI community in general and AI funders in particular.

Ed Porter

-----Original Message-----
From: Ben Goertzel [mailto:b...@goertzel.org]
Sent: Sunday, December 21, 2008 12:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke was Building a machine that can learn from experience

I know Dharmendra Modha a bit, and I've corresponded with Eugene Izhikevich, who is Edelman's collaborator on large-scale brain simulations. I've read Tononi's stuff too.
Re: [agi] Relevance of SE in AGI
Valentina,

Having written http://www.DrEliza.com, several NN programs, and a LOT of financial applications, and holding a CDP --- widely recognized in financial programming circles --- here are my comments.

The real world is a little different from the theoretical world of CS, in that people want results rather than proofs. However, especially in the financial world, errors CAN be expensive. Hence, the usual approaches involve extensive internal checking (lots of Assert statements, etc.), careful code reviews (which often uncover errors that testing just can't catch, because a tester may not think of all of the ways that a piece of code might be stressed), and code-coverage analysis to identify what has NOT been exercised/exorcised. I write AI software pretty much the same way that I have written financial software.

Note that really good internal checking can almost replace early testing, because as soon as something produces garbage, it will almost immediately get caught. Hence, just write it, throw it into the rest of the code, and let its environment test it. Initially, it might contain temporary code to display its results, which will soon get yanked once everything looks OK.

Finally, really good error handling is an absolute MUST, because no such complex application is ever completely wrung out. If it isn't fail-soft, then it probably will never make it as a product. This pretty much excludes C/C++ from consideration, but still leaves C# in the running. I prefer programming in environments that check everything possible, like Visual Basic or .NET. These save a LOT of debugging effort by catching nearly all of the really hard bugs that languages like C/C++ seem to produce in bulk. Further, when you think that your application is REALLY wrung out, you can re-compile with most of the error checking turned off to get C-like speed.
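The internal-checking-plus-fail-soft recipe above can be sketched in a few lines. This example is hypothetical (the `safe_update` name and its toy input validation are invented for illustration): invariants are asserted aggressively during development, and a failing subcomponent degrades gracefully instead of bringing the application down.

```python
def safe_update(model_state, raw_input):
    """Apply one input to model_state; return True on success.

    Internal checks (asserts) catch garbage immediately during
    development; like the re-compile-with-checks-off step described
    above, Python strips asserts when run with -O.
    """
    assert isinstance(model_state, dict), "state must be a dict"
    try:
        value = float(raw_input)                  # may raise ValueError
        assert value == value, "NaN rejected"     # internal sanity check
        model_state["last"] = value
        return True
    except (ValueError, AssertionError):
        # Fail-soft: count and continue rather than crash the whole app.
        model_state["errors"] = model_state.get("errors", 0) + 1
        return False
```

For example, feeding it "3.5" updates the state and returns True, while feeding it "junk" increments an error counter and returns False, leaving the rest of the system running.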
Note that these things can also be said for Java, but most Java implementations don't provide compilers that can turn off error checking, which cuts their speed to ~1/3 that of the other approaches. Losing 2/3 of the speed is a high price to pay for a platform.

Steve Richfield
==

On 12/20/08, Valentina Poletti jamwa...@gmail.com wrote:

I have a question for you AGIers... from your experience as well as from your background, how relevant do you think software engineering is in developing AI software and, in particular, AGI software? Just wondering... does software verification as well as correctness proving serve any use in this field? Or is this something used just for NASA and critical applications?

Valentina
Re: [agi] Relevance of SE in AGI
Great post, Steve. Thanks.