RE: [agi] rule-based NL system
AGISim will run under Windows but it takes some (lots of) effort. You also need to use the right versions of the dependent software, otherwise you have no chance! It is currently a bit of a black art. I plan on spending some time over this coming month bringing AGISim up to date so it is 'easier' to get running under Windows. I'll post when I have something available.

Tony Lofthouse

From: James Ratcliff [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 04, 2007 10:00 PM
To: agi@v2.listbox.com
Subject: Re: [agi] rule-based NL system

Yeah, I am trying to get this to run, but no luck yet; I wish it wasn't only Linux-based. But even a general 3D graphical app is not that hard to write, I have done a few, and I am also looking at something like using a Second Life interface, as much of the graphics and interface design has already been done, and there is a rich environment and interface there that could be built upon. I also wrote a bot for World of Warcraft, though I don't believe that environment is rich enough for the full interactions needed by an AGI.

Once you could get to the level of telling the AGI to do something like "Fill up that bucket with water", having it respond with "Don't know how, please show me", and then being able to use your character to specifically show it how to do tasks, you would be in a good position to have a teachable robot that could then generalize on these tasks to learn how to do many different things.

James Ratcliff

YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

On 5/4/07, James Ratcliff [EMAIL PROTECTED] wrote: The point of most of this is that humans and an AI would need to construct an imaginary world environment in their mind. Most people make a typical elephant and a typical chair and then interact the two as directed. A blind person still gets her information from experience...
if it reads about an elephant, it probably says a big animal the size of a car, and her experience lets her know about cars and animals, and she has sat in chairs and knows how big they are. But both of those are tied to the physical experiences that she has. You can only get so much from the words alone, unless you have an infinite database where everything possible has been described fully. But many, many things can be gathered from the text alone as well.

A VR interface would certainly be nice, but it takes a lot of time to build one and I'm not good at that area. Maybe Ben's AGI-Sim can be used by another AGI? If so we can save a lot of effort.

YKY

James Ratcliff - http://falazar.com
Looking for something...

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?
RE: [agi] My proposal for an AGI agenda
this as well so they removed pointers from their version of C!) If I wanted a language that is as low-level (everything to everyone) a language as C, I would just have used C in the first place.

Your presumption is incorrect: C# is a memory-managed language and has no pointers. Boundary overruns are not possible in C# (see caveat at end). This was a design goal built in from the beginning.

My point 6 about "simple as possible" means that some flexibility is lost so that the programmer can spend their time on the project instead of the language/programming. Memory and disk garbage collection are built in to my system, and no garbage collector could be programmed in any case, as allocating memory isn't even part of my language. Allocating memory is just not the programmer's business in my language.

That is the approach that C# takes. There is no memory allocation! It is all automatic. The garbage collector is a very sophisticated three-level algorithm, very similar in concept to the design of the Eiffel language, if you are familiar with that.

There is one caveat I need to mention. In order to provide maximum flexibility for the developer, C# provides something called unsafe mode. This allows the programmer to use pointers and memory allocation if necessary. In practice this is rarely used and needs to be explicitly declared.

-- David Clark

- Original Message -
From: Tony Lofthouse [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, March 24, 2007 7:54 AM
Subject: RE: [agi] My proposal for an AGI agenda

David Clarke wrote: I have 18 points at www.rccconsulting.com/aihal.htm and an explanation for each one. Prove to me that this list of features can all be accommodated by any existing computer language and I will stop my development right now and switch.

David, your full list of requirements is completely provided by C# and .NET. See below for a point-by-point matching:

1. Object Oriented. As a modern OO language C# supports encapsulation, data hiding, polymorphism, etc., blah, blah, blah.

2. All Classes, Methods, Properties, Objects can be created and changed easily by the program itself. .NET provides in-depth reflection capabilities that provide this capability. You can also construct code at run time using various approaches, from high-level code down to MSIL.

3. Efficiently run many programs at the same time. There is in-depth support for multitasking at the process and thread level. This includes multithreaded debugging. This is in part dependent on the OS, but that's not an issue with Windows or Linux.

4. Fully extensible. .NET is fully extensible through the addition of class libraries.

5. Built-in IDE (Interactive Development Environment) that allows programs to be running concurrently with program edit, compile and run. A number of IDEs are available. The Microsoft IDE supports real-time update of variables during debug sessions. As well as the language being fully extensible, the IDE is also fully extensible.

6. As simple as possible. This is of course an unknown. That said, C# has been designed to be as clean a language as possible. It is type safe, has very good garbage collection, and a minimal syntax that is very familiar to C/C++ and Java programmers.

7. Quick programming turnaround time. If you mean compile time, then C# compiles in a JIT environment so is very quick, especially for incremental builds.

8. Fast where most of the processing is done. C# is almost as efficient as C++. Where speed-critical components are needed you can drop into unmanaged mode for high performance.

9. Simple hierarchy of Classes and of Objects. What is a simple hierarchy? If you mean the library classes, then it is as simple as you want to make it. There is a rich set of class libraries that you can utilise, but you don't have to.

10. Simple Class inheritance. C# uses single inheritance rather than multiple inheritance. This greatly simplifies class design. Interfaces are provided to support those cases where multiple inheritance would have been used. IMO this is a much better approach.

11. Simple external file architecture. This is dependent on the OS, but at the simplest you have text files; at the other extreme you have an RDBMS.

12. Finest possible edit and compile without any linking of object modules. There are no object modules as such in .NET. Classes are grouped into namespaces and then libraries. Each library is a .DLL which is directly callable from the code.

13. Scalable to relatively large size. C# is industrial strength. There are no limitations on current hardware. You can develop fully distributed apps across multiple domains if you can afford the hardware :-)

14. Built-in SQL, indexes, tables, lists, stacks and queues. Microsoft's version of C# ships with the SQL Server RDBMS (single user, 3 connections), and all common data structures are available as class libraries. SQL syntax
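Point 2 above is the most AGI-relevant claim in the list: code that inspects and constructs code at run time. The mechanism Tony describes is .NET reflection; for readers who want to see the shape of the idea without a .NET toolchain, here is a minimal analogue using Python's introspection (not C# and not the .NET API; the class and method names are invented for illustration):

```python
# Minimal sketch of run-time reflection: inspect a class's members, then
# build new code from a string and attach it as a method at run time.
# This is a Python analogue of the .NET reflection idea, not .NET itself.

class Agent:
    def greet(self):
        return "hello"

# Inspect: enumerate the public callable members of the class.
methods = [name for name in dir(Agent)
           if callable(getattr(Agent, name)) and not name.startswith("_")]

# Construct code at run time and attach it as a new method.
src = "def shout(self):\n    return self.greet().upper()"
namespace = {}
exec(src, namespace)
setattr(Agent, "shout", namespace["shout"])

a = Agent()
print(methods)    # ['greet']
print(a.shout())  # HELLO
```

The .NET equivalents would be System.Reflection for the inspection half and run-time code generation (down to MSIL) for the construction half, as Tony notes.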
RE: [agi] /. [Unleashing the Power of the Cell Broadband Engine]
Are any of you guys familiar with the SAI architecture? http://iris.usc.edu/~afrancoi/sai/ I guess you could classify it as an async message-passing model (sort of). It has some nice features and may be a good fit for the CBE platform. Is there anyone familiar with SAI and CBE who has a view on this?

-Original Message-
From: Eugen Leitl [mailto:[EMAIL PROTECTED]]
Sent: 27 November 2005 7:31 PM
To: agi@v2.listbox.com
Subject: Re: [agi] /. [Unleashing the Power of the Cell Broadband Engine]

On Sun, Nov 27, 2005 at 10:21:07AM -0800, [EMAIL PROTECTED] wrote:

> In general, I think it's fair to say that there's a lot of concern in the game dev community about how useful the Cell will really be. It will certainly have far less effective, usable power than the raw specs suggest.

Repeat after me: it depends on your problem. Some (many) codes will bite. Some will run like rabid foxes on meth.

> There are a couple of specific issues:
> - Multithreading is really hard to work with. Even for a programmer with extensive experience, multithreaded code is simply slower to write and harder to debug than single-threaded code. This is a fundamental problem that is largely unfixable by tools and libraries.

Yes, people are lousy at parallelism. But it doesn't matter: the easy payoffs from ramping up the clock of single cores are past. It's time to dive into the parallel programming model. If you can't state your problem in terms of asynchronous message passing (not just threads), you've got a problem on your hands that will only get worse with time.

> - The Cell programming model is asymmetrical (i.e. not all processors are identical), which obviously makes things harder.

The 8 SPEs are homogeneous all right. The main CPU is just the caretaker: it runs your vanilla OS and feeds the SPEs (or sets them up to do the work by themselves, and just twiddles thumbs).

> - The individual processors in the Cell are relatively underpowered by today's standards.
Not if you consider in-register SIMD performance, and codes running directly out of their individual on-die memory areas.

> - Perhaps most critically, the SPEs just don't have enough memory to actually do much real work. There are very few interesting problems that fit in 256K, and the performance penalty for accessing external memory is very substantial.

You can assume 2 MByte of on-die memory. And how is that different from today's machines?

> The first three points above apply to a somewhat lesser extent to the Xbox 360 processor also.

The Xbox is a closed shop. It doesn't matter which hardware it runs; it will never become available to generic programming projects.

> Although it's clearly true that parallelism is the future, don't underestimate the difficulties it will bring. You can make a pretty strong case that one effect of the move to parallelism will be to substantially raise the skill bar for professional programmers.

It doesn't matter: you will have to deal with the problem sooner or later anyway. The longer you wait, the worse it's going to get.

> A mediocre programmer who might be able to produce useful code in the old world will likely be a net liability in the modern world, simply because parallel architectures make it so easy to make catastrophic mistakes.

AI is not a domain well suited to mediocre programmers.

> Finally, remember that consoles are approximately closed platforms, meaning that you can't just buy one and start writing code for it.

This is only true as long as Sony doesn't ship a Linux kit (it took them a while to ship one for the PS2, but I presume they learned something since). IBM's Cell-based blades (assuming they ever ship) will probably not be priced competitively, agreed. The current sweet spot is AMD64, especially dual-core. By the time the Cell ships it will be cheap DDR2 dual cores, or even DDR4 quad cores.
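Eugen's prescription, stating the problem as asynchronous message passing rather than shared-state threading, can be made concrete with a tiny sketch: a worker that owns its own state and communicates only through queues, the way an SPE consumes work fed to it by the caretaker CPU. This is an illustrative Python stand-in, not Cell code:

```python
import queue
import threading

# Async message-passing style: the worker owns its state and talks to the
# rest of the program only via message queues, never shared memory.

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down cleanly
            break
        outbox.put(msg * msg)    # the "compute kernel" work

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in range(4):
    inbox.put(n)                 # feed work asynchronously, like the
inbox.put(None)                  # caretaker CPU feeding an SPE
t.join()

results = [outbox.get() for _ in range(4)]
print(results)  # [0, 1, 4, 9]
```

Because the only coupling between the two sides is the pair of queues, the worker could be moved to another process, or another processor, without changing the program's logic, which is the point of the model.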
--
Eugen* Leitl | leitl | http://www.leitl.org
ICBM: 48.07100, 11.36820
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
RE: [agi] Learning friendliness/morality in the sand box
It seems to me that in order for an AGI to 'want' to take responsibility for a virtual pet, it would need to have a sense of morality already in place. Pets seem to be a very human concept; I am not aware of any other animal that has this type of relationship. I believe a high degree of social intelligence would need to exist before 'pet' became a meaningful concept (in human terms).

You could of course hard-code the goal - maximally satisfy your pet:

pet hungry - feed it
pet bored - play with it (any action that changes the bored state to false)
pet tired - put it to sleep
etc.

Whilst the AGI could learn a bunch of rules and procedures about how to keep a pet maximally satisfied, I don't see how this would lead to a sense of morality or even a social awareness. For me the concept of morality is grounded in the ability to understand suffering. This is a horribly fuzzy concept and is very sensitive to the specific situation at hand.

Putting aside the idea of a 'universal morality', it seems that morality is a relative concept; therefore there are many possible moralities an AGI could learn. Some cultures/individuals would regard the keeping of pets as morally reprehensible. Again, certain cultures find it totally acceptable to hunt whales and dolphins whilst others take the opposite stance. Many vegetarians find it unacceptable to kill any animals for food, whilst others are quite happy to tuck into their roast lamb/beef/pork/chicken/etc.

So where do you begin to teach an AGI? Maybe where you start is in fact with politics rather than morality!
Tony

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Philip Sutton
Sent: 18 June 2004 3:24 PM
To: [EMAIL PROTECTED]
Subject: [agi] Learning friendliness/morality in the sand box

Maybe a good way for AGIs to learn friendliness and morality, while still in the sand box, is to:

- be able to form friendships - affiliations with 'others' (virtual 'others' in the sand box) that go beyond self-interest
- have responsibility for caring for virtual 'pets'

I guess this is part of a broader program to build AGIs' social skills. One value of targeting these two forms of relationship is that it raises issues of how to construct the sandbox and 'who' should be in it. It also raises the issue of how to make it easy for AGIs to form these relationships and how to structure useful learning.

The value that I see in having an AGI look after a virtual pet is that it gets the AGI used to recognising:

- the existence of others
- the needs of others
- the positive things that the AGI could do for the other
- the need to avoid doing damage while trying to do good, etc.

I could well imagine that the first virtual pet could be *very simple* - maybe a simple virtual version of the Tamagotchi 'pets'. It might just be a blob with inputs, outputs and some internal processes/state requirements. So if the AGI doesn't diligently work on maintaining the inputs and handling the outputs and keeping the environmental conditions OK (e.g. some arbitrary factor, but it could be modelled on temperature or protection from rain or ...whatever) then the pet will decline in health/happiness or could die. The AGI would need to be taught and/or given a built-in empathy to help it avoid negative states for the pet.

Care would need to be exercised to make sure that the AGIs don't learn or get programmed to have sharp lines of demarcation between the 'others' it should care for and all other 'others'.
(As far as I can see, most of the really nasty things that people do arise when they place others into the "not to be empathised with" category, i.e. others are put in the instrumental-object category or the enemy category.)

Cheers, Philip
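Tony's hard-coded rules and Philip's Tamagotchi-style blob together pin down what the simplest version of this sandbox would be: a pet whose needs grow each tick and decay its health when neglected, plus a caretaker firing condition-action rules. A toy sketch; every state name, rate and threshold here is invented for illustration:

```python
# Toy Tamagotchi-style pet plus a hard-coded rule-based caretaker.
# Needs grow each tick; unmet needs above a threshold hurt the pet's
# health. All names, rates and thresholds are invented for illustration.

ACTIONS = {"hungry": "feed", "bored": "play", "tired": "sleep"}

class Pet:
    def __init__(self):
        self.needs = {"hungry": 0, "bored": 0, "tired": 0}
        self.health = 10

    def tick(self):
        for need in self.needs:
            self.needs[need] += 1          # needs grow over time
        if any(v > 3 for v in self.needs.values()):
            self.health -= 1               # neglect hurts the pet

def caretaker(pet):
    """Fire the rule for the most pressing need; acting resets it."""
    need = max(pet.needs, key=pet.needs.get)
    pet.needs[need] = 0
    return ACTIONS[need]

pet = Pet()
log = []
for _ in range(6):
    pet.tick()
    log.append(caretaker(pet))

print(log)         # ['feed', 'play', 'sleep', 'feed', 'play', 'sleep']
print(pet.health)  # 10 -- a diligent caretaker keeps the pet healthy
```

The point of Tony's objection survives the sketch: the caretaker above "keeps the pet maximally satisfied" with no model of suffering at all, which is exactly why rule-following alone would not amount to morality.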
RE: [agi] Unmasking Intelligence.
It seems that your 'layered hierarchy' approach is very similar to Rod Brooks' subsumption architecture. This has been used to good effect in generating natural behaviours in robotics, but has not been very useful in developing higher-level cognition. Or maybe you are suggesting something else?

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Alan Grimes
Sent: 17 January 2003 08:09
To: [EMAIL PROTECTED]
Subject: [agi] Unmasking Intelligence.

I seem to have fallen into the list-ecological niche of good discussion starter. In that capacity I write the following.

I attended my first session of CS480: Introduction to Artificial Intelligence, this morning, and it got me thinking about something that has started to bug me... What if one of the techniques already in use was the real solution, but we just don't know it 'cuz it has never been integrated into a system that would behave in a way we could recognise?

In studying neuroscience I have learned that human intelligence has evolved as a layer on top of lower behavior generators. Instead of a simple model like:

SENSES -> BRAIN -> BEHAVIOR

we rather have a hierarchy of systems stacked on top of each other:

SENSES -> SPINAL CORD -> BEHAVIOR
     \/ /\
   Brain stem
     \/ /\
    Midbrain
     \/ /\
   Hippocampus
     \/ /\
 primitive areas of the neocortex
     \/ /\
 higher areas of the neocortex

[eek, drew it upside down, not worth fixing]

The book that I have been reading uses the terminology "modulate" to describe this process. It is saying that each system in the stack modulates the layer above/below it. In this context, "modulate" means to add information/complexity to. This creates a problem for the AI researcher in that it is not at all clear what the top layers do, because their signal is obscured by the functions of the lower centers... Hopefully recent work in growing cortical tissues on silicon will help elucidate what the heck is going on! =P

The other side of this problem is equally interesting.
Let's say that we were perfectly successful in creating an artificial mind-matrix and in training this mind-matrix in numerous radically different programming paradigms. Let's say it knew Smalltalk, Pascal, Lisp, Prolog, Assembler, and Forth. Let's say that it also has a general knowledge of math, computer organization, and algorithmics. We intend to direct this matrix to do either of the following:

1. Optimize a program in language A.
2. Translate a program in language A into a specific language A' from the list above.

This is an example of a motivation problem. We need a way to motivate the system to do the translation even though there is no way to specifically instruct the matrix to do so. What is needed is something akin to the lower levels in the above diagram: a way to organize the behavior in the matrix in a goal-oriented fashion.

The term I have given this process of writing programs to trigger events in a cognitive matrix is "cybernetic programming", where the program is a set of concepts rather than a specific program as in traditional programming paradigms. Cybernetic programs can use fragments of a cognitive matrix as well, as we see in the limbic association areas of the brain such as the peninsular cortex, the cingulate gyrus, and the parahippocampal gyrus.

In my thinking about this subject I have come up with the following principles of cybernetic programming (and mind-organization in general), which probably aren't of any use. =P

SYMMETRY: All output channels are associated with at least one input/feedback mechanism.

SEMANTIC RELATIVITY: The primary semantic foundation of the system is the input and output systems. (Almost everything is expressed in terms of input and output at some level.)

TEMPORALITY: Both inputs and outputs have a semantically significant temporal component.
Recently, I've added this other observation to my folder full of handwritten notes:

CHANNEL INDEPENDENCE PROBLEM: The naive implementation of abstraction (semantic relativity), such as FORTH, tends to be strongly bound to an exact pattern match, or a close match, on a specific input channel. The brain is clearly more flexible than this, so there must be a way to express abstractions in the form of an independent relation that can be applied to any input or output channel. (This is the heart-center of pattern recognition, etc...)

--
Linux programmers: the only people in the world who know how to make Microsoft programmers look competent. http://users.rcn.com/alangrimes/
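The layered "modulation" stack Alan draws, and the subsumption comparison Tony makes, can be sketched in a few lines: each layer receives the behaviour proposed by the layer below and may pass it through, adjust it, or override it. This is a loose illustration of the layering idea, not Brooks' actual architecture; all layer names and signals are invented:

```python
# Minimal layered-control sketch: each layer "modulates" the behaviour
# proposed by the layer below -- pass through, adjust, or override.
# Layer names and signals are invented for illustration.

def spinal_reflex(senses):
    # Lowest layer: a hard-wired reflex nothing above can override here.
    return "withdraw" if senses.get("pain") else "idle"

def midbrain(senses, lower):
    # Modulate: hunger redirects idling, but never a pain reflex.
    if lower == "idle" and senses.get("hunger"):
        return "forage"
    return lower

def cortex(senses, lower):
    # Higher layer adds planning on top of the lower behaviour.
    if lower == "forage" and senses.get("food_seen"):
        return "approach_food"
    return lower

def behave(senses):
    b = spinal_reflex(senses)
    b = midbrain(senses, b)
    return cortex(senses, b)

print(behave({"pain": True}))                       # withdraw
print(behave({"hunger": True}))                     # forage
print(behave({"hunger": True, "food_seen": True}))  # approach_food
```

The structure also makes Alan's point visible: looking only at `behave`'s output, it is hard to tell which layer contributed what, since each higher layer's signal is mixed with the ones below it.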
RE: Games for AIs (Was: [agi] TLoZ: Link's Awakening.)
Michael,

You wrote:

> Tony Lofthouse: I've heard you are working on the shape-world interface. Have you considered what games we might play in it? Ideas?

To clarify this point: I am currently developing a 2D input capability for Novamente. It is a very crude form of vision that allows the presentation of (x, y) time series to the system. This should not be confused with the shape-world interface mentioned above. Whilst one may lead to the other, shape-world is not the current focus.

Having said this, I do have a couple of comments relating to AI games. Those of you who have had the opportunity to raise children will no doubt be well aware that children don't play TLoZ (or a contemporary equivalent) until well into their childhood. There are many stages of learning before a child is capable of this level of sophistication.

One of the first games that young children play is the categorisation game, i.e. what shape is this?, what colour is this?, how many sides?, etc. I would expect to use the 2D world and shape-world subsequently for the same purpose. This is followed by the comparison game, i.e. is this big?, is this small?, which is bigger?, etc. Then you have the counting game (sort of obvious). Then the relationship game, i.e. above, below, inside, outside. There are lots of these types of games! Then you move on to the reasoning game, i.e. what comes next?, what is missing?, what is the odd one out?, etc.

Now the child is ready to combine learning from these different games and moves on to story telling, both listening to stories and then telling them. Then there are several more years of honing these key skills whilst increasing the level of world knowledge and social understanding. Finally the child is ready to play TLoZ!

So as you can see, I think there is a lot to do before you get to play TLoZ with your baby AGI. That is the purpose of 2D World and then Shape-World.
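The earliest games in the progression above are simple enough to pin down as code. A toy sketch of the first three (categorisation, comparison, counting) over symbolic scene objects; the object descriptions are invented stand-ins for whatever the 2D world would actually present:

```python
# Toy versions of the first three children's games in the progression:
# categorisation, comparison, counting. Scene objects are invented
# stand-ins for percepts from a 2D world.

def categorise(obj):                 # "what shape is this?"
    return obj["shape"]

def compare(a, b):                   # "which is bigger?"
    return a if a["size"] > b["size"] else b

def count(objects, shape):           # the counting game
    return sum(1 for o in objects if o["shape"] == shape)

scene = [
    {"shape": "circle", "size": 3},
    {"shape": "square", "size": 5},
    {"shape": "circle", "size": 1},
]

print(categorise(scene[0]))           # circle
print(compare(scene[0], scene[1]))    # the square object
print(count(scene, "circle"))         # 2
```

Even at this toy level, the graduation is visible: counting presupposes categorisation (you must recognise circles before counting them), which is Tony's point about ordering the games.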
T

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Michael Roy Ames
Sent: 12 December 2002 01:35
To: [EMAIL PROTECTED]
Subject: Games for AIs (Was: [agi] TLoZ: Link's Awakening.)

Shane,

You wrote:

> My other point is that an AGI has to be a General Intelligence. So being able to just play PacMan isn't really enough; what we would really need is a huge collection of games like this that exercised the AI's brain in all sorts of slightly different ways with different types of simple learning problems. We need somebody to build a collection of simple games with a common simple API. A standard AGI test bed of sorts.

I've been putting together ideas for AI games such as you mention, in the format of a curriculum for a seed AI. The games will be graduated in complexity, so the AI doesn't get stuck. Also the interface complexity (between the game software and the AI) will be graduated, so that the AI will not be overwhelmed with I/O. Having a graduated interface also allows the programmers to concentrate on the AI-ish part of the code up front, rather than spending a lot of time coding the interface.

The process of imagining the games, and the interfaces required by those games, has been quite instructive in regard to what specific skills need to be learned before a game can be mastered. While imagining new games comes fairly easily to me, more difficult is the correct ordering of the games so that one game builds on the lessons of the previous games. Also difficult is designing the games so that they build incrementally, allowing the AI to build/discover one cognitive skill at a time, rather than having to learn several skills before mastering a given game. The process has revealed several gaps in my thinking about what cognitive skills are needed during the development process.

The curriculum document is not yet 'in shape' enough to share, but I will post a link to it when I have a draft.
If anyone has URLs or suggestions of games that I might include, please send them to me at [EMAIL PROTECTED] (or to the list if you like :). Thanks.

Tony Lofthouse: I've heard you are working on the shape-world interface. Have you considered what games we might play in it? Ideas?

Michael Roy Ames
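Shane's "collection of simple games with a common simple API" implies a uniform interface that every game implements, so the AI-facing code never changes as games are swapped in. One possible shape for that interface, entirely an illustration and not an existing test bed:

```python
# One possible common API for a graduated AI game test bed: every game
# exposes the same reset/observe/act loop, so the player-side code is
# identical across games. Entirely illustrative, not an existing API.

class Game:
    def reset(self): ...
    def observe(self): ...     # current percept, kept simple in early games
    def act(self, action): ... # returns (reward, done)

class GuessParity(Game):
    """A trivially easy first game: say whether the shown number is even."""
    def __init__(self, numbers):
        self.numbers, self.i = numbers, 0

    def reset(self):
        self.i = 0

    def observe(self):
        return self.numbers[self.i]

    def act(self, action):
        correct = (self.numbers[self.i] % 2 == 0) == action
        self.i += 1
        return (1 if correct else 0), self.i >= len(self.numbers)

game = GuessParity([2, 3, 4])
game.reset()
total, done = 0, False
while not done:
    n = game.observe()
    reward, done = game.act(n % 2 == 0)  # a perfect "player" for this game
    total += reward

print(total)  # 3 -- one reward per correct answer
```

Because harder games implement the same three methods, the curriculum can graduate in game complexity while the loop driving the AI stays fixed, which is exactly the property Michael wants from a graduated interface.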