Re: [agi] SOTA
Peter, I'm afraid that your question cannot be answered as it is. AI is highly fragmented, which means not only that few projects aim at the whole field, but also that few even cover a subfield as you listed them. Instead, each project usually aims at a special problem under a set of special assumptions. Consequently, it is not always meaningful to compare them in functionality. For example, many people may agree that Stanley the Volkswagen represents the SOTA in robot cars, but is it SOTA in interactive robotics systems? Is it ahead of Cog? When a common-sense KB is mentioned, people will think about Cyc, but is it SOTA? If it is not, which one is? How can we compare an inference engine based on first-order predicate calculus to one based on a Bayesian net?

Of course, in each field there are projects that are more typical, more influential, or more interesting than the rest, but they are not really SOTA in the sense of being ahead of the others in functionality, since the others are usually running in different directions.

In your list, NLP may be an exception to what I said above. Since I'm not an expert in that field, I won't try to answer. By definition, integrated intelligent systems should be comparable, but clearly there is no consensus on this topic yet. ;-)

Pei

On 10/19/06, Peter Voss [EMAIL PROTECTED] wrote:

I'm often asked about state-of-the-art in AI, and would like to get some opinions. What do you regard, or what is generally regarded as, SOTA in the various AI aspects that may be, or may be seen to be, relevant to AGI? For example:

- Comprehensive (common-sense) knowledge-bases and/or ontologies
- Inference engines, etc.
- Adaptive expert systems
- Question answering systems
- NLP components such as parsers, translators, grammar-checkers
- Interactive robotics systems (sensing/actuation) - physical or virtual
- Vision, voice, pattern recognition, etc.
- Interactive learning systems
- Integrated intelligent systems
... whatever ...
I'm looking for the best functionality -- irrespective of proprietary, open-source, or academic.

Peter

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/[EMAIL PROTECTED]
Re: [agi] SOTA
- Comprehensive (common-sense) knowledge-bases and/or ontologies
Cyc/OpenCyc, WordNet, etc., but there seems to be no good way for applications to use this information and no good alternative to hand-coding knowledge.

- Inference engines, etc.
- Adaptive expert systems
A dead end. There has been little progress since the 1970s.

- Question answering systems
Google.

- NLP components such as parsers, translators, grammar-checkers
Parsing is unsolved. Translators like Babelfish have progressed little since the 1959 Russian-English project. Microsoft Word's grammar checker catches some mistakes but is clearly not AI.

- Interactive robotics systems (sensing/actuation) - physical or virtual
The Mars Rovers and the DARPA Grand Challenge (robotic auto race) are impressive, but we clearly have a long way to go before your car drives itself.

- Vision, voice, pattern recognition, etc.
Face recognition systems are difficult to assess: because of their use in security, accuracy rates are secret. I believe they have been oversold. Voice recognition is limited to words and short phrases until we develop better language models with AI behind them. A keyboard is still faster than a microphone.

- Interactive learning systems
- Integrated intelligent systems
Lots of theoretical results, but no real applications.

-- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] SOTA
On 10/19/06, Matt Mahoney wrote:
- NLP components such as parsers, translators, grammar-checkers
Parsing is unsolved. Translators like Babelfish have progressed little since the 1959 Russian-English project. Microsoft Word's grammar checker catches some mistakes but is clearly not AI.

http://www.charlotte.com/mld/charlotte/news/nation/15783022.htm
American soldiers bound for Iraq equipped with laptop translators
Called the Two Way Speech-to-Speech Program, it's a translator that uses a computer to convert spoken English to Iraqi Arabic and vice versa.

If it is life-or-death, it must work pretty well. :)

I believe this is based on the IBM MASTOR project.
http://domino.watson.ibm.com/comm/research.nsf/pages/r.uit.innovation.html

MASTOR's innovations include: methods that automatically extract the most likely meaning of the spoken utterance and store it in a tree-structured set of concepts like actions and needs; methods that take the tree-based output of a statistical semantic parser and transform the semantic concepts in the tree to express the same set of concepts in a way appropriate for another language; methods for statistical natural language generation that take the resultant set of transformed concepts and generate a sentence for the target language; generation of proper inflections by filtering hypotheses with an n-gram statistical language model; etc.

BillK
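The last MASTOR step above -- filtering generation hypotheses with an n-gram statistical language model -- is easy to sketch. This is a minimal illustration, not IBM's code: the bigram order, add-alpha smoothing, and toy corpus are my own simplifying assumptions.

```python
import math
from collections import defaultdict

def train_bigram_model(corpus):
    """Count history (unigram) and bigram frequencies over tokenized sentences."""
    unigrams = defaultdict(int)
    bigrams = defaultdict(int)
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, tok in zip(tokens, tokens[1:]):
            unigrams[prev] += 1
            bigrams[(prev, tok)] += 1
    return unigrams, bigrams

def log_score(sentence, unigrams, bigrams, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a tokenized sentence."""
    tokens = ["<s>"] + sentence + ["</s>"]
    total = 0.0
    for prev, tok in zip(tokens, tokens[1:]):
        num = bigrams[(prev, tok)] + alpha
        den = unigrams[prev] + alpha * vocab_size
        total += math.log(num / den)
    return total

def best_hypothesis(hypotheses, unigrams, bigrams, vocab_size):
    """Keep the candidate sentence the language model finds most fluent."""
    return max(hypotheses,
               key=lambda h: log_score(h, unigrams, bigrams, vocab_size))
```

Given even a tiny English corpus, a fluent candidate like "we need water" outscores a scrambled one like "water need we"; that ranking is all the filtering step needs from the language model.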
Re: [agi] SOTA
Hi Peter,

I think in all of the categories you listed there should be a lot of progress, but they will hit a ceiling because of the lack of an AGI architecture. It is very clear that vision requires AGI to be complete. So does NLP. In vision, many objects require reasoning to recognize. NLP also requires reasoning to interpret metaphors, which are beyond the scope of current parsers.

So the goal is for vision/NLP researchers to work within some AGI framework. Unfortunately a standard framework is unavailable now. We may start such a framework; laying out the common knowledge representation would be most important. This also shows the need for modularity and divide-and-conquer. AGI sub-problems like vision and NLP are themselves pretty big projects, so it may be unwise to try to solve them all alone.

I think other candidates that have the potential to become AGI are: Cyc, Soar, ACT-R, and other less known cognitive architectures.

YKY
Re: [agi] SOTA
----- Original Message -----
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 19, 2006 11:43:46 AM
Subject: Re: [agi] SOTA

On 10/19/06, Matt Mahoney wrote:
- NLP components such as parsers, translators, grammar-checkers
Parsing is unsolved. Translators like Babelfish have progressed little since the 1959 Russian-English project. Microsoft Word's grammar checker catches some mistakes but is clearly not AI.
http://www.charlotte.com/mld/charlotte/news/nation/15783022.htm

I think the problem will eventually be solved. There was a long period of stagnation after the 1959 Russian-English project, but I think this period will soon end thanks to better language models due to the recent availability of large text databases, fast hardware, and cheap memory. Once we solve the language modeling problem, we will remove the main barrier to many NLP problems such as speech recognition, translation, OCR, handwriting recognition, and question answering. Google has made good progress in this area using statistical modeling methods and was top ranked in a recent competition. Google has access to terabytes of text in many languages and a custom operating system for running programs in parallel on thousands of PCs.

Here is Google's translation of the above article into Arabic and back to English. But as you can see, the job isn't finished.

American soldiers heading to Iraq with a laptop translators from Stephanie Hinatz daily newspapers (Newport News,va. (ethnic)نورفولكVa. army-star trip now using similar instrument in Iraq to help the forces of language training without contact with Iraqi civilians and the training of the country's emerging police and military forces. the name of a double discourse to address Albernamjoho translator, which uses computers to convert spoken English Iraqi pwmound and vice versa. while the program is still technically in the research and development stage,Norfolk-based U.S.
Joint Forces Command,in conjunction with the Defense Advanced Research projects Agency,some models has been sent to Iraq, 70 troops is used in tactical environments to evaluate its effectiveness. and so far is fine and said Wayne Richards,Commander leadership in the implementation section. the need for such a device for the first time in April 2004 when the joint forces command received an urgent request from commanders on the ground in Abragherichards. soldiers on the ground needed to improve communication with the Iraqi people. But because of the shortage of linguists and translators throughout the Department of Defense do not come from the difficult,even some of the forces of the so-called most important work in Iraq today in Iraq, the training of police and military forces. get those troops trained and capable of maintaining the security of the country itself is a reminder of return for service members to continue der inside and outside the war zone. experts are trying to develop this kind of technical translation for 10 years,He said that Richards. today, in its current form,The translator is the rugged laptop with the plugs are two or loudspeakers and Alsmaatrichards pointing to a model and convert. It is also easy to use Talking on the phone,as evidenced shortly after the Norfolk demonstration Tuesday. I tell you, an Iraqi withdrawal on a computer. you put the microphone up to your mouth. when he said :We are here to provide food and water for your family, You held by the E key to security in a painting keys. you,I wrote to you the text of what we discussed to delight on the screen. you wipe the words to make sure you get exactly. If you can change it manually. when you are convinced you to the t key to the interpretation and sentence looming on the screen once Achrihzh time in Arab Iraq. the computer also says his loud speakers through. the process is the same Balanceof those who did not talk to you. 
I repeat what you have and the Arab computer will spit on you, the words in the English language. as do translator rights,the program assumes some meanings. not 100% Richards. when I ask,For example,Can the newspaper today, the Arab-language Alanklizihaltrgmeh direct Can the newspaper today. because in any act made in every conversation with the translator is taken. any translation is not due to the past program. Defense Language Institute in California also true of all the translations and Richards. now,because of its size,the best place to use the translator is at the center of command and control or a classroom. It is unlikely that the average Navy will be overseeing the cart with 100 pounds of equipment to implement that attacks in Baghdad, in Sadr City. We hope if the days will be small enough that the sergeant to be implemented in a skirt. Think about it and Richards. sergeant beating on the door of the house formulateseen in Fallujah. a woman answers the door. The soldier's weapon. because it is afraid. the soldier immediately to the effects translator
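The statistical approach Matt describes is usually framed as a noisy channel: choose the English reading e that maximizes P(e) * P(f|e), combining a language model with a translation model. Here is a toy word-level sketch of that decision rule; the Arabic entries and every probability are made-up illustrative numbers, not data from Google's or anyone's real system.

```python
import math

# Toy translation model P(foreign | english) -- invented numbers for illustration.
trans_prob = {
    ("ماء", "water"): 0.9, ("ماء", "sea"): 0.1,
    ("طعام", "food"): 0.8, ("طعام", "meal"): 0.2,
}

# Toy English language model P(e) -- also invented.
lm_prob = {"water": 0.05, "sea": 0.01, "food": 0.04, "meal": 0.005}

def decode_word(foreign, candidates):
    """Noisy-channel decision rule: argmax over e of log P(e) + log P(foreign | e)."""
    return max(candidates,
               key=lambda e: math.log(lm_prob[e]) + math.log(trans_prob[(foreign, e)]))
```

Real systems apply the same rule over phrase tables with millions of entries and n-gram language models trained on the terabyte-scale text Matt mentions; the round-trip garbling above shows what happens when both models are still too weak.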
Re: [agi] SOTA
Matt Mahoney wrote:
From: BillK [EMAIL PROTECTED]
Parsing is unsolved. Translators like Babelfish have progressed little since the 1959 Russian-English project. Microsoft Word's grammar checker catches some mistakes but is clearly not AI.
I think the problem will eventually be solved. There was a long period of stagnation since the 1959 Russian-English project but I think this period will soon end thanks to better language models due to the recent availability of large text databases, fast hardware, and cheap memory. Once we solve the language modeling problem, we will remove the main barrier to many NLP problems such as speech recognition, translation, OCR, handwriting recognition, and question answering.

Sorry, but IMO large databases, fast hardware, and cheap memory ain't got nothing to do with it. Anyone who doubts this should get a copy of Pim Levelt's Speaking, read and digest the whole thing, and then meditate on the fact that that book is a mere scratch on the surface (IMO a scratch in the wrong direction, too, but that's neither here nor there). I saw a recent talk about an NLP system which left me stupefied that so little progress has been made since 20 years ago.

Having a clue about just what a complex thing intelligence is has everything to do with it.

Richard Loosemore
Re: [agi] SOTA
On 10/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Sorry, but IMO large databases, fast hardware, and cheap memory ain't got nothing to do with it. Anyone who doubts this get a copy of Pim Levelt's Speaking, read and digest the whole thing, and then meditate on the fact that that book is a mere scratch on the surface (IMO a scratch in the wrong direction, too, but that's neither here nor there). I saw a recent talk about an NLP system which left me stupified that so little progress has been made since 20 years ago. Having a clue about just what a complex thing intelligence is, has everything to do with it.

Most normal speaking requires relatively little 'intelligence'. Adults who take young children on foreign holidays are amazed at how quickly the children appear to be chattering away to other children in a foreign language. They manage it for several reasons:
1) they don't have the other interests and priorities that adults have.
2) they use simple sentence structures and smallish vocabularies.
3) they discuss simple subjects of interest to children.

The new IBM MASTOR system seems to be better than Babelfish. IBM are just starting on widespread commercial marketing of the system, aiming at business travellers, apparently.

MASTOR project description:
http://domino.watson.ibm.com/comm/research.nsf/pages/r.uit.innovation.html
Here is a PDF file describing the MASTOR system in more detail:
http://acl.ldc.upenn.edu/W/W06/W06-3711.pdf
Here is a 12MB mpg download of the system in use. Simple speech, but impressive.
http://www.research.ibm.com/jam/speech_to_speech.mpg

BillK
Re: [agi] SOTA
(Excellent list there, Matt.)

Although Pei Wang makes a good point that the fragmentation of AI does make it difficult to compare projects, it is interesting to note the huge differences in the movements in different narrow-AI fields. As has already been mentioned, it is interesting to compare the way that progress is very slow in areas such as NLP and expert systems, whereas there is significant, albeit gradual, progress in physical interaction systems. For instance, the soccer-bots get better every year, and cars can now finish DARPA Grand Challenge-like events in reasonable time... (I personally think that we're fast approaching a critical point where the technology is just good enough to attract more cash and hence more improvement; although meatbags will be better traffic-drivers for a while yet, physical interaction systems can now perform well enough for many applications.)

Although the question What is State-of-the-Art? won't attract an incontrovertibly good answer, it prompts a lot of bloody good questions that can be answered usefully.

-- Olie