Sorry, but this is no proof that a natural language understanding system is necessarily able to solve the equation x * 3 = y for arbitrary y.
1) You have not shown that a language understanding system must necessarily(!) have made statistical observations about the equation x * 3 = y.

2) You give only a few examples. To prove the claim, you have to prove it for every(!) y.

3) You apply rules such as 5 * 7 = 35 -> 35 / 7 = 5, but you have not shown
   3a) that a language understanding system necessarily(!) has these rules, or
   3b) that a language understanding system necessarily(!) can apply such rules.

In my opinion, a natural language understanding system must have a great deal of linguistic knowledge. Furthermore, a system that can learn natural languages must be able to gain linguistic knowledge. But neither kind of system necessarily(!) has the ability to *work* with this knowledge in the way that is essential for AGI. For this reason, natural language understanding is not AGI-complete at all.

-Matthias

-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 21, 2008 05:05
To: agi@v2.listbox.com
Subject: [agi] Language learning (was Re: Defining AGI)

--- On Mon, 10/20/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

> For instance, I doubt that anyone can prove that any system which
> understands natural language is necessarily able to solve the simple
> equation x * 3 = y for a given y.

It can be solved with statistics. Take y = 12 and count Google hits:

string    count
------    -----
1x3=12      760
2x3=12     2030
3x3=12     9190
4x3=12    16200
5x3=12     1540
6x3=12     1010

More generally, people learn algebra and higher mathematics by induction, generalizing from lots of examples:

5 * 7 = 35 -> 35 / 7 = 5
4 * 6 = 24 -> 24 / 6 = 4
etc...
a * b = c -> c / b = a

It is the same way we learn grammatical rules, for example converting active to passive voice and applying the pattern to novel sentences:

Bob kissed Alice -> Alice was kissed by Bob.
I ate dinner -> Dinner was eaten by me.
etc...
SUBJ VERB OBJ -> OBJ was VERB by SUBJ.
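The "solve it with statistics" idea above can be sketched in a few lines: given hit counts for each candidate completion of x*3=12, pick the x whose equation string is most frequent. The counts below are hard-coded from Matt's table purely for illustration; a real system would estimate them from a corpus or search engine.

```python
# Statistical approach to x * 3 = 12: among candidate values of x,
# choose the one whose equation string occurs most often in text.
# Counts are taken from the table in the post, not fetched live.
hits = {1: 760, 2: 2030, 3: 9190, 4: 16200, 5: 1540, 6: 1010}

def solve_by_frequency(counts):
    """Return the candidate x with the highest observed count."""
    return max(counts, key=counts.get)

print(solve_by_frequency(hits))  # -> 4, since "4x3=12" has the most hits
```

Note that the method gives the right answer (x = 4) only because correct equations are written far more often than incorrect ones; the signal, not any built-in arithmetic, carries the solution.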
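The induced rules above are just templates. A minimal sketch of applying both of them, under the deliberately naive assumption that a sentence is exactly three whitespace-separated tokens (SUBJ VERB OBJ) and that the verb form can be reused as a participle:

```python
def invert_product(a, b):
    """Apply the induced algebra rule: a * b = c  ->  c / b = a."""
    c = a * b
    return c / b  # recovers a

def to_passive(sentence):
    """Apply the template SUBJ VERB OBJ -> OBJ was VERB by SUBJ.
    Naive: assumes exactly three words; no real morphology."""
    subj, verb, obj = sentence.split()
    return f"{obj} was {verb} by {subj}"

print(invert_product(5, 7))            # -> 5.0
print(to_passive("Bob kissed Alice"))  # -> Alice was kissed by Bob
```

Both functions illustrate the same point: once the pattern is generalized from examples, applying it to a novel instance is mechanical.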
In a similar manner, we can learn to solve problems using logical deduction:

All frogs are green. Kermit is a frog. Therefore Kermit is green.
All fish live in water. A shark is a fish. Therefore sharks live in water.
etc...

I understand the objection to learning math and logic in a language model instead of coding the rules directly. It is horribly inefficient. I estimate that a neural language model with 10^9 connections would need up to 10^18 operations to learn simple arithmetic like 2 + 2 = 4 well enough to get it right 90% of the time. But I don't know of a better way to learn how to convert natural language word problems into a formal language suitable for entering into a calculator at the level of an average human adult.

-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com
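The syllogisms in the thread above can be sketched as a tiny forward-chaining step: facts map an entity to a category, rules map a category to a property, and deduction is just chaining the two lookups. The dictionary representation and names are illustrative assumptions, not anything from the posts.

```python
# Tiny deduction sketch for the syllogisms in the thread.
# Facts: entity -> category ("Kermit is a frog", "a shark is a fish").
is_a = {"Kermit": "frog", "shark": "fish"}
# Rules: category -> property ("all frogs are green", "all fish live in water").
implies = {"frog": "green", "fish": "lives in water"}

def deduce(entity):
    """Chain entity -> category -> property, mirroring the syllogisms."""
    return implies[is_a[entity]]

print(deduce("Kermit"))  # -> green
print(deduce("shark"))   # -> lives in water
```

The point of the sketch is that the deduction rule itself is trivial once learned; the hard part, as the thread debates, is whether a language-learning system necessarily acquires and applies such rules at all.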