--- On Mon, 10/20/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

> For instance, I doubt that anyone can prove that any system which
> understands natural language is necessarily able to solve the simple
> equation x * 3 = y for a given y.
It can be solved with statistics. Take y = 12 and count Google hits:

  string   count
  ------   -----
  1x3=12     760
  2x3=12    2030
  3x3=12    9190
  4x3=12   16200
  5x3=12    1540
  6x3=12    1010

More generally, people learn algebra and higher mathematics by induction,
generalizing from lots of examples:

  5 * 7 = 35  ->  35 / 7 = 5
  4 * 6 = 24  ->  24 / 6 = 4
  etc...
  a * b = c   ->  c / b = a

It is the same way we learn grammatical rules, for example converting
active voice to passive and applying the rule to novel sentences:

  Bob kissed Alice  ->  Alice was kissed by Bob.
  I ate dinner      ->  Dinner was eaten by me.
  etc...
  SUBJ VERB OBJ     ->  OBJ was VERB by SUBJ.

In a similar manner, we can learn to solve problems using logical deduction:

  All frogs are green. Kermit is a frog. Therefore Kermit is green.
  All fish live in water. A shark is a fish. Therefore sharks live in water.
  etc...

I understand the objection to learning math and logic in a language model
instead of coding the rules directly: it is horribly inefficient. I estimate
that a neural language model with 10^9 connections would need up to 10^18
operations to learn simple arithmetic like 2 + 2 = 4 well enough to get it
right 90% of the time. But I don't know of a better way to learn, at the
level of an average human adult, how to convert natural language word
problems into a formal language suitable for entering into a calculator.

-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
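P.S. The hit-counting idea can be sketched in a few lines of Python. The
`corpus` string below is a made-up stand-in for search-engine results (the
real Google counts are not reproducible here), and `solve_by_counting` is a
hypothetical helper, not any real search API:

```python
# Sketch: solve x * 3 = 12 by counting which candidate equation string
# appears most often in a text corpus (a toy proxy for Google hit counts).
from collections import Counter

# Hypothetical corpus; in the post, the "corpus" is the web itself.
corpus = """
3x3=12 is wrong, 4x3=12 is right, 4x3=12, 4x3=12,
2x3=12? no. 3x3=9 and 4x3=12. 5x3=15, not 5x3=12.
"""

def solve_by_counting(y, text, max_x=10):
    """Return the x for which the string 'Xx3=Y' is most attested."""
    counts = Counter()
    for x in range(1, max_x + 1):
        counts[x] = text.count(f"{x}x3={y}")
    best, _ = counts.most_common(1)[0]
    return best

print(solve_by_counting(12, corpus))  # prints 4, the most-attested answer
```

The point of the sketch is that no arithmetic rule is coded anywhere: the
"solver" only measures which statement occurs most often, which is the
statistical route the post describes.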