On Sat, Nov 23, 2013 at 8:33 PM, Piaget Modeler <piagetmode...@hotmail.com> wrote:
> Be Like Nike...
-------------------------------

Maybe you are trying to protect me from embarrassing myself further? I am not saying that I have AGI all figured out and that by this time next year the challenge will be history. I am saying I have a plan. If I can teach a computer program capable of incremental trial-and-error learning to react to simple grammars in the way I want, then I should be able to use simple commands (that it has learned) to help direct it to encode more complicated features of language. I am not saying that I have it all figured out; I am saying that I have an interesting plan that I can actually test.

I have tried to point out in this thread that I am talking about simple grammars: regular, context-free, and context-sensitive. And I have tried to point out that I am using the Chomsky hierarchy just as a way to get the idea across. So I am not talking about a fantasy but about something that can be tested. I haven't said that I am certain it will work, but I have tried to make it clear that I am talking about ideas that can be tested. I have been talking about this stuff for years now. But oh yeah, I just remembered: Piaget Modeler is the guy who has had no idea what I have been talking about. OK, so that puts it in perspective. So now I have explained this to you again.
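To make "react to simple grammars" a little more concrete, here is a minimal, untested sketch of the kind of incremental trial-and-error learner I have in mind. The class name, the command set, and the reward values are all placeholders for illustration; this is not code from the actual program.

import random

class TrialAndErrorLearner:
    """Learns, by reward feedback, which action to take for each command."""

    def __init__(self, actions, explore=0.2):
        self.actions = list(actions)
        self.explore = explore
        # scores[command][action] accumulates reward from trial and error
        self.scores = {}

    def react(self, command):
        table = self.scores.setdefault(
            command, {a: 0.0 for a in self.actions})
        # Explore sometimes; otherwise exploit the best-scoring action.
        if random.random() < self.explore:
            return random.choice(self.actions)
        return max(table, key=table.get)

    def reward(self, command, action, value):
        self.scores[command][action] += value

# Teaching loop: the teacher issues simple one-word commands and rewards
# correct reactions, so the learner gradually acquires the mapping.
learner = TrialAndErrorLearner(["move", "stop", "turn"])
correct = {"go": "move", "halt": "stop", "spin": "turn"}
for _ in range(300):
    command = random.choice(list(correct))
    action = learner.react(command)
    learner.reward(command, action, 1.0 if action == correct[command] else -0.1)

learner.explore = 0.0  # stop exploring so we can see what was learned
for command in correct:
    print(command, "->", learner.react(command))

The point of the sketch is only that the learned commands become a vocabulary the teacher can then build on.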
Even if I can't get my idea to work the way I hope, I will still be able to get something out of it. Does that help explain it? An experiment does not have to be a works-or-doesn't-work thing. It is about learning something that you didn't understand before, but doing it with actual technology instead of relying on some crackpot's imaginary explanation of everything. So even if I can't get this to work in a way that would amaze me, I can still examine those cases where it does work (and obviously it will work some of the time) to see if I can push the technology further.

Jim Bromer

On Sat, Nov 23, 2013 at 8:59 PM, Jim Bromer <jimbro...@gmail.com> wrote:
>
> On Sat, Nov 23, 2013 at 8:33 PM, Piaget Modeler <piagetmode...@hotmail.com> wrote:
> > Be Like Nike...
> -------------------------------
>
> I had to look that up to understand what you were saying. I am not sure why people try to repress discussions about ideas that can be tested in these groups. Although I can understand that people might be advised not to talk about their projects because doing so can attract tedious repetition of insipid criticisms, I see that as a failing of the arrogant naïfs rather than a failing of the people who are just trying to talk about their projects. As for the other criticism, that I have been talking about this for 20 years so it is about time that I actually went out and tried some of my ideas: that is exactly what I am doing. So I don't respect the complaints about people who try to talk about their projects on these discussion groups, even if we are talking about things that we haven't yet done.
>
> Jim Bromer
>
> On Sat, Nov 23, 2013 at 8:33 PM, Piaget Modeler <piagetmode...@hotmail.com> wrote:
>>
>> Be Like Nike...
>>
>> ~PM
>>
>> > Date: Sat, 23 Nov 2013 09:02:35 -0500
>> > Subject: [agi] Re: ...Therefore it is feasible to teach the AI program Type 0 Grammars
>> > From: jimbro...@gmail.com
>> > To: a...@listbox.com
>> >
>> > I recently advanced the idea that more complicated grammars could be taught to a program that learns incrementally (through trial and error) by first teaching it simpler grammars. How might this work? The instruction would be associated with particular statements, so the grammars would be acquired as the program learned about specific 'objects' of reference. But a particular statement might refer to objects of generalization as well as to specific objects. For example, 'my car' refers to a specific car; 'the car' can refer to some specific car which is not fully specified (by the phrase), so it is a little like a variable that refers to 'some car'; and the term 'a car' usually refers to a car which is not going to be fully specified at all. These simple syntactic distinctions are not consistently upheld in natural language, and that is part of the problem, but I am just using them as examples.
>> >
>> > Further examples of syntactic markers can be found in early AI. The phrase 'is a' can often mark a higher level of generalization, which might be used as a category: 'A cat is an animal' is an example. The term 'has a', also used in early AI, is often a way of denoting that some object of reference has some characteristic or property. There were many problems with the overly simplistic use of syntactic markers. One is that they are not used consistently; another is that the statements in which they appear are not usually universally true (which makes logical deduction problematic).
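Let me interject a toy sketch here to make the 'my car' / 'the car' / 'a car' distinction concrete: classifying a noun phrase's level of specificity by its determiner. The category names and the mapping are rough placeholders of mine, and, as the quoted text says, natural language does not uphold these distinctions consistently.

# A toy classifier: the determiner suggests how specified the referent is.
SPECIFICITY = {
    "my": "specific",          # 'my car': a particular known car
    "the": "underspecified",   # 'the car': some specific but unnamed car
    "a": "generic",            # 'a car': not going to be fully specified
    "an": "generic",
}

def classify_noun_phrase(phrase):
    determiner = phrase.split()[0].lower()
    return SPECIFICITY.get(determiner, "unknown")

print(classify_noun_phrase("my car"))   # specific
print(classify_noun_phrase("the car"))  # underspecified
print(classify_noun_phrase("a car"))    # generic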
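And a second sketch, of 'is a' / 'has a' links in the style of early AI semantic networks. Since such statements are not usually universally true, the sketch treats 'has a' properties as defaults that a particular object of reference can override; it uses the black-leopard example that comes up just below. The names and the data are placeholders for illustration, not the actual program.

# 'is a' links form a hierarchy; 'has a' attaches default properties.
is_a = {"cat": "animal", "leopard": "cat"}
has_a = {"leopard": {"spots"}, "animal": {"metabolism"}}

# Defaults can be overridden for particular objects of reference.
exceptions = {"black leopard": ("leopard", {"spots"})}  # (parent, removed)

def properties(entity):
    removed = set()
    if entity in exceptions:
        entity, removed = exceptions[entity]
    props = set()
    while entity is not None:
        props |= has_a.get(entity, set())   # inherit defaults up the 'is a' chain
        entity = is_a.get(entity)
    return props - removed

print(properties("leopard"))        # {'spots', 'metabolism'}
print(properties("black leopard"))  # {'metabolism'}, spots removed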
>> > 'A leopard has spots' can be true, but I have a specific memory of a black leopard that I once saw (it made me think of a much larger version of the black Burmese house cat that we had) that did not have spots. Since my AI / AGI program would be designed to look for common words that can be found within different kinds of sentences (and text), it would be able to detect potential candidates that might be used as generalizations in more complicated sentences.
>> >
>> > It is my feeling that by using previously acquired simple grammatical forms I should be able to direct my program to make effective use of the relative generalization level of the sentences that I use with it. And since I am designing the program to look for reason-based reasoning, I will also be able to use simpler grammatical forms to emphasize relations that can be tied together by true reasoning. I will also be able to use simpler grammatical forms to direct its attention to anaphoric-like relations in the text.
>> >
>> > I realize that I haven't convinced many of the people who will read this, but that is not my interest. I am trying to give the few people who might actually be interested some insight into what I am working on. I should have some more substantial examples, whether they work or not, sometime next year.
>
> --
> Jim Bromer

--
Jim Bromer
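P.S. Here is a toy sketch of the common-word detection idea mentioned above: words that recur across many different kinds of sentences get flagged as potential generalization markers. The sample sentences and the threshold are made up for illustration, and the sketch is untested.

from collections import Counter

def candidate_markers(sentences, min_fraction=0.6):
    # Count, for each word, how many distinct sentences it appears in.
    counts = Counter()
    for sentence in sentences:
        counts.update(set(sentence.lower().split()))
    threshold = min_fraction * len(sentences)
    return sorted(word for word, n in counts.items() if n >= threshold)

sentences = [
    "A cat is an animal",
    "A leopard is a cat",
    "A leopard has spots",
    "My car is red",
]
print(candidate_markers(sentences))  # ['a', 'is']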