Jim,

You have apparently lost your way here...

On Wed, Oct 16, 2013 at 11:50 AM, Jim Bromer <[email protected]> wrote:
> I said,
> "A lot of ideas about AI models have been put forth about using some kind of simple set of fundamentals to which all learned objects and relations could be reduced as examples of the fundamentals. These abstractions would be used as generalizations, thereby allowing the programmer to worry only about the fundamentals. The most familiar example was the attempt to declare that an object IS a..., or that it HAS a....

The essence of Arthur's Mentaflex concept.

> This idea of declaring some concise set of fundamentals was just a bad idea. We want the program to be able to derive new abstractions and to genuinely learn some new things,

What good is an "abstraction" if the computer can't tell us what it means? So what if it has identified thousands of common word groupings, etc.? Where do you go from there?!!!

> we don't want every new idea to be pigeonholed into a few or a few hundred predefined abstract concepts."

Why not? This is all that is needed for marketing purposes, and other uses are often achieved by a dozen or so additional concepts.

> It may turn out that was a perfectly good idea. I don't know for sure. Perhaps a few thousand basic relations might be strong enough to act as fundamental relations for all learned conceptual relations. Then, as they are applied to conceptual objects, the distinctions might also be expressed by using other fundamental relations. If you need some more fundamental relations you just add them.
>
> The value of the relations is that they can designate consequences which might be relevant.

This runs into the "principal component analysis problem", in that things get grouped in ways that might be good for data compression, but which have no real-world utility.

> However, if ideas like this can be used to program a computer to 'act' the right way when it learns something new, then why couldn't the computer derive these fundamental relations for itself?

As I discovered with the DrEliza project, the words people use to describe their problems USUALLY can NOT be found in any related Wikipedia article, etc. The world is divided into two camps - those with problems and those with solutions - and it is RARE to find ANY link between the two camps, certainly not common enough for any sort of statistical analysis to find its way.

Also, you should (re)read my fast parsing patent. To qualify someone for a sale, treatment, etc., there is a checklist of qualifications:
1. Do they have enough "symptoms", needs, etc. to apply a particular solution?
2. Can they afford a particular solution?
3. Have they already tried a particular solution?
4. Are there any counter-indicators for a particular solution?
5. Etc. - there are often special cases particular to a prospective solution.

NONE of the above, not even the structure, can be discovered by analyzing the entire Internet!!!
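To make that structure concrete, here is a minimal sketch of such a qualification checklist as a data structure, with one check run against it. The field names, thresholds, and the shape of the "prospect" record are purely illustrative; they are not taken from the patent.

# Illustrative only: the shape of a qualification checklist for one prospective
# solution. Field names and thresholds are made up, not quoted from the patent.
from dataclasses import dataclass, field

@dataclass
class Qualification:
    solution_name: str
    indicators: set           # "symptoms", needs, etc. that call for this solution
    min_indicators: int       # 1. how many of them must be present to qualify
    cost: float               # 2. what the prospect must be able to afford
    counter_indicators: set   # 4. anything that rules this solution out
    special_cases: list = field(default_factory=list)  # 5. extra solution-specific checks

def qualifies(prospect: dict, q: Qualification) -> bool:
    """Run one prospect (a plain dict of facts about them) through the checklist."""
    symptoms = set(prospect.get("symptoms", []))
    if len(symptoms & q.indicators) < q.min_indicators:
        return False                                    # 1. not enough symptoms/needs
    if prospect.get("budget", 0) < q.cost:
        return False                                    # 2. cannot afford it
    if q.solution_name in prospect.get("already_tried", ()):
        return False                                    # 3. already tried it
    if symptoms & q.counter_indicators:
        return False                                    # 4. counter-indicated
    return all(check(prospect) for check in q.special_cases)   # 5. special cases

Either way, the point stands: the checklist structure has to be supplied up front; it does not fall out of analyzing the Internet.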
> So I may try something like this with my artificial paralanguage, but if I do I will use it to study how the program could derive these relations and how they relate to insightful consequences. I am not saying that the paralanguage is going to solve the difficult problems.

Lotsa luck.

Steve
=================

> On Tue, Oct 15, 2013 at 9:11 AM, Jim Bromer <[email protected]> wrote:
> >> Steve said:
> >> Do you mean some sort of extreme extension to Backus Naur Form?
> >> I suggest giving some examples, without worrying too much about the language details.
> >> -----
> >
> > I wasn't really thinking of the form, but just that I would directly input a relation into the program. For example, the relation of dependency: a temporal dependency where one action can only occur if some other action has already occurred.
> >
> > A lot of ideas about AI models have been put forth about using some kind of simple set of fundamentals to which all learned objects and relations could be reduced as examples of the fundamentals. These abstractions would be used as generalizations, thereby allowing the programmer to worry only about the fundamentals. The most familiar example was the attempt to declare that an object IS a..., or that it HAS a.... This idea of declaring some concise set of fundamentals was just a bad idea. We want the program to be able to derive new abstractions and to genuinely learn some new things; we don't want every new idea to be pigeonholed into a few or a few hundred predefined abstract concepts.
> >
> > On the other hand, it would be useful to have some method to provide the program with some way to get that a strong relation exists without it having to learn it for itself through months or years of work. The reason, of course, is that I don't have it all figured out, so I need some examples that the program can learn quickly so that I can study what the program is doing.
> >
> > But it is not that easy. For example, I cannot simply denote that some reference is dependent on some other reference occurring before it does, because those are just words. So if I am going to use this idea of a reference relation marker, I have to also show what the consequences of that relation might be with some examples. That is still a problem, because even if I denoted that some action was temporally dependent on another, and input with a few examples how that connects to the consequences of the relation, the program would still have to figure out how that relation might be applied to a new context. But that is the goal of the project. By inputting that relation through the artifice of a formal paralanguage, I believe I could see how the program was acting to integrate the relation into some other situation which could also be an example of the referential relation. The paralanguage gives me the opportunity to jump ahead with some explicit relations without locking me into a methodology that has not produced powerful results. I am still required to solve the kinds of problems that I am interested in. And it forces me to give some more thought to the problem of applying the relation to different situations. The paralanguage does not solve the problem for me, but it can help me to work with some more concrete examples so that I can see what my program is doing and what it is not doing. And since it can be made to work with very simple examples (like anaphoric relations), this suggests that I should be able to get it to work with more complicated examples.
> >
> > What happens when a temporal dependency is not a material dependency but a dependency between ideas? Or when a material dependency is not the principal causative agent of the dependency? 'I will go hiking today if the weather is nice.' My going hiking is dependent on the weather because I will decide not to go hiking if the weather is bad. But I might later change my mind and go hiking even if the weather was bad. These examples should register with anyone who is interested in AI/AGI, although they may seem somewhat mundane.
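For concreteness, a directly input dependency relation of that kind, together with one consequence it designates (including the "change my mind" override from the hiking example), might be sketched roughly like this. The names and notation are invented for the illustration; nothing here is from the actual project.

# Illustrative only: a directly input "temporal dependency" relation and one kind
# of consequence it designates.
from collections import namedtuple

# The dependent action can only occur once its prerequisite has occurred.
Relation = namedtuple("Relation", ["kind", "dependent", "prerequisite"])

hike = Relation("temporal_dependency", dependent="go_hiking", prerequisite="nice_weather")

def permitted(relation, events_so_far, overridden=False):
    """Consequence of the relation: the dependent action is blocked until the
    prerequisite has occurred, unless the agent overrides it ('change my mind')."""
    return overridden or relation.prerequisite in events_so_far

print(permitted(hike, {"woke_up"}))                    # False: weather not nice yet
print(permitted(hike, {"woke_up", "nice_weather"}))    # True
print(permitted(hike, set(), overridden=True))         # True: went hiking anyway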
> > I could use BNF or something similar and then compile a sub-program for combinations of situations. This would require that the program interpret what situations are occurring, because it would need to decide how best to handle the situation, so this particular idea is circular. However, by using multiple stages of analysis it might be possible to do this in steps. That could eliminate the circularity of the premise, since it would only be used for looking for incrementally better guesses. So I think I will use a BNF-equivalent form to represent the referential paralanguage, just in case I want to try compiling some of the experimental user-defined relations sometime.
> >
> > Jim Bromer
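As a purely illustrative sketch of the BNF-equivalent idea, the productions of a toy referential paralanguage could be written down as data and paired with the natural-language text it runs alongside. The relation names and syntax below are invented for the sketch, not part of any existing proposal.

# Illustrative only: a toy BNF-equivalent grammar for a referential paralanguage,
# written as Python data so the user-defined relations could later be compiled.
GRAMMAR = {
    "<annotation>": ["<marker> '(' <refs> ')'"],
    "<marker>":     ["ANAPHOR", "TEMPORAL_DEP", "MATERIAL_DEP", "IDEA_DEP"],
    "<refs>":       ["<ref>", "<ref> ',' <refs>"],
    "<ref>":        ["'#' TOKEN_INDEX"],   # points into the concurrent natural-language text
}

# The paralanguage then runs alongside a sentence rather than replacing it:
sentence    = "I will go hiking today if the weather is nice".split()
annotations = ["TEMPORAL_DEP(#3, #7)"]   # 'hiking' (token 3) depends on 'weather' (token 7)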
> >> On Mon, Oct 14, 2013 at 6:24 AM, Steve Richfield <[email protected]> wrote:
> >>> Jim,
> >>>
> >>> Do you mean some sort of extreme extension to Backus Naur Form?
> >>>
> >>> I suggest giving some examples, without worrying too much about the language details.
> >>>
> >>> Steve
> >>> =====================
> >>>
> >>> On Sun, Oct 13, 2013 at 4:45 PM, Jim Bromer <[email protected]> wrote:
> >>>> The idea of a linguistic reference marker language seems kind of interesting. Someone in these groups has pointed out that there are artificial languages in which anaphoric-like references may be defined, and anyone could do that by denoting those kinds of connective relations using some meta-notation. However, my idea of the artificial reference marker language goes a step further in that it would allow for definitions of linguistic markers by type, and other possible abstractions that could be defined with other levels of referential relations. This definition with types does sound like a programming language, but I believe I can take it a step higher in that it can be used to create run-time dilemmas, some of which should be resolvable while the program is running so long as the basis of the defined relations is not too poorly constructed. This could feasibly turn out to be a highly controllable testing program that has a rich potential of expression and which could detail some of the problems that need to be solved in this field.
> >>>>
> >>>> Right now I am thinking about a system which would simultaneously run the reference marker language as a meta-language or a paralanguage to a text-based natural language. By keeping the language simple, the test might be run by creating the needed linguistic markers (like anaphoric-like connectors) as they are needed. So months or years of learning might be avoided in preparing for a test run. And abstractions or generalizations might be denoted by groups of examples or by categorical denotations.
> >>>>
> >>>> There have been many attempts to use formal linguistics in AI, and they have not generated overwhelming evidence that the method is the best route to AGI. However, my theory is that most professional linguistic AI models are overly reliant on generalizations that are too broad and too simple. I believe that true intelligence must be supplied with a rich set of possibilities and that old AI linguistic models have not provided the programs with those possibilities. But a rich set of generalizations probably would overwhelm an AGI program with too much complexity. By using the linguistic reference marker language, some of that complexity could be studied in a controlled environment using relatively simple examples.
> >>>>
> >>>> For example (an abstract example), if there are many possible reference marker systems (that were previously 'learned' or defined), then the program would have to choose which of them would be appropriate for a particular context. These possibilities would not all be competitive selections, and in almost all cases many possible reference relational systems would have to be used to understand the sentence properly. So then, part of the problem is that the program would need to know when it had interpreted the sentence well and that it should stop looking for other possible referential relations for the sentence. At this point I have no idea how I would program a computer to decide something like this. But, by using this specialized test facility, I could gain a lot of experience by relying on my intuition to decide when the program had come up with an interpretation that was good enough at that time.
> >>>>
> >>>> Another simple abstract example that I have in mind is that I could try to use natural language to point something out (about the referential relations), and if that did not work then I could use the artificial referential marker language that was running concurrently with the natural language exchanges to present it to the computer program. Then later I could see if I could use similar terms (from the natural language) to direct the computer to become aware of some referential relation in the subject discussion without needing further detailing in the referential marker language.
> >>>>
> >>>> While none of this is totally new to me, it is clear that I am starting to think more definitely about some of these kinds of problems just because I am thinking about developing the referential marker language. So it seems like an interesting idea that should be useful to me. I will probably try to develop it and try it out.
> >>>>
> >>>> Some people think that this has little to do with AGI. Well, similar techniques could be used to designate the referential relations between visual and other sensory data, so that shows that the method is general enough.
> >>>>
> >>>> Jim Bromer
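To illustrate the concurrent-paralanguage idea with the simplest case mentioned above (anaphoric relations), a sentence and its reference markers might be paired up like this. The marker format is invented for the example and is not from the proposal.

# Illustrative only: a natural-language sentence with a concurrent reference-marker
# annotation, using the simplest kind of marker (anaphoric-like connectors).
text = "When Mary finished the report she mailed it to the committee".split()

# Each marker links a pronoun token back to its antecedent token; markers like
# these would be created as they are needed rather than learned over months.
markers = [
    ("ANAPHOR", 5, 1),   # 'she' (token 5) -> 'Mary' (token 1)
    ("ANAPHOR", 7, 4),   # 'it'  (token 7) -> 'report' (token 4)
]

def resolve(token_index, markers):
    """Follow an anaphoric marker, if any, back to its antecedent token."""
    for kind, source, antecedent in markers:
        if kind == "ANAPHOR" and source == token_index:
            return antecedent
    return token_index   # no marker: the token stands for itself

print(text[resolve(5, markers)])   # Mary
print(text[resolve(7, markers)])   # report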
--
Full employment can be had with the stroke of a pen. Simply institute a six hour workday. That will easily create enough new jobs to bring back full employment.
