The artificial language would be 'programmed' through the use of language itself, so the system would learn the language by using it. It could be given examples of how to interpret a simple sentence, and then variations of those interpretations, expressed as conditionals that would typically relate to details of the kinds of situations the sentences referred to. At first, the only knowledge the system had would come from the types of 'stories' that had been input. Gradually it would acquire variations of those stories, and it would be able to use the conditionals within those variations to relate effects to their reasons. In that way it would be able to make inferences about specific stories.
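To make the idea concrete, here is a minimal sketch of the story-plus-conditional mechanism described above. It is only an illustration under simplifying assumptions: stories are reduced to flat propositions, conditionals to if/then rules, and inference to forward chaining. All of the names and example sentences are hypothetical, not part of any existing system.

```python
# Minimal sketch: a "story" is a set of propositions; conditional
# variations are (conditions, conclusion) rules; inference repeatedly
# applies the rules until no new propositions can be derived.

def infer(facts, rules):
    """Forward-chain over the story facts using the conditional rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# A simple input story, stated as propositions.
story = {"the dog is outside", "it is raining"}

# Conditional variations relating details of the situation.
rules = [
    ({"the dog is outside", "it is raining"}, "the dog is wet"),
    ({"the dog is wet"}, "the dog wants to come in"),
]

print(infer(story, rules))
```

A real version would of course need structured representations rather than literal strings, but the sketch shows the shape of the loop: stories in, conditionals applied, inferred propositions out.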
I would design it so that problems like interpreting the level of generalization of a referent word or phrase could be stated explicitly. Other problems like that could be resolved through the artificiality of the language. The program would not know about life in general; it would only know what had been input into it and what it could infer from that input. However, the language would be so artificial that it could only be used by someone who really wanted to learn how to use it. So I think a project like this might be useful as an experiment, but I don't think it would be usable by anyone other than myself.

Jim Bromer

On Sun, Nov 16, 2014 at 7:45 PM, Jim Bromer <[email protected]> wrote:
> I think it would be fairly easy to create an artificial language that
> looked something like a natural language and which could be used to
> program a computer to work with ideas about the world. There would be
> programming problems, but the artificial language would be able to
> attain the diversity (within its domain) that can be created with
> programming. So a text-based artificial language like the one I am
> thinking of would not draw pictures (unless that facility were added
> to it), but it would, I am contending, be able to deal with any kind
> of knowledge that can be discussed fairly reasonably.
>
> What is wrong with this idea? A person is able to figure out some
> things for himself without being specifically programmed to figure
> those things out. If a computer program lacked this ability, then the
> full description of a situation might be so complicated as to make it
> infeasible to communicate to the program.
>
> The computer program running the artificial language would have to be
> able to figure some things out for itself, but if those things tended
> to constitute narrow classes of situations, then it would be weak AI.
>
> So is that the real problem in getting more general AI programs going?
> An AGI program has to be able to figure some things out for itself in
> creative ways that are not narrowly constrained by constrained IO data
> object typing.
>
> Jim Bromer
