After looking up the terms "minimalist language" and "minimalist program,"
my sense is that they are not what I am looking for. I need to start with
something that is very simple at first. I want to create the language
as it is being used. And I feel that I need something to help define
the language by using some kind of marks (like coloring the background
of a word or a phrase) or by annotating it with something that is
obviously artificial. All sorts of relations could be appended to the
simple but more natural text. The idea is not that everything would
have to be annotated in the same detail; rather, after exposure to some
examples and some variations, the computer program should be able to
'figure out' some of these relations itself. This plan would not
eliminate the complications, but it might delay them long enough for
me to see whether the plan is workable at all.
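As a toy illustration of what such obviously artificial annotations might look like in plain text (the `[[...]]` marker syntax and the function name here are purely hypothetical, invented for this sketch):

```python
# Toy sketch: natural-looking text with artificial annotation marks
# appended. The [[...]] syntax is hypothetical, for illustration only.
import re

annotated = "The cat [[category: animal]] sat on the mat [[category: object]]."

def extract_annotations(text):
    """Return the plain text and the list of annotation strings."""
    marks = re.findall(r"\[\[(.*?)\]\]", text)
    plain = re.sub(r"\s*\[\[.*?\]\]", "", text)
    return plain, marks

plain, marks = extract_annotations(annotated)
print(plain)   # The cat sat on the mat.
print(marks)   # ['category: animal', 'category: object']
```

The point of the sketch is only that the marks are trivially separable from the natural text, so the same sentence can be read with or without them.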

Suppose that I could get the program to sort some words in a text by
the first letter of each word. Later I might try to use this model as
an example for some other word, like "alphabetizing." The amount of
information available in this particular example would not be
enough for the program to 'understand' what I am getting at. It might
'figure' that the word alphabetizing refers to the particular
text that it had sorted, or it might apply the sorting algorithm to the
word "alphabetizing" itself. Just as I might have a categorical annotation or
a subject notation that I could use with a sentence, I might also have
a similarity notation or a refers-to notation. Combined, those might
represent a 'refers to something similar' notion, or something like
that. Since I could use numerous other higher-order annotative
primitives, I believe I might be able to get the program to exhibit
some reasoning before it gets too complicated to run in
feasibility-time.
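A minimal sketch of that sorting example, with the demonstration recorded under a label. The function name and the way the example is stored are my own assumptions for illustration; the ambiguity described above is exactly whether the label "alphabetizing" should name the procedure or the particular sorted text:

```python
# Toy sketch: sort the words of a text by first letter, then store the
# demonstration under a label so it can serve as an example later.
def sort_by_first_letter(text):
    return sorted(text.split(), key=lambda w: w[0].lower())

examples = {}  # label -> (input text, procedure, output)

text = "delta alpha charlie bravo"
result = sort_by_first_letter(text)
examples["alphabetizing"] = (text, sort_by_first_letter, result)

print(result)  # ['alpha', 'bravo', 'charlie', 'delta']
```

Storing the input, the procedure, and the output together leaves open which of the three the label actually refers to, which is the interpretive gap the program would have to 'figure out.'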

However, the primitive annotators would not cover every possible
relation that could be represented in a text. When we use words we
are not only describing something; we are also defining subtle
categorical-grammatical relations. These relations are so subtle and
so conditional that they are not at all obvious, and it would be
impossible to annotate every sentence for every useful relation.

So the program could use previously defined and previously acquired
'object methods' as tools. And since some of my ideas include associating
words or phrases with actions, and applying and looking for reasons to
act in certain ways, the program could have the potential to learn and
to figure out new ways of reacting to words, phrases, sentences, and
stories.
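One way the 'object methods as tools' idea could be sketched is a registry that associates phrases with actions. The registry, the phrase matching, and the sample actions below are hypothetical simplifications of the idea, not a proposed design:

```python
# Toy sketch: previously acquired 'object methods' kept as tools, with
# phrases associated to actions. All names here are illustrative only.
tools = {
    "sort by first letter": lambda text: " ".join(
        sorted(text.split(), key=lambda w: w[0].lower())),
    "reverse the words": lambda text: " ".join(reversed(text.split())),
}

def react(phrase, text):
    """Apply the action associated with a known phrase to the text."""
    action = tools.get(phrase)
    if action is None:
        return None  # an unknown phrase is an occasion for learning
    return action(text)

print(react("reverse the words", "one two three"))  # three two one
```

An unknown phrase returning nothing is where the learning would have to happen: the program would need to acquire a new entry, or figure out a reaction, rather than fail.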

So what I am saying is that if I can define the language as I go, I
won't be locked into pre-programmed linguistic methods (which tend to
put inappropriate constraints in the way of any creativity), and if I
can give the early definitions some major boosts, I may be able to get
the whole thing off the ground and see if it will fly.
Jim Bromer


> On Mon, Nov 17, 2014 at 4:20 PM, Mike Archbold <[email protected]> wrote:
>> I guess I don't see how an "artificial language" differs from the way
>> programming has always worked.  3rd generation being procedural, 4th
>> generation you just say "SORT" instead of writing an algorithm, etc.
>> The more languages developed, the less you had to code.  It sounds
>> like what you might be after is more of a minimalist language,
>> semi-natural.  Mike A
>>
>> On 11/17/14, Jim Bromer via AGI <[email protected]> wrote:
>>> Piaget Modeler via AGI <[email protected]> wrote:
>>>
>>>> You are confusing a language with an architecture.
>>>>
>>>> There has to be an underlying architecture to perform inferences and
>>>> "figure out
>>>> some things for himself".  This is the AGI architecture.  This is what is
>>>> missing.
>>>>
>>>> The architecture leverages the human communication language to perform
>>>> speech acts
>>>> and receive information.
>>>>
>>>> The language is not the architecture.
>>>>
>>>
>>>
>>> Thanks for your comments because it gives me a chance to develop and talk
>>> about the ideas further. I am not actually confusing language and
>>> architecture although my presentation may have been confusing.  For one
>>> thing, they are not completely distinct. I am thinking of writing a program
>>> in which the language can be defined by a 'user' through text. The language
>>> would resemble a primitive use of natural language in many ways but it
>>> would be artificial in that distinctions about how the input text should be
>>> interpreted could be defined more easily. This is just a thought experiment
>>> now but it is something that I am thinking might be worthwhile trying and
>>> which I will use in at least a limited way in the program I am working on.
>>> So then the 'user' would program the program to construct relations between
>>> the objects of the language. The idea of using a program to write a program
>>> may seem unusual to the non-programmers who might read this but it is the
>>> way it works. But then, just as a programming language is used to program a
>>> computer, I am saying that the artificial language that could be defined by
>>> the 'user' would then also be used to 'program' the computer to use
>>> knowledge that was input and shared with it. Of course, if I wrote such a
>>> program I would be able to define the artificial language as I went (as the
>>> 'user') with the central ideas that I have in mind. Not everyone would be
>>> able to do that. Using the program (to define and use an artificial
>>> AI language that I have in mind) would require specialized training. But
>>> that is also true of programming languages (the programs that implement the
>>> programming languages.) Not everyone gets programming.
>>>
>>> What is wrong with this idea? Well it would not be a true AGI (or a true
>>> text-based AGI), but I get that, it would be an experiment to better study
>>> the kinds of problems that I am interested in. But would it tend to only
>>> work for narrow types of problems? When the user-defined-language was
>>> defined well for certain classes of types of problems, then (if the idea is
>>> at all workable) it can be predicted that it would work well for those
>>> classes. However, when the user-defined-language did not have a good
>>> definition for a class of problems then it would not work. At that point
>>> the question becomes this: could the user (a well-trained user, or one
>>> who has good insights about how the language should be developed) continue
>>> to define the language in ways that could include these novel classes
>>> of problems, or would that quickly turn into a search for solutions to
>>> intractable problems?
>>>
>>> Now that I thought about this a little more I realize that it might be a
>>> good project for me. The question then is can it be further simplified? Can
>>> I start with very simple 'definitions' (that is definition-like
>>> associations) between phrases that might be expanded with other
>>> simple examples.
>>>
>>> The idea of the artificial language is that the language could include
>>> categorical grammatical markers and subject markers to emphasize these
>>> relations easily. These markers could be special keywords but they could
>>> also be established using color coding of the text. The extent and nature
>>> of the grammar would also be defined by the 'user'. These ideas can be
>>> applied to subject markers, connections between sentences (like
>>> anaphoric-like relations), role relations and generalization levels which
>>> are often unmarked in natural language.
>>> So yes, the user-defined language is not the same as the program in its
>>> initial state, but it will act as a part of the annotated-AGI architecture;
>>> it is an integral part of it. The term that I am using, "Annotated-AGI,"
>>> is not the same as AGI, but if my conjecture is reasonable it could be
>>> used by skilled 'users' to create general AI abilities. The annotation
>>> refers to the user-defined language: it would be like a natural language
>>> but with extensive annotation to detail how it should be applied.
>>>
>>> Jim Bromer
>>>
>>> On Sun, Nov 16, 2014 at 9:22 PM, Piaget Modeler via AGI <[email protected]>
>>> wrote:
>>>
>>>>
>>>> > Date: Sun, 16 Nov 2014 19:45:13 -0500
>>>> > Subject: [agi] It would be easy to write an artificial language that
>>>> > was
>>>> similar to a natural language
>>>> > From: [email protected]
>>>> > To: [email protected]
>>>>
>>>> >
>>>> > I think it would be fairly easy to create an artificial language that
>>>> > looked something like a natural language and which could be used to
>>>> > program a computer to work with ideas about the world. There would be
>>>> > programming problems, but the artificial language would be able to
>>>> > attain the diversity (within its domain) that can be created with
>>>> > programming. So a text-based artificial language like the one I am
>>>> > thinking of would not draw pictures (unless that facility was added to
>>>> > it) but it would, I am contending, be able to deal with any kind of
>>>> > knowledge that can be discussed fairly reasonably.
>>>> >
>>>> > What is wrong with this idea? A person is able to figure out some
>>>> > things for himself without being specifically programmed to figure
>>>> > those things out. If a computer program lacked this ability then the
>>>> > full description of a situation might be so complicated to make it
>>>> > infeasible to communicate it to the program.
>>>> >
>>>> > The computer program running the artificial language would have to be
>>>> > able to figure some things out for itself, but if those things would
>>>> > tend to constitute narrow classes of kinds of situations then it would
>>>> > be weak AI.
>>>> >
>>>> > So is that the real problem in getting more general AI programs going?
>>>> > An AGI program has to be able to figure some things out for itself in
>>>> > creative ways that are not narrowly constrained by constrained IO data
>>>> > object typing.
>>>> > Jim Bromer
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>>


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
