On Sun, Nov 16, 2014 at 9:22 PM, Piaget Modeler via AGI <[email protected]> wrote:

> You are confusing a language with an architecture.
>
> There has to be an underlying architecture to perform inferences and
> "figure out
> some things for himself".  This is the AGI architecture.  This is what is
> missing.
>
> The architecture leverages the human communication language to perform
> speech acts
> and receive information.
>
> The language is not the architecture.
>


Thanks for your comments; they give me a chance to develop and talk about
the ideas further. I am not actually confusing language and architecture,
although my presentation may have been confusing. For one thing, they are
not completely distinct. I am thinking of writing a program in which the
language can be defined by a 'user' through text. The language would
resemble a primitive use of natural language in many ways, but it would be
artificial in that distinctions about how the input text should be
interpreted could be defined more easily. This is just a thought experiment
for now, but it is something that I think might be worthwhile to try, and I
will use it in at least a limited way in the program I am working on.

So the 'user' would program the program to construct relations between the
objects of the language. The idea of using a program to write a program may
seem unusual to the non-programmers who might read this, but that is how it
works. Then, just as a programming language is used to program a computer,
the artificial language defined by the 'user' would also be used to
'program' the computer to use knowledge that was input and shared with it.
Of course, if I wrote such a program I would be able to define the
artificial language as I went (as the 'user'), guided by the central ideas
that I have in mind. Not everyone would be able to do that. Using the
program (to define and use the artificial AI language that I have in mind)
would require specialized training. But that is also true of programming
languages (that is, of the programs that implement them). Not everyone gets
programming.
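
To make this a little more concrete, here is a very rough Python sketch of
the kind of program I mean. The rule syntax and all of the names are things
I made up for illustration; nothing about them is settled, and a real
version would need much more careful interpretation than a regex match:

import re

class UserLanguage:
    def __init__(self):
        self.rules = []      # (compiled pattern, relation name) pairs
        self.relations = []  # triples built from interpreted input

    def define(self, rule_text):
        # A user-written rule looks like:  'X is a Y' -> isa
        # The left side is a phrase template; the right side names the
        # relation the template should produce.
        template, relation = [s.strip() for s in rule_text.split('->')]
        template = template.strip("'\" ")
        # Naively turn the template into a regex: X and Y become
        # capture groups for the objects being related.
        pattern = template.replace('X', r'(?P<x>\w+)')
        pattern = pattern.replace('Y', r'(?P<y>\w+)')
        self.rules.append((re.compile(pattern, re.IGNORECASE), relation))

    def interpret(self, sentence):
        # Match the sentence against the user-defined templates and
        # record the relation between the captured objects.
        for pattern, relation in self.rules:
            m = pattern.search(sentence)
            if m:
                triple = (m.group('x'), relation, m.group('y'))
                self.relations.append(triple)
                return triple
        return None

lang = UserLanguage()
lang.define("'X is a Y' -> isa")
lang.define("'X has a Y' -> has_part")

print(lang.interpret("a robin is a bird"))   # ('robin', 'isa', 'bird')
print(lang.interpret("a bird has a wing"))   # ('bird', 'has_part', 'wing')

The point of the sketch is only that the 'user' extends the language by
writing text (the define calls), and the extended language then does the
work of constructing relations.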

What is wrong with this idea? Well, it would not be a true AGI (or a true
text-based AGI), but I get that; it would be an experiment to better study
the kinds of problems that I am interested in. But would it tend to work
only for narrow types of problems? When the user-defined language was
defined well for certain classes of problems, then (if the idea is at all
workable) it can be predicted that it would work well for those classes.
However, when the user-defined language did not have a good definition for
a class of problems, it would not work. At that point the question becomes
this: could the user (a well-trained user, or one who has good insights
about how the language should be developed) continue to define the language
in ways that would let it include these novel classes of problems, or would
that quickly turn into a search for solutions to intractable problems?

Now that I have thought about this a little more, I realize that it might
be a good project for me. The question then is: can it be further
simplified? Can I start with very simple 'definitions' (that is,
definition-like associations) between phrases, which might then be expanded
with other simple examples?
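
Something along these lines, say (again just a sketch, with invented names,
to show what I mean by definition-like associations that can be expanded):

definitions = {}

def define(phrase, associated):
    # Associate one phrase with another, definition-style.
    definitions.setdefault(phrase, []).append(associated)

def expand(phrase, depth=2):
    # Follow the associations a few steps, the way one simple example
    # might be elaborated by other simple examples.
    if depth == 0 or phrase not in definitions:
        return []
    results = []
    for assoc in definitions[phrase]:
        results.append(assoc)
        results.extend(expand(assoc, depth - 1))
    return results

define("water", "a clear liquid")
define("a clear liquid", "something you can pour")

print(expand("water"))
# ['a clear liquid', 'something you can pour']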

The idea of the artificial language is that the language could include
categorical grammatical markers and subject markers to make these relations
explicit and easy to express. These markers could be special keywords, but
they could also be established using color coding of the text. The extent
and nature of the grammar would also be defined by the 'user'. These ideas
can be applied to subject markers, connections between sentences (such as
anaphoric relations), role relations, and generalization levels, all of
which are often left unmarked in natural language.
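
As a sketch of the keyword version of this (the marker names SUBJ:, REF:,
ROLE: and LEVEL: are invented for illustration, and color coding obviously
would not survive plain text):

def parse_annotated(sentence):
    # Split an annotated sentence into its plain words and its
    # marker/value pairs.
    words, markers = [], {}
    for token in sentence.split():
        if ':' in token:
            key, value = token.split(':', 1)
            markers.setdefault(key, []).append(value)
        else:
            words.append(token)
    return ' '.join(words), markers

text, markers = parse_annotated(
    "SUBJ:dog the dog LEVEL:instance chased REF:cat ROLE:patient it")
print(text)     # the dog chased it
print(markers)  # {'SUBJ': ['dog'], 'LEVEL': ['instance'],
                #  'REF': ['cat'], 'ROLE': ['patient']}

Here the anaphoric link (REF:), the role relation (ROLE:) and the
generalization level (LEVEL:) are written out explicitly instead of being
left for the reader to infer.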
So yes, the user-defined language is not the same as the program in its
initial state, but it will act as a part of the annotated-AGI architecture.
It is an integral part of the annotated-AGI architecture. The term that I
am using, "Annotated-AGI", is not the same as AGI, but if my conjecture is
reasonable it could be used by skilled 'users' to create general AI
abilities. The annotation refers to the user-defined language: it would be
like a natural language, but with extensive annotation detailing how it
should be applied.

Jim Bromer

> Date: Sun, 16 Nov 2014 19:45:13 -0500
> Subject: [agi] It would be easy to write an artificial language that was
> similar to a natural language
> From: [email protected]
> To: [email protected]
>
> I think it would be fairly easy to create an artificial language that
> looked something like a natural language and which could be used to
> program a computer to work with ideas about the world. There would be
> programming problems, but the artificial language would be able to
> attain the diversity (within its domain) that can be created with
> programming. So a text-based artificial language like the one I am
> thinking of would not draw pictures (unless that facility was added to
> it), but it would, I am contending, be able to deal with any kind of
> knowledge that can be discussed fairly reasonably.
>
> What is wrong with this idea? A person is able to figure out some
> things for himself without being specifically programmed to figure
> those things out. If a computer program lacked this ability, then the
> full description of a situation might be so complicated as to make it
> infeasible to communicate it to the program.
>
> The computer program running the artificial language would have to be
> able to figure some things out for itself, but if those things tended
> to constitute narrow classes of situations, then it would be weak AI.
>
> So is that the real problem in getting more general AI programs going?
> An AGI program has to be able to figure some things out for itself, in
> creative ways that are not narrowly constrained by restrictive IO data
> object typing.
>
> Jim Bromer



