Yes it is. This is what I believe to be a basis of human learning. Of
course we get a lot of outside help, but the value of education (or
instruction) rests on the human being's ability to integrate what is
being taught (or pointed out). While that seems to be a little beyond
current AI, I think it is clear that AI is already able to learn, so
it is just a question of being able to integrate certain kinds of
abstractions, like those that must be used in language.
For example, I should be able to create a synthetic language that
could be used like a programming language. Then it should be possible
to create a synthetic language that does not specify all the details
of a program but which can point to ideas (idea objects or subject
objects), so that the program can relate a new idea to the subject
matter being pointed to. This was done in early AI, but those efforts
quickly ran into a complexity barrier. Those barriers have been pushed
back in the last 40 years, but little research is being done with
these methods because most researchers have to play it safe, so they
become followers of whatever is currently working. The point of view
that I just expressed is that we have to use the results of
advancements, but they do not always lead directly to other
revolutionary advances, because incremental advancements are a
necessary part of revolutionary science. These advancements have to be
based on the intelligent use of directed imagination and actual
experimentation. The experimentation may be directed at sub-goals, and
then the results of the experiments and the nature of the sub-goals
have to be analyzed. Was the sub-goal an actual prerequisite of the
project goal? Or was it just a feasibility test, where the sub-goal
may lack some features of the goal (like scale), so that while it may
be a prerequisite of understanding or advancement, it is more a step
in the development of the research project than a substantial step in
the production of a successful stage of development?
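As a toy sketch of the kind of synthetic language I have in mind: a
statement would not define a new concept procedurally, it would only
point at existing idea objects and leave the integration to the
program. (This is a hypothetical illustration in Python; every name in
it, IdeaObject, relate(), and the sample knowledge, is invented for
the example, not part of any worked-out design.)

    class IdeaObject:
        """An abstraction the program already knows about."""
        def __init__(self, name, features):
            self.name = name
            self.features = set(features)

    # The program's existing knowledge: a symbol table of idea objects.
    knowledge = {
        "sorting":   IdeaObject("sorting", {"ordering", "comparison"}),
        "searching": IdeaObject("searching", {"ordering", "lookup"}),
    }

    def relate(new_name, pointed_to):
        # Integrate a new idea by relating it to the subject matter
        # pointed to: here, crudely, by inheriting the features shared
        # by the referenced idea objects.
        refs = [knowledge[name] for name in pointed_to]
        shared = set.intersection(*(r.features for r in refs))
        idea = IdeaObject(new_name, shared)
        knowledge[new_name] = idea
        return idea

    # A "statement" in the synthetic language: it does not spell out
    # what binary_search is, it only points at the ideas it relates to.
    statement = ("binary_search", ["sorting", "searching"])
    new_idea = relate(*statement)
    print(new_idea.name, "->", sorted(new_idea.features))
    # binary_search -> ['ordering']

The statement only names what the new idea should be related to; how
the relation is worked out is left entirely to the program, which is
where the hard part, and the learning, would be.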
This is a rough map of how learning might take place in a feasible
concrete AI program. Notice how outside guidance would be so useful in
this process that is almost a design necessity. And yet a human being
would not be able to provide every detail to the program even if he
wanted to try. To some extent, a great extent, the program would have
to be capable of some true learning.
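To make that map a little more concrete, here is a minimal sketch of
the loop I am describing: propose a sub-goal (possibly steered by
outside guidance), experiment, and analyze whether the sub-goal was a
real prerequisite or only a feasibility test. (Again Python, and again
every function is a placeholder for a component that would have to be
designed; none of this is a claimed implementation.)

    import random

    def propose_subgoal(goal, guidance=None):
        # Outside guidance can steer the choice of sub-goal, but the
        # program still generates its own candidates when left alone.
        if guidance is not None:
            return guidance
        return random.choice(goal["subgoals"])

    def experiment(subgoal):
        # Stand-in for actual experimentation; returns a toy result.
        return {"subgoal": subgoal, "score": random.random()}

    def analyze(result, goal):
        # Was the sub-goal an actual prerequisite of the project goal,
        # or only a feasibility test (e.g. lacking scale)?
        prerequisite = result["subgoal"] in goal["prerequisites"]
        return {"progress": result["score"], "prerequisite": prerequisite}

    goal = {
        "subgoals": ["small-scale demo", "representation test", "full-scale run"],
        "prerequisites": {"representation test"},
    }

    for step in range(3):
        hint = "representation test" if step == 0 else None  # teacher's hint
        subgoal = propose_subgoal(goal, guidance=hint)
        report = analyze(experiment(subgoal), goal)
        print(subgoal, report)

Notice that the hint only steers the first choice; everything after
that the program has to evaluate for itself.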
Jim Bromer

On Thu, Sep 13, 2018 at 11:47 AM Stefan Reich via AGI
<agi@agi.topicbox.com> wrote:
>
> Is this relating to anything concrete? I'm having a hard time processing 
> abstract essays like that...
>
> Cheers
>
> On Thu, 13 Sep 2018 at 17:42, Jim Bromer via AGI <agi@agi.topicbox.com> wrote:
>> 
>> The first stage of learning something new is mostly trial and error.
>> Of course you have to understand some prerequisites before you are
>> capable of learning something new. Simplification is useful at this
>> stage even though it might get in the way. Idealization is a method
>> which you can use to initially create some rough metrics (or something
>> that can be used in ways similar to metrics). Exaggeration and
>> simplification have some similarities to idealization, and so they are
>> useful in this process. The next stage requires that you look at your
>> results and begin to analyze them. Although idealization and
>> simplification are important tools, if they are used inappropriately
>> they can create some interference in the process. The process of
>> analysis is used to find core concepts (or core abstractions) which
>> might be useful in discovering what went wrong or developing new
>> ideas. Adaptation is a necessary component of new learning. This is
>> the stage when stubborn adherence to some initial idealization or
>> simplification may really interfere in the process of new learning.
>> While you need to continue using simplifications and idealizations, if
>> your simplifications are stuck in the primitive mode they were in
>> during the initial stage of research, they will probably interfere with
>> finding an effective adaptation. The next step is to examine some
>> sub-goals which might be useful in discovering what seem like necessary
>> prerequisites for the ultimate goal. Again, you may find that the
>> abstractions and core features of a problem or a hypothetical solution
>> that you thought you understood may be inaccurate. So you may need to
>> refine your ideas about the core features of the problem just as you
>> have to rethink the solutions that you thought might work. I have
>> found that at a later stage of work you may make advances on
>> sub-goals that go well past what you did at an earlier stage. This
>> recognition may also serve as a kind of metric. Even
>> though you may not have made any substantial progress toward the
>> project goal, the fact that you have made an unexpected advancement in
>> a sub-goal may indicate that it is something worth looking into. Over
>> a period of time, the work which has been done to idealize and
>> simplify, test and experiment, analyze and adapt, and refine the
>> idealizations and abstractions about both the problem and possible
>> solutions should help you to understand the nature of the problem
>> and the nature of what a solution may look like. I believe that
>> incremental advances are necessary for revolutionary advances in
>> science because they are what revolutionary advancements are built on. But
>> you have to have some experience focusing your imagination on actual
>> experiments to appreciate the significance of the adaptation of
>> simplification, ideals, and abstraction.
>> Jim Bromer
>
>
>
> --
> Stefan Reich
> BotCompany.de // Java-based operating systems
