I lost my cool over some comments Steve wrote in a message thread that
I started recently. He began by reporting some problems with a
program that he wrote 20 or 30 years ago. That was cool because it
gave me a chance to use his experimental results to examine my own
ideas. I gained something from his effort, and I was able to sharpen
my reply in response to his comments. My argument is reasonable and I
will state it below. But then, instead of thinking about my response,
his next move was to declare that my ideas were like Arthur
Miller's ideas, and Arthur got a lot of criticism in this group. Well, I
don't know how Arthur's ideas have progressed, but I don't ever
remember Arthur talking about what I am talking about, and Steve did
not provide any substance to demonstrate that my ideas (especially the
idea that I was proposing in the message thread) were just like
Arthur's. So I lost my cool and pointed out that Steve's dismissal was
totally lacking in substance. His comments on his own experiences did
contain substance, but his dismissal, which implied that I was talking
about the same thing that Arthur had talked about, was shallow.

It is difficult to say how a substantial argument might be made. You
really need some way to validate that your argument is indeed
substantial. So it seems that the only way to prove that an argument
contains something of substance is to show that substantial arguments
can be made in support of it.

My argument, which I developed in the thread thanks to the more
substantial comments that I got, was this:

Text-based would-be-bases-for-AGI programs have not gotten much
traction. I believe that complexity is a major problem, but it should
be possible to show that such a program can get enough traction to
prove the concept beyond the capabilities of contemporary AI programs.
A major problem is that the program would need some way to process
direction from the user-teacher in order to be taught to look for
anaphoric-like connections and to disambiguate language.

However, if hard-edged language (or other hard-edged input) is used to
designate these directions, then the program will not need to be
creative enough to do the substantial learning that it will need to do
on its own. On the other hand, if the program is creative enough to
try lots of possibilities, then this creativity will tend to overwhelm
the program right away.

So my solution is to give the program the underlying capability to
recognize that some parts of a conversation might effectively contain
directions on how it can interpret other statements. Then, if the user
starts off by repeatedly using very simple sentential forms to input
this directive information, it should be possible for the program to
gain enough traction to produce some genuine learning. The program
will still need to learn the rudiments of natural language, but by
programming it with the underlying capability to interpret some
statements as directives, it should be able to start off. Of course
there is more to it, but this argument of mine is substantial,
reasonable, and I believe it is probably novel.
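To make the shape of the idea concrete, here is a minimal toy sketch,
under my own assumptions about one possible form it could take (the
class name, the "X means Y" directive form, and the word-substitution
interpretation step are all illustrative inventions, not part of the
original proposal): the program has a built-in capability to recognize
that certain very simple sentential forms are directives about how to
interpret other statements, and it applies what those directives teach
it when it hears ordinary statements later.

```python
import re

class DirectiveLearner:
    """Toy sketch: treats simple 'X means Y' sentences as directives
    that teach it how to interpret later statements."""

    def __init__(self):
        # Interpretation rules learned from directives: term -> meaning.
        self.rules = {}

    def hear(self, sentence):
        """If the sentence matches a simple directive form, learn from it
        (returning None); otherwise interpret it with the learned rules."""
        m = re.fullmatch(r"(\w+) means (\w+)", sentence.strip().lower())
        if m:
            # Directive: record how to interpret the term.
            self.rules[m.group(1)] = m.group(2)
            return None
        # Ordinary statement: rewrite known terms using the learned rules.
        words = sentence.strip().lower().split()
        return " ".join(self.rules.get(w, w) for w in words)

learner = DirectiveLearner()
learner.hear("pooch means dog")               # recognized as a directive
print(learner.hear("the pooch barked"))       # -> "the dog barked"
```

The point of the sketch is only the division of labor: the directive
form is simple enough to recognize without general language competence,
while everything learned through it can then be applied to input the
program could not have handled on its own.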
Jim Bromer


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424