It is important to try your ideas out, of course. Just trying stuff out
and seeing how it works is a fundamental and essential stage of
experimentation. However, I am saying that you have certain central ideas
which can also probably be tested in stages. And you want to be able to
examine the results of your experiments as they relate to your central
goals. So even though you might be able to add features incrementally, you
really want to test simplified models of your central ideas as those kinds
of tests become feasible. You might not have a precise plan, but you are
thinking about it in terms of the expectations that you have about the
program as it gets past stages of development where a number of important
features have been implemented. I am suggesting that you should try to keep
brief records of your expectations about the development of your project
just to keep yourself honest. And if you find that you have to keep kicking
your expectations down the road then one day it may be time to change
something in your plans. For instance, I have to follow through and just
try some of my ideas out this next year so I will be able to gather some
evidence that either supports their feasibility or suggests that they are
not feasible. However, I have already accepted the idea that AGI probably
requires a method to make logical complexity somewhat simpler, which
means that I don't have it all figured out (even if it turns out that my
ideas could be used with technology that will be available 50 years from
now).
For example, I believe that I will be able to teach my program a
rudimentary human language.  The evidence for this is that I will be able
to use the language to teach the program certain ideas using the terms of
that language.  However, that does not mean that those ideas will be
integrated in the way I think is necessary for actual intelligence.  So
this means that I believe I should be able to teach the program a
rudimentary human language whose sentences can act like instructions that
suggest relations between word-data-objects.  So right away I will have an
early feasibility test.  But, there is the possibility that if the
language turns out to be like an extremely simple database language then it
will not prove that I was actually able to teach it a rudimentary human
language.  But, because I was able to deduce that problem beforehand, I can
now add a characteristic to the language that I can test for (or at least
one which I would be able to distinguish during the testing).  The language
cannot just be a keyword computer language that has a unique definition of
relational meaning for the usages of each contextual term.  So the language
will have some ambiguities but even more to the point the 'database'
operations that are (learned to be) associated with the words, phrases and
anaphora-like-connected parts of sentences will possess different
relational and categorical implementations (defining how the
word-based-objects are related in the much simplified AGI database.)
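To make that testable characteristic concrete, here is a minimal sketch in Python. All the names and structures here are my own hypothetical stand-ins, not an implementation of the actual program: in a keyword database language each term maps to exactly one relational operation, while in even a rudimentary human-like language the same word can map to different relational or categorical implementations depending on context.

```python
# Illustrative sketch only: hypothetical names, not the actual program.

# In a keyword database language, each term has exactly one
# relational meaning, regardless of context.
KEYWORD_LANGUAGE = {
    "link": "create_relation",
    "group": "create_category",
}

# In even a rudimentary human-like language, a word can map to
# different relational or categorical implementations depending
# on its role in the sentence (the ambiguity I want to test for).
HUMANLIKE_LANGUAGE = {
    "group": {
        "as_verb": "create_category",    # "group the red blocks"
        "as_noun": "retrieve_category",  # "show me the group"
    },
}

def interpret(word, context):
    """Resolve a word to a database operation, using context if needed."""
    entry = HUMANLIKE_LANGUAGE.get(word)
    if entry is None:
        # Fall back to the unambiguous keyword meaning.
        return KEYWORD_LANGUAGE.get(word)
    # Crude context cue: a command position suggests a verb role.
    role = "as_verb" if context == "command" else "as_noun"
    return entry[role]

print(interpret("link", "command"))   # unique meaning: create_relation
print(interpret("group", "command"))  # context-dependent: create_category
print(interpret("group", "query"))    # same word, different role: retrieve_category
```

The point of the sketch is only the shape of the test: if every term behaves like "link", the language is a keyword database language; if some terms behave like "group", the distinguishing characteristic is present.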

So this means that I will be able to conduct a feasibility test of one of
my essential concepts before the program is more fully developed.  This is
important because I am not confident that the more insightful integrated
AGI is feasible, since it would be combinatorially complex.

So, I was able to take the idea of narrow AI and recognize that it would
be easier to 'teach' a computer (through incremental trial and error) to
acquire narrow rudimentary database instructions than it would be to teach
it a more human-like language.  Then I was able to take my conjectures
about conceptual roles and structures and apply them to the
characteristics that I believe would distinguish a rudimentary database
language from a rudimentary human-like language, and from there I can
create feasibility tests that would demonstrate minimal feasibility and
minimal effective feasibility.

My conjectures are now dependent on my presumption that I have discovered a
way to distinguish between a simplified database instruction language and a
highly simplified human-like language that could be used for database
instructions (to define the relations between word-based-objects).  My
presumption might be wrong.  Maybe my idea of a highly simplified
human-like language (one in which word-based-objects can play different
roles) is somehow flawed.  (For example, perhaps my most simple definition
of a human-like language is too simple.) But the fact that I have been able
to take this discussion somewhere where it has never been and the fact that
I should be able to make a minimal feasibility test of these theories would
add credence to the conjecture - if the tests were successful.  Then I
would have to continue to develop the program to see if I (or someone)
could create a more compelling test - before the program gets bogged
down by combinatorial complexity.

I have started to define my AGI model in ways that are somewhat novel and I
have developed early feasibility tests that would show minimal feasibility
and minimal effective feasibility.  If someone does not understand what I
am talking about I probably would not be able to convince him even if I
felt that these first two tests were successful.  However, there are some
enthusiasts who would be able to understand what I am talking about and my
experimental tests would be persuasive to some of them (if the tests were
successful). Some of the enthusiasts might be able to suggest secondary
effectiveness tests, even given the small window between minimal
feasibility of the concept and minimal effective feasibility - evidence
that the concept could be used to produce an AGI program.  This is a
description of a part of the
scientific method. It is a little difficult to describe minimal effective
feasibility now, but it would probably be easier when we have more to work
with. So, for now I have to emphasize a theoretical construct of conceptual
roles and structures (which unfortunately no one other than me really
accepts), and try to explain that this can be seen as something different
from an imagined minimal database instruction that could assign
simple relations between word-concept-objects and retrieve those related
objects in a way that would look almost insightful.
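For contrast, here is a minimal sketch of that imagined database instruction, again with hypothetical names of my own choosing: it just assigns simple relations between word-concept-objects and retrieves everything connected to them. Chained retrieval can look almost insightful while remaining purely mechanical, with none of the role ambiguity described above.

```python
# Hypothetical sketch, not the actual program: a bare relation store
# that assigns simple relations and retrieves connected objects.
from collections import defaultdict

relations = defaultdict(set)

def relate(a, b):
    """Assign a simple symmetric relation between two word-concept-objects."""
    relations[a].add(b)
    relations[b].add(a)

def related(start):
    """Retrieve everything transitively related to `start`."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for nbr in relations[node]:
            if nbr not in seen and nbr != start:
                seen.add(nbr)
                frontier.append(nbr)
    return seen

relate("sparrow", "bird")
relate("bird", "wings")
# Retrieval chains the stored relations, so asking about "sparrow"
# surfaces "wings" - superficially insightful, but purely mechanical.
print(sorted(related("sparrow")))  # ['bird', 'wings']
```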

Jim Bromer


