Once again, I am not saying that weighted methods cannot be used along
with my ideas.  And my ideas about Artificial Judgment and Artificial
Insight are not that complicated.  One reason I think a relatively
simple AGI program is feasible (the code for the data management does
look a little complicated) is that I believe Artificial Judgment can
be created in a simple way.  AJ uses different vantage points on a
subject (different ideas about the subject) to examine a point that is
being considered.  This can be tested with relatively simple
controlled examples - as long as it is done in the right way.  (I do,
however, think it takes a great deal of knowledge to truly understand
even one simple idea.)
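
Here is a minimal sketch (in Python) of what one of those controlled
examples could look like.  The claim fields, the three vantage-point
functions, and the verdict rule are just placeholders I am inventing
for the illustration; they are not the actual design.

def vantage_cost(claim):
    # One vantage point: does the claim fit within the stated budget?
    return "doubt" if claim.get("cost", 0) > claim.get("budget", 0) else "support"

def vantage_history(claim):
    # Another vantage point: how did similar past cases turn out?
    past = claim.get("similar_past_outcomes", [])
    if not past:
        return "unknown"
    return "support" if sum(past) / len(past) > 0.5 else "doubt"

def vantage_goal(claim):
    # A third vantage point: does the claim advance the current goal?
    return "support" if claim.get("advances_goal", False) else "doubt"

def judge(claim, vantage_points):
    # Judgment here is not one weighted score; it records which vantage
    # points agree, which conflict, and which are silent.
    views = {vp.__name__: vp(claim) for vp in vantage_points}
    support = [name for name, v in views.items() if v == "support"]
    doubt = [name for name, v in views.items() if v == "doubt"]
    if support and not doubt:
        return "accept", views
    if doubt and not support:
        return "reject", views
    return "examine further", views   # conflicting or missing vantage points

claim = {"cost": 5, "budget": 10, "similar_past_outcomes": [1, 0, 1],
         "advances_goal": True}
print(judge(claim, [vantage_cost, vantage_history, vantage_goal]))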

A Bayesian Network could exhibit artificial judgment (at the higher
level I am thinking of) if it were designed to work with different
Conceptual Types.  But as long as it is used to come to a series of
simple conclusions (like single decisions) that do not take conceptual
types (or something very similar to my ideas about them) into
consideration, it is not going to be capable of intelligence.  I am
not talking about some special coding or anything like that; I am
saying that an AGI program has to work with something very similar to
my Conceptual Types in order to demonstrate greater 'insight'.  So
yes, you could use weighted reasoning, and yes, a series of simple
decision processes could be used to create genuine intelligence, but
only if the ongoing "decision" processes were able to interact with
the accumulating global data relevant to the problem, and that data
included (something like) Conceptual Typing.  But there is no good
reason to try to do it the hard way when there is an easier way to go.
Jim Bromer
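
To make the contrast concrete, here is a rough sketch of a series of
simple weighted decisions that also read and write an accumulating
pool of typed global data, instead of passing only a narrow score
forward to the next step.  The type names, the toy weights, and the
adjustment rule are placeholders for the sake of illustration.

global_data = []   # accumulating data relevant to the problem, each item typed

def note(content, ctype):
    # Add a typed item to the shared pool.
    global_data.append({"content": content, "type": ctype})

def weighted_decision(options, weights, relevant_types):
    # An ordinary weighted choice, except that it also consults any
    # global data whose conceptual type is relevant to this step.
    context = [d for d in global_data if d["type"] in relevant_types]
    scores = {}
    for opt in options:
        score = weights.get(opt, 0.0)
        for d in context:
            if opt in d["content"]:
                # Crude illustrative rule: an 'exception' that mentions
                # the option counts against it, other typed mentions
                # count for it.
                score += -0.2 if d["type"] == "exception" else 0.1
        scores[opt] = score
    choice = max(scores, key=scores.get)
    note("chose " + choice, "decision-history")   # the decision itself becomes data
    return choice

note("route B is shorter", "comparison")
note("route A was closed for road work last week", "exception")
first = weighted_decision(["route A", "route B"],
                          {"route A": 0.5, "route B": 0.5},
                          relevant_types={"comparison", "exception"})
second = weighted_decision(["leave early", "leave late"],
                           {"leave early": 0.4, "leave late": 0.5},
                           relevant_types={"decision-history"})
print(first, second)

The point is only the shape of the interaction: each "simple" decision
can see the typed data that earlier steps left behind.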

On Tue, Sep 3, 2013 at 7:57 AM, Jim Bromer <jimbro...@gmail.com> wrote:
> The challenge to get my own data management system working is not a
> waste of time.  One of the major issues with getting a feasible AGI
> program to work is working out the 'conceptual' management problems.
> However, AGI conceptual management cannot be done in the same way
> that more conventional data management can be.  One of my complaints
> about the exaggerated efficacy of weight-based reasoning is that the
> weights are used as simplifications, with a predictable localization
> of complexity that is then narrowly fed as an input to the next step
> of some process.  My opinion is that AI has failed because the method
> of heavily reducing global interrelations and relying on small
> localizations in sequential bottleneck processing models tends to
> narrow the range of possible variations.  On the other hand, you
> cannot expect to automate a program which introduces hundreds of
> thousands of global interactions unchecked.  So part of the problem
> is to develop methods that would allow the program to examine
> possible interactions, but to do it in a way that would insightfully
> find and choose the better possibilities.  This reasoning implies
> that insight is necessary as part of the basic strategy of conceptual
> management; it is not just a product that can be expected to emerge
> from traditional programming models.  What I am saying is that an
> Artificial Insight Model has to be consciously developed by the
> programmer.  So while the Data Management methods can be created
> (should be created) by relying on the traditional programming model
> of localizing the complexity (the complications) as much as possible,
> the Conceptual Management methods have to rely on some kind of
> Artificial Judgment and Artificial Insight.  Jim Bromer
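
A very rough sketch of the kind of method I mean: generate the
candidate interactions, but examine each one from more than one angle
before letting it through, and keep only a small set of the better
possibilities.  The concept names and the two scoring functions are
placeholders invented for the illustration.

from itertools import combinations

concepts = ["traffic", "weather", "schedule", "fuel", "road work"]

def relevance(a, b):
    # Placeholder for a judged relevance between two concepts.
    related = {("traffic", "road work"), ("traffic", "schedule"),
               ("weather", "schedule")}
    return 1.0 if (a, b) in related or (b, a) in related else 0.1

def novelty(a, b, already_used):
    # A second angle: favor interactions that bring in something new.
    return 0.5 if a not in already_used or b not in already_used else 0.0

def choose_interactions(concepts, keep=3):
    # Examine every candidate pair, but only keep the few that look
    # best from both angles, rather than passing all of them along
    # unchecked.
    used = set()
    chosen = []
    candidates = list(combinations(concepts, 2))
    for _ in range(keep):
        best = max(candidates,
                   key=lambda pair: relevance(*pair)
                                    + novelty(*pair, already_used=used))
        chosen.append(best)
        candidates.remove(best)
        used.update(best)
    return chosen

print(choose_interactions(concepts))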

