Your prediction is about the accuracy of your self-model: how well you 
"understand" yourself and your own capabilities. Your prediction has nothing to 
do with testing the notion of "prediction" itself.  
Your writing here just externalizes your thoughts in order to make new 
inferences about them.  You're using this bulletin board as a notepad or 
journal for your own feedback.  Nothing illegal about that.  
You are in equilibrium. "A system of assimilation tends to feed itself." ~ 
Piaget
There is something to be said for ignoring what everyone else has written about 
a subject and coming up with your own ideas.  That's how paradigm shifts 
begin. 
~PM

Date: Tue, 2 Apr 2013 12:44:24 -0400
Subject: [agi] Re: Monthly Analysis of My Prediction That I Can Write an AGI 
Program Before 2015
From: [email protected]
To: [email protected]



My use of the prediction that I would be able to create a working model of my
theories by a certain time enabled me to create a series of predictions of
partial achievements, which I could use both as benchmarks for the development
and as seeds for an ongoing analysis of what went wrong.  I think the
reasoning behind how these predictions enabled me to create these benchmarks
should be familiar to anyone who has tried to finish a major undertaking
within a certain amount of time.

 

However, the question of why my latest theory seems to have given me greater
hope that I will be able to use incremental steps in the development of my AGI
project was a little hard to figure out at first.  My theory, to refine it a
little further, is that the ability to learn effective specializations is a
necessary requirement for the development of effective generalizations.  But
why has this particular theory given me the sense that it may lead to a way to
gradually develop my program, when my examination of previous efforts to
develop AGI seemed to suggest that gradual development was methodologically
unsound?  There are a number of important aspects to the theory.  First, it is
a good theory, although it might seem a little simplistic; I mean that it
makes a lot of sense.  Secondly, while people may feel that they have already
implicitly incorporated something like it into their own theories about AGI,
the fact that I highlighted it (in my own mind) is a step that is in some ways
similar to formalization.  It is a sensible theory, and (I feel that) it would
be an important part of a formalization of a theory of AGI.  For example, a
Neural Net enthusiast might claim that Neural Nets are able to incorporate
both specializations and generalizations, but my criticism would be that since
this process is locked within the complex workings of the Neural Net itself,
its implicitness does not make it readily available to the programmer.
Because I am more interested in using discernible specializations and
generalizations, and because I recognized that these kinds of processes are
mutually significant and that one of the challenges in AGI is the achievement
of greater generalization, my appreciation of the theory provided me with a
new means to break the program down into more fundamental parts.  And since I
knew that I could write a program that would let me personally define the
nature of specializations and of generalizations (in a partially automated
program), I realized that I could test different ideas in a simple
progression.  So when I realized that I could try applying simulations of
learned specializations and generalizations, I realized that I could test
different parts of the theory without having to fully develop the program.
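To make the idea of testing parts in a simple progression concrete, here is a
minimal sketch of what "hand-defined specializations combined into a
discernible generalization" might look like.  All names and structure here are
my own illustration, not anything from the actual project: a specialization is
a narrow hand-written test, and a generalization is built by combining them,
so each part can be benchmarked on its own before any larger program exists.

```python
# Hypothetical sketch: hand-defined "specializations" are narrow tests,
# and a "generalization" is a combination of them that covers more cases.
# None of these names come from the original project.

def spec_is_even(n):
    """A narrow, hand-defined specialization."""
    return n % 2 == 0

def spec_is_small(n):
    """Another narrow specialization: small magnitude."""
    return abs(n) < 10

def generalize(specs):
    """Combine specializations into a broader test (here: any-of)."""
    return lambda n: any(spec(n) for spec in specs)

# Test each specialization on its own first...
assert spec_is_even(4) and not spec_is_even(7)
assert spec_is_small(3) and not spec_is_small(42)

# ...then test the generalization built from them -- an incremental
# progression of partial benchmarks rather than one monolithic program.
general = generalize([spec_is_even, spec_is_small])
assert general(4)        # passes the "even" specialization
assert general(7)        # passes the "small" specialization
assert not general(13)   # passes neither, so the generalization rejects it
```

Because each piece is discernible to the programmer, a failure in the
combined test can be traced back to a specific specialization, which is the
kind of diagnosis that an implicit Neural-Net encoding would not permit.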

 

So there was something about my appreciation of the nature of thought-derived
generalizations that allowed me to develop this new theory.  And there was
something about my appreciation of the theory that allowed me to break the
AGI problem down in a somewhat novel way.  And because I realized that I could
use these parts as I chose in an ongoing development of the program, I
realized that I could use different strategies to develop and test variations
in a controlled way.  But the development of these ideas will not go smoothly
if the theory is not a good one.

 

This analysis gives me some more insight into how problems may be effectively
broken into smaller pieces even when previous efforts to do this have run
into obstacles.

 

Jim Bromer







-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
