Jim: [This illustrates one of the things wrong with the dreary instantiations of
the prevailing mindset of a group. It is only a matter of time until you
discover (through experiment) how absurd it is to celebrate the triumph of an
overly simplistic solution to a problem that is, by its very potential, full of
possibilities.]

To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject -
narrow AI. Looking for the one right prediction/explanation is narrow AI.
Being able to generate more and more possible explanations, which could all be
valid, is AGI. The former is rational, uniform thinking. The latter is
creative, polyform thinking. Or, if you prefer, it's convergent vs. divergent
thinking, the difference between which still seems to escape Dave & Ben & most
AGI-ers.

Consider a real-world application - a footballer, Maradona, is dribbling with
the ball. You don't and can't predict where he's going next; you have to be
ready for various directions, including the possibility that he is going to do
something surprising and new, even if you have to commit yourself to
anticipating a particular direction. Ditto if you're trying to predict the path
of a prey animal.

Dealing only with the "predictable," as most do, is perhaps what Jim is getting
at - predictable, and wrong for AGI. It's your capacity to deal with the open,
unpredictable, real world that signifies you are an AGI - not the same old
closed, predictable, artificial world. When will you have the courage to face
this?

Sent: Sunday, June 27, 2010 4:21 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


On Sun, Jun 27, 2010 at 1:31 AM, David Jones <davidher...@gmail.com> wrote:

  A method for comparing hypotheses in explanation-based reasoning: Here is a
simplified version of how we solve case study 1:
  The important hypotheses to consider are:
  1) The square from frame 1 of the video that has a very close position to the
square in frame 2 should be matched (we hypothesize that they are the same
square and that any difference in position is motion). So, in each pair of
frames of the video, we only match one square; the other square goes unmatched.
  2) We do the same thing as in hypothesis #1, but this time we also match the
remaining squares and hypothesize motion as follows: the first square jumps
over the second square from left to right. We hypothesize that this happens in
every frame of the video - square 2 stops and square 1 jumps over it, again and
again.
  3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

  So, why should we prefer the correct hypothesis, #3, over the other two?

  Well, first of all, #3 is correct because it has the most explanatory power
of the three and is also the simplest. Simpler is better because, with the
given evidence and information, there is no reason to prefer a more complicated
hypothesis such as #2.

  So, the answer to the question is that explanation #3 expects the most
observations, such as:
  1) It expects the consistent relative positions of the squares in each frame.
  2) It expects their new positions in each frame based on velocity
calculations.
  3) It expects both squares to occur in each frame.

  Explanation #1 ignores one square in each frame of the video because it can't
match it. Hypothesis #1 has no reason for why a new square appears in each
frame and why one disappears; it doesn't expect these observations. In fact,
explanation #1 doesn't expect anything that happens, because something new
happens in each frame, which never gives it a chance to confirm its hypotheses
in subsequent frames.

  The power of this method is immediately clear. It is general and it solves 
the problem very cleanly.
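
  To make the comparison concrete, here is a minimal Python sketch of the
scoring idea (the hypothesis names, complexity weights, and 10-pixel step are
illustrative assumptions, not the actual program): each hypothesis predicts
which observations it accounts for between consecutive frames, and we prefer
the hypothesis with the most confirmed expectations, breaking ties by
simplicity.

def expected_observations(hypothesis, prev, cur):
    """Return the current-frame positions (x coordinates) the hypothesis accounts for."""
    if hypothesis == "match_one_square":          # #1: match only the nearest square;
        nearest = min(cur, key=lambda x: abs(x - prev[0]))
        return [nearest]                          # the other square's appearance is unexplained
    if hypothesis == "leapfrog":                  # #2: square 1 jumps over square 2 each frame;
        return list(cur)                          # both squares matched, but the motion is complex
    if hypothesis == "move_in_unison":            # #3: both squares translate by the same shift
        shift = cur[0] - prev[0]
        return [x for x in cur if (x - shift) in prev]
    raise ValueError(hypothesis)

def score(hypothesis, frames, complexity):
    """Explanatory power first (confirmed expectations), then prefer the simpler hypothesis."""
    explained = sum(len(expected_observations(hypothesis, prev, cur))
                    for prev, cur in zip(frames, frames[1:]))
    return (explained, -complexity)

# Two squares, both moving right by 10 pixels per frame.
frames = [[0, 30], [10, 40], [20, 50], [30, 60]]
complexities = {"match_one_square": 1, "leapfrog": 3, "move_in_unison": 1}

best = max(complexities, key=lambda h: score(h, frames, complexities[h]))
print(best)  # -> move_in_unison (hypothesis #3)

  With these frames, hypothesis #2 accounts for the same observations as #3 but
pays a complexity penalty, and #1 leaves one square per frame unexplained, so
#3 wins.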
  Dave 


Nonsense. This illustrates one of the things wrong with the dreary
instantiations of the prevailing mindset of a group. It is only a matter of
time until you discover (through experiment) how absurd it is to celebrate the
triumph of an overly simplistic solution to a problem that is, by its very
potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in
'unison', I doubt the program calculated or represented those objects in
unison. I also doubt that their positioning was literally based on moving
'right', since their movements were presumably calculated with incremental
mathematics associated with screen positions. And, looking for a technicality
that represents the failure of overreliance on the efficacy of a simplistic
overgeneralization, I only have to point out that they did not only move to the
right, so your description was either wrong or only partially representative of
the apparent movement.

As long as the hypotheses are kept simple enough to eliminate the less useful
hypotheses, and the underlying causes of apparent relations are kept
irrelevant, oversimplification is a reasonable (and valuable) method. But if
you are seriously interested in scalability, then this kind of conclusion is
just dull.

I have often made the criticism that the theories put forward in these groups
are overly simplistic. Although I understand that this was just a simple
example, here is the key to determining whether a method is overly simplistic
(or, as in AIXI, based on an overly simplistic definition of insight): Would
this method work in discovering the possibilities of a potentially more complex
IO data environment like those we would expect to find using AGI?
Jim Bromer.