I would like to create an initial feasibility test, using text-based IO, that 
would show the potential for intelligence across a broad range of subject 
matters (within that IO modality). I am not worrying about writing something 
that would be scalable to adult human-level AGI.
 
I believe that there has been something missing in AI/AGI. Someone needs to 
show how one might create a good base for intelligently acquiring knowledge 
(i.e., using both rational and creative methods) which might be scaled up with 
some future computer system. The fact that narrow AI can be pushed beyond human 
capabilities in certain human games that once seemed to demand higher general 
reasoning is a little unexpected, and hard to understand without concluding 
that there must be some very basic AGI ideas that haven't been discovered yet.
 
I am having problems just working on the program, and another fundamental 
problem is that once a program gets a little beyond the mundane world of 
contemporary programming it can bog down quickly in complexity. In one of the 
few AI experiments that I actually tried, a reasonable plan for an analytical 
algorithm crunched to a stop simply because there was recursive complexity 
that I did not easily see. Even after becoming aware of it I had a difficult 
time getting around it. And I wasn't even trying to make the algorithm 
exhaustive. As a result, I now know that recursive complexity is a serious 
problem lurking behind any problem that calls for some analysis or for broad 
searching.
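To make that kind of hidden blow-up concrete, here is a toy illustration of my 
own (not the actual algorithm from the experiment above): a simple two-way 
recursion whose call tree grows exponentially because the same subproblems are 
re-solved over and over, until they are cached.

```python
from functools import lru_cache

def naive_paths(r, c):
    """Monotone lattice paths from (0,0) to (r,c), by bare recursion.
    The same (r, c) subproblems are re-solved exponentially often."""
    if r == 0 or c == 0:
        return 1
    return naive_paths(r - 1, c) + naive_paths(r, c - 1)

@lru_cache(maxsize=None)
def memo_paths(r, c):
    """Same recursion, but caching each (r, c) result once collapses
    the exponential call tree to only r*c distinct subproblems."""
    if r == 0 or c == 0:
        return 1
    return memo_paths(r - 1, c) + memo_paths(r, c - 1)

# memo_paths(30, 30) returns instantly, while naive_paths(30, 30) would
# make on the order of C(60, 30) recursive calls before finishing.
```

The recursion looks perfectly reasonable on paper; the blow-up only shows 
itself when you run it, which is exactly the trap with analysis and broad 
searching.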
 
Relying on a well-established method that has been hammered out over a number 
of decades can help you achieve a sense of sophistication about the problem, 
but if that method is clearly not good enough, then more insights into the 
problem are key to making progress. I believe that if you study the problem 
seriously, come up with some creative ideas, and run enough experiments, you 
may solve some real problems even if you can't solve them all.
 
From that one trivial experiment I learned that being too careful is a form 
of carelessness in AGI. This is a surprising result. But, thinking about that 
result, I recognize that the solution to the problem of initial learning is 
obvious. So from one experiment I learned something valuable, and I am ready 
to take the next step.
 
Jim Bromer
 
From: [email protected]
Date: Tue, 16 Jul 2013 18:02:29 +0200
Subject: Re: [agi] A Very Simple AGI Project
To: [email protected]


On Tue, Jul 16, 2013 at 4:52 AM, Jim Bromer <[email protected]> wrote:


 a simple initial feasibility test may be designed around this format as a 
means of designing a way for a computer program to learn direction from a human 
user so that it can further discover interesting ways to acquire structured 
knowledge



Jim,
We have nothing against intentions to experiment, even though we slightly 
prefer the results of experiments over the intentions - not that I am not 
guilty of the same crime, mind you. Now, scalability is the number one risk of 
wannabe intelligent systems, and you are not going to escape the problem by 
avoiding the discussion or the formalization. I will kindly remind you that 
learning systems abound in the 50-year history of the field, even though I 
believe the right mix of ambition and resources was not there 99% of the time. 
Now, what kind of learning do you want to do? In my humble opinion it does not 
get any simpler than template learning or Bayesian learning. With the first you 
are more in "canned response" territory, in that you can save/remember your 
entire "lifestream" of inputs and then choose "rewarding" outputs based on 
similarity metrics (nearest neighbor, whatever) between your actual input and 
previously rewarded inputs. With Bayesian learning, which is not at all 
uncommon biologically despite taking mathematicians thousands of years to 
formulate, you can probably account for a slightly more dynamic world (like 
one in which "I am hungry" only gets you food half of the time, or once a 
day). I guess the Bayesian world would be driven by random number generators 
that regularly break away from canned responses. Both "solutions" are 
mathematically multidimensional, or rather dimensionally cursed: if the 
problem domain is not a toy, then the implied mathematical objects are pretty 
enormous.
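A minimal sketch of both ideas, assuming a text-in/text-out loop with a scalar 
reward signal (the class names and the crude string-similarity metric are my 
own choices for illustration, not anything proposed in this thread):

```python
from difflib import SequenceMatcher

class TemplateLearner:
    """Canned-response learning: remember every (input, output, reward)
    triple, and answer a new input with the output that was rewarded
    after the most similar remembered input (nearest neighbor)."""

    def __init__(self):
        self.lifestream = []  # list of (input_text, output_text, reward)

    def record(self, inp, out, reward):
        self.lifestream.append((inp, out, reward))

    def respond(self, inp):
        rewarded = [(i, o) for i, o, r in self.lifestream if r > 0]
        if not rewarded:
            return None
        # nearest neighbor under a crude string-similarity metric
        _, best_out = max(
            rewarded,
            key=lambda pair: SequenceMatcher(None, inp, pair[0]).ratio())
        return best_out

class BayesianLearner:
    """Beta-Bernoulli estimate of how often an action gets rewarded,
    for a world where "I am hungry" only brings food half the time."""

    def __init__(self):
        self.successes = {}
        self.trials = {}

    def record(self, action, rewarded):
        self.trials[action] = self.trials.get(action, 0) + 1
        if rewarded:
            self.successes[action] = self.successes.get(action, 0) + 1

    def p_reward(self, action):
        # posterior mean under a uniform Beta(1, 1) prior
        return (self.successes.get(action, 0) + 1) / \
               (self.trials.get(action, 0) + 2)
```

The dimensional curse shows up immediately: the lifestream grows without bound 
and the similarity comparisons grow with it, which is the "pretty enormous 
mathematical objects" problem in miniature.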

Do I think any such model can scale to human-level intelligence? Not really. 
Do I think it can provide a lifetime of hobbyist entertainment? I sure do. Is 
there learning of a third kind? I doubt it.


I will briefly restate my hunch: human-level intelligence will need a lot of 
real-world statistics that, more generally, enable loads of heuristics, all at 
the service of real-world simulations, possibly agent-based ones, that are 
scrutinized quickly and result in appropriate action.


AT



  
    
      


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
