Have you tried any Case-Based Reasoning approaches?
If engineered correctly, the cases can be human-readable yet 
extensible, starting with simple cases and increasing complexity.
What data structures were your common templates attempting to embody? 
~PM

Date: Sun, 1 Dec 2013 10:12:18 -0500
Subject: Re: [agi] I guess I don't have AGI all figured out.
From: [email protected]
To: [email protected]

I wrote a program that used my own C++ templates that could then be 
instantiated with data types.  It did not work well because when debugging I 
could not keep track of what the code was doing.  For example, I might be using 
data in a coordinated fashion from three different arrays of data (along with 
other operations that used the same template) when I would want to step through 
the program to see what was going on.  Guess what happened?  When I would step 
through the template I often had no idea what it was doing because I kept 
losing track of which data it was working on.  One function would look through 
the common template and then it would call another which was tightly 
coordinated with a third and they would both go into the common template as 
well.  Then when some value was returned to the first that data would be used 
to look something else up.  Since I needed particular characteristics for the 
different data types that meant that I was constantly losing track of what was 
supposed to be happening.  And since I was actively designing and debugging the 
program, that meant that I sometimes made broad changes in the wrong 
algorithms. That was one reason why I gave up on the earlier version of my data 
management system.  So while the idea that an AGI program has to be able to 
evolve on its own, in ways particular to the situations that it faces, makes 
sense, the idea that this is also a good model for general programming, or for 
the programming that is going to serve as the basis for an AGI project, is 
dubious.  C++ templates are best used once the functions that they will be 
instantiated into have already been debugged.  Since it is reasonable to expect 
that an AGI program will take many years to figure out, since the only 
commonalities that could be templated would consist of subroutines of more 
complicated functions, and since it is reasonable to expect that you will need 
detailed individuation for the different uses of the data that go through the 
common templates, the idea that you can wait until something will just unwind 
itself is pretty naïve.  I
have taken the templates (of multiple functions) and repeated them for each 
kind of data that they will work with.  And I have gotten rid of the 
polymorphic overrides that I incorporated into my templates for individuation 
so it is much easier to follow what a specific algorithm is doing.  (I have 
some polymorphism but only for really simple basic stuff.  No templates, that 
is for sure.)
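The trade-off described above can be sketched in a few lines. This is a hypothetical illustration, not the actual data management code: a single template serves several element types, so a debugger shows the same function body no matter which of the coordinated arrays the data came from, while plainly named per-type functions repeat their bodies but leave no doubt about which data a breakpoint is handling.

```cpp
#include <string>
#include <vector>

// Generic version: one template covers every element type.  In a
// debugger, every instantiation looks like the same source lines.
template <typename T>
T pickLargest(const std::vector<T>& items) {
    T best = items.front();
    for (const T& item : items)
        if (best < item) best = item;
    return best;
}

// Individuated versions: one plainly named function per kind of data.
// The bodies repeat, but a breakpoint in largestWordCode can never be
// confused with one in largestSymbolName.
int largestWordCode(const std::vector<int>& wordCodes) {
    int best = wordCodes.front();
    for (int code : wordCodes)
        if (best < code) best = code;
    return best;
}

std::string largestSymbolName(const std::vector<std::string>& names) {
    std::string best = names.front();
    for (const std::string& name : names)
        if (best < name) best = name;
    return best;
}
```

The function names here are invented; the point is only that duplicating the bodies buys an unambiguous call stack at the cost of repeated code.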
 Steve said, "I suspect that if programmed intelligence is ever developed, it 
will start with something REALLY SIMPLE that is then successively modified and 
enhanced to be what we call intelligent. With this approach, each step is 
tractably doable. With a "design intelligence from scratch" approach, it 
appears to be obviously beyond human ability."
 This seems to be a concession that AGI is beyond your ability unless there was 
a REALLY SIMPLE algorithm that you could start with.  That is not how 
technological development works. It is as if Henry Ford had declared that 
industrial mechanization won't work until there is some really simple factory 
automation that would design and build an automobile by itself.  I know what 
you mean, but in effect you are simply acknowledging that you have no idea 
how to create AGI and that you are not going to bother trying until someone 
else makes it really simple for you.  I don't have it all figured out but I am 
not going to give up and wait for something Really Simple to happen.  It is 
true that advances are made when something that was difficult to do becomes 
simple.  So I am looking for more efficient techniques but I know that they are 
not going to be found without someone who is willing to do a lot of work 
looking for them.  (I am not faulting anyone for being an armchair AI 
philosopher; it is just that I want to actually try some of my ideas before I 
give up.)


On Thu, Nov 28, 2013 at 12:26 PM, Steve Richfield <[email protected]> 
wrote:

Matt and Russell, two of my favorite participants here...


I had a bit of an aha moment that I would like to share.

WAY back in the early 1970s I was part of a team that built a time sharing 
system that served many businesses and schools in the Seattle area - including 
Lakeside School where Bill Gates and Paul Allen were students - and the rest of 
that is history.



What hasn't been realized is that this "mainframe," which Bill Gates publicly 
sought to "do less with less" when microprocessors became available, had only 
64K bytes and about the hardware performance of a Commodore 64, yet was doing 
as much as or more than systems with an order of magnitude more hardware. We had 
stumbled into a new principle that has been (nearly) lost to time, that I think 
could be usefully dusted off and applied to AGI and other challenging areas, 
and which may also explain the efficiency of biological development.



We built two interpreters, complete with Huffman-coded op codes. One paralleled 
metacompiler functionality for compilation, and the other extended 
stack-oriented Burroughs computer hardware structure. These were CAREFULLY 
designed so that NO add-on extensions would ever be needed, though we did 
add-in some additional capabilities before we finished. With each interpreter 
able to do everything that was needed in its domain AND NO MORE, there could be 
NO system-crashing bugs, malware, etc. After a couple years of development the 
system was live, and it NEVER CRASHED for the remaining two years, despite 
being experimentally beaten on by the students at several high schools, 
including Lakeside.



Today, programmers often repeat code several times rather than making a 
subroutine that contains the repeated code. We had no such luxury, as every 
byte was precious.

Aside from the kernel, the code for all operating system commands was written 
in the same high-level FORTRAN/ALGOL/BASIC all-in-one compiler that the 
customers used, and run on the same interpreter. Yes, we had a privileged mode 
for brain surgery, but that ONLY allowed crossing protected boundaries, and NOT 
doing entirely new things that users couldn't do.



Some customers complained about having only 8K of user space, without realizing 
that 8K of Huffman-coded logic was quite a bit of code. I wrote a chess playing 
program that ran on this system, and suggested that they first beat the program 
before complaining about the lack of space. "How complex is your application 
compared, say, with a chess playing program?" The program played a horrible 
defensive game that was guaranteed to lose, but it took the maximum amount of 
time to do so, and no one had the patience to sit around for the hours needed 
to beat it. Hence, it still stands as the only chess-playing program ever put 
up on a commercial computing system that never lost a game.



We did much of what we did because memory was SO tight. However, when it was 
all over, we realized that this was THE way to build large and highly reliable 
systems. Of course, this was all lost on Microsoft and everyone else who was 
part of the microprocessor revolution, from whom we STILL receive periodic 
security updates.



In a sense, this story is all about the power of the subroutine, and its even 
more powerful extension - interpreters. If you watch a fetus develop, you can 
see our reptilian ancestors being created and successively modified. DNA is 
clearly NOT a plan for how to build us, but rather, it is a plan for how to 
build a rudimentary bacterium, with successive modifications resulting in us. 
Much of our DNA is shared with wheat and other things that are VERY unlike us.



I suspect that if programmed intelligence is ever developed, it will start with 
something REALLY SIMPLE that is then successively modified and enhanced to be 
what we call intelligent. With this approach, each step is tractably doable. 
With a "design intelligence from scratch" approach, it appears to be obviously 
beyond human ability.



Steve
======================


On Sat, Nov 23, 2013 at 10:43 PM, Russell Wallace <[email protected]> 
wrote:


And that, mind you, is a very extreme lower bound. We don't know how strong 
anthropic selection was; for all we know, only one planet in 1e1000000 stumbled 
on an evolutionary landscape that allowed the development of intelligence. 
Certainly, every attempt at general evolutionary programming thus far, 
including mine, does _not_ encounter the gentle slope up Mount Improbable, as 
Dawkins put it, but rather a fairly neat doubling of required computation for 
every extra bit of output information.
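That doubling is what blind search predicts. As a toy illustration (not Russell's actual experiments), exhaustively searching for an n-bit target examines up to 2^n candidates, so each extra bit of required output doubles the worst-case work:

```cpp
#include <cstdint>

// Toy model of search without a "gentle slope": try n-bit candidates
// in order until one matches the target, counting fitness evaluations.
// The worst case is 2^n evaluations, doubling with every extra bit.
uint64_t trialsToFind(uint64_t target, int bits) {
    uint64_t limit = uint64_t(1) << bits;
    for (uint64_t candidate = 0; candidate < limit; ++candidate)
        if (candidate == target)
            return candidate + 1;   // candidates examined so far
    return limit;                   // unreachable for target < 2^bits
}
```

A fitness landscape with usable gradients would cut this to roughly linear in the number of bits; the reported doubling suggests the landscapes tried so far offer no such gradient.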




On Sun, Nov 24, 2013 at 3:21 AM, Matt Mahoney <[email protected]> wrote:



Don't feel bad. It took evolution 10^50 molecular operations over 3.5
billion years to figure it out.



On Sat, Nov 23, 2013 at 9:04 PM, Jim Bromer <[email protected]> wrote:

> Nearly 11 months ago I made the claim that I thought that I had AGI
> all figured out.  I did not make my claim to brag about my
> accomplishments but to demonstrate the inanity of such claims.  I do
> feel that I have it figured out, but that only means that I am at the
> next stage of a long process.  I said that if someone had it all
> figured out that he should be able to get a demo working within a
> year.  However, I then modified that statement because we do have to
> give researchers, even amateur researchers, especially amateur
> researchers, some leeway.  So then, if I had it all figured out, I
> really should be able to have some kind of demo in 2 years.  I also
> pointed out that if 5 months went by and I was still working on the
> basics and had not even started working on the AI part then that would
> be a pretty sure sign that I did not have it all figured out.
> Doubling that time to 10 months makes sense since we should give
> researchers some leeway.  And I also added that since things like
> illnesses or situations can interfere with your time, issues like
> that should also be taken into account.  And indeed, I had both
> illnesses and family situations that really interfered with my work.
> So anyway, taking that all into account, I do feel comfortable
> declaring that if I haven’t started working on the AI part by the end
> of this month then that is a pretty good sign that I did not (and
> presumably still do not) have AGI all figured out.  Well, it does not
> look too likely that I will start doing any AI stuff by the end of
> next week, so I really have to give up on the attitude that I have it
> all figured out.  I do have some good ideas, I am working hard on my
> program, and I will be able to start testing my AI ideas out early
> next year.  So I can say that I am testing my ideas out (I am working
> on the facility to test them out) and that I have some sophisticated
> ideas to work with.  But the attitude that I have it all figured out
> is not a reality and thankfully I do not waste much time with that
> kind of ego-driven delusion.
>
> I do have some feelings that other people have it wrong.  I don’t see
> any reason to deny that.  I have heard a lot of theories that were put
> forward without reasons or with insubstantial reasons (like some
> shallow ad hominem denouncement that is repeated ad nauseam) or with
> some cherry-picked reasoning that is always advanced without examining
> alternative views.  But that does not mean that I can realistically
> dismiss other people’s AGI theories. Since I don’t have it all figured
> out I have to keep an open mind when someone can talk about something
> that makes some kind of sense in programming terms.
> --
> Jim Bromer
>

> -------------------------------------------

> AGI

> Archives: https://www.listbox.com/member/archive/303/=now

> RSS Feed: https://www.listbox.com/member/archive/rss/303/3701026-786a0853

> Modify Your Subscription: https://www.listbox.com/member/?&;

> Powered by Listbox: http://www.listbox.com







-- Matt Mahoney, [email protected]











  
    
      


      
    
  




-- 
Full employment can be had with the stroke of a pen. Simply institute a six hour 
workday. That will easily create enough new jobs to bring back full employment.








  
    
      


      
    
  




-- 
Jim Bromer




  
    
      


