OK. Let me give a systems engineer's perspective . . . .

I believe that many of the current systems reflect excellent, rigorous work at both the bottom-most and top-most levels of cognition.

The problem, I believe, is that these two levels are separated by two to five intermediate levels, and that no one is really even willing to acknowledge that those levels exist, are necessary, and will require *a lot* of work and learning to implement.

We are not going to get to human-level intelligence with low-level mechanisms and a scheduler. The low-level mechanisms are not going to miraculously figure out how to assemble knowledge into a usable, scalable foundation for more discovery and knowledge.

Most of the highly touted systems are actually just generic discovery systems with PLANS to extend into a complete cognitive system -- but nothing more. And most of them operate at a lower level (i.e. the data level rather than the knowledge level) than makes sense if you're truly trying to build a knowledge and cognitive system rather than a data-mining and discovery system.

Most of the rest (Cyc, etc.) operate at the highest conceptual level but are massive compendiums of bolt-ons with no internal consistency, rhyme, reason, or hope of being extendable by the system itself.

Almost all of the systems are starting out way too large rather than trying for a very small seed with rational mechanisms for growth and ways to cleanly add additional mechanisms.

Most of the systems have too much Not-Invented-Here syndrome and as a result are being leap-frogged by others who are intelligently using Commercial-Off-The-Shelf or Open-Source software.

Note: Most of these complaints do *NOT* apply to Texai (except possibly the two-to-five-levels complaint -- though Texai is actually starting at what I would call one of the middle levels and looks like it has reasonable plans for branching out).

Richard doesn't express his arguments in easy-to-understand terms . . . . but his core belief -- that we need more engineering to solve deep problems, and less hacking that quickly grabs low-hanging fruit and then stalls out -- definitely deserves more currency.


----- Original Message ----- From: "Benjamin Johnston" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Tuesday, May 06, 2008 4:36 AM
Subject: Re: [agi] AGI-08 videos



Richard Loosemore said:

But instead of deep-foundation topics like these, what do we get? Mostly what we get is hacks. People just want to dive right in and make quick assumptions about the answers to all of these issues, then they get hacking and build something - *anything* - to make it look as though they are getting somewhere.


I don't believe that this is accurate. Those who I speak to in this community (including the authors of papers at AGI-08 you claim have produced hacks) give me the clear impression that they spend every day considering deep-foundation issues. The systems that you see are not random hacks based on quick assumptions, but the by-products of people grappling with these deep-foundation issues.

This is certainly my experience. Every day I'm trying to grapple with deeper problems, but must admit that I'm unlikely to solve anything from my armchair. To create something that can be comprehended, critiqued and studied, I have to carefully reduce my ideas to a set of what may be almost laughable assumptions. However, once I've made such assumptions and implemented a system, I have a much better grasp on the problem at hand. From there, I can go back and explore ways of removing some of those assumptions, I can try to better model my ideas, and I can rethink the deeper issues with the knowledge I learnt from that experiment.

When I publish work on those concrete systems, I admit that I am not directly discussing deeper issues. However, I believe that this method makes communication much more effective and clear (I've tried both ways and have experienced remarkably more success in conveying my ideas with sloppy examples than with excellent arguments) and I believe that most readers can look beyond the annoying but necessary assumptions and see the deeper ideas that I am attempting to express.

As I work on the problem further, I'll create systems that are closer to my own ideas and may find ways of distilling my ideas into more formal treatments of the fundamental issues. I suspect that this experience is shared by most people here.

Ultimately, I think that any work in an area like AGI should be read with attention to the things left "between the lines". In fact, I think that expecting researchers to focus only on the fundamentals first is counterproductive: not only will you end up with a whole lot of hypothesizing with no connection to reality or experience, but you'll have a whole lot of talk and opinion but no understanding of each other.

-Ben
