Re: [agi] universal logical form for natural language

2008-09-30 Thread YKY (Yan King Yin)
On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel [EMAIL PROTECTED] wrote: We are talking about 2 things: 1. Using an ad hoc parser to translate NL to logic 2. Using an AGI to parse NL. I'm not sure what you mean by "parse" in step 2. Sorry, to put it more accurately: #1 is using an ad hoc NLP
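
To make the contrast concrete: option #1 is the ad hoc, pattern-based end of the spectrum. Below is a minimal illustrative sketch of what such a translator might look like; the regex pattern, predicate format, and example sentences are my own assumptions, not anything proposed in the thread.

```python
# Toy illustration (not from the thread) of option #1: an ad hoc,
# pattern-based translation of simple English into logical terms.
# Real systems would use a proper parser; this only handles "X verbs Y".

import re

def sentence_to_logic(sentence: str) -> str:
    """Translate a trivial subject-verb-object sentence into a predicate."""
    match = re.match(r"^(\w+) (\w+)s (\w+)\.?$", sentence.strip())
    if not match:
        raise ValueError(f"cannot parse: {sentence!r}")
    subj, verb, obj = match.groups()
    return f"{verb}({subj.lower()}, {obj.lower()})"

if __name__ == "__main__":
    print(sentence_to_logic("John loves Mary"))   # -> love(john, mary)
    print(sentence_to_logic("Fido chases cats"))  # -> chase(fido, cats)
```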

Re: [agi] universal logical form for natural language

2008-09-30 Thread YKY (Yan King Yin)
On Tue, Sep 30, 2008 at 12:50 PM, Linas Vepstas [EMAIL PROTECTED] wrote: I'm planning to make the project open source, but I want to have a web site that keeps a record of contributors' contributions. So that's taking some extra time. Most wikis automatically keep track of who made what

Re: [agi] universal logical form for natural language

2008-09-30 Thread Bryan Bishop
On Tuesday 30 September 2008, YKY (Yan King Yin) wrote: Yeah, and I'm designing a voting system of virtual credits for working collaboratively on the project... Write a plugin to cvs, svn, git, or some other. - Bryan http://heybryan.org/ Engineers:
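
Bryan's point is that the credit record can be derived from the version-control history itself rather than maintained by hand. As a rough illustration only (my own sketch, assuming a local git checkout; the script and its output format are not from the thread), per-author commit counts can be pulled straight from git log:

```python
# Rough sketch of deriving per-contributor "virtual credit" counts from
# version-control history (here git) instead of tracking them manually.

import subprocess
from collections import Counter

def commit_counts(repo_path: str = ".") -> Counter:
    """Count commits per author using `git log`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        check=True, capture_output=True, text=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

if __name__ == "__main__":
    for author, n in commit_counts().most_common():
        print(f"{n:5d}  {author}")
```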

Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:58 AM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel [EMAIL PROTECTED] wrote: We are talking about 2 things: 1. Using an ad hoc parser to translate NL to logic 2. Using an AGI to parse NL I'm not sure what you

Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
Markov chains are one way of doing the math for spreading activation, but e.g. neural nets are another... On Tue, Sep 30, 2008 at 1:23 AM, Linas Vepstas [EMAIL PROTECTED] wrote: 2008/9/29 Ben Goertzel [EMAIL PROTECTED]: Stephen, Yes, I think your spreading-activation approach makes sense
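
For readers unfamiliar with the term, spreading activation can indeed be written in Markov-chain style: activation at each node flows to its neighbours in proportion to normalized edge weights, with some decay per step. A minimal illustrative sketch follows; the graph, decay factor, and function names are my own assumptions, not OpenCog's actual implementation.

```python
# Minimal sketch of spreading activation viewed as a Markov chain:
# activation flows along edges in proportion to row-normalized weights,
# with a decay factor applied at each step.

from collections import defaultdict

def spread_activation(edges, seed, steps=3, decay=0.8):
    """edges: dict node -> {neighbor: weight}; seed: dict node -> activation."""
    activation = dict(seed)
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, act in activation.items():
            nbrs = edges.get(node, {})
            total = sum(nbrs.values())
            if total == 0:
                nxt[node] += act          # nowhere to go: keep activation
                continue
            for nbr, w in nbrs.items():
                nxt[nbr] += decay * act * (w / total)
        activation = dict(nxt)
    return activation

if __name__ == "__main__":
    graph = {"dog": {"cat": 1.0, "bone": 2.0}, "cat": {"dog": 1.0}}
    print(spread_activation(graph, {"dog": 1.0}, steps=2))
```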

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Terren Suydam
Hi Ben, If Richard Loosemore is half-right, how is he half-wrong? Terren --- On Mon, 9/29/08, Ben Goertzel [EMAIL PROTECTED] wrote: From: Ben Goertzel [EMAIL PROTECTED] Subject: Re: [agi] Dangerous Knowledge To: agi@v2.listbox.com Date: Monday, September 29, 2008, 6:50 PM I mean that a

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
I don't want to recapitulate that whole long tedious thread again!! However, a brief summary of my response to Loosemore's arguments is here: http://opencog.org/wiki/OpenCogPrime:FAQ#What_about_the_.22Complex_Systems_Problem.3F.22 (that FAQ is very incomplete, which is why it hasn't been

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben: the reason AGI is so hard has to do with Santa Fe Institute style complexity ... Intelligence is not fundamentally grounded in any particular mechanism but rather in emergent structures and dynamics that arise in certain complex systems coupled with their environments Characterizing what

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 12:45 PM, Mike Tintner [EMAIL PROTECTED] wrote: Ben: the reason AGI is so hard has to do with Santa Fe Institute style complexity ... Intelligence is not fundamentally grounded in any particular mechanism but rather in emergent structures and dynamics that arise in

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Terren Suydam
Right, was just looking for exactly that kind of summary, not to rehash anything! Thanks. Terren --- On Tue, 9/30/08, Ben Goertzel [EMAIL PROTECTED] wrote: From: Ben Goertzel [EMAIL PROTECTED] Subject: Re: [agi] Dangerous Knowledge To: agi@v2.listbox.com Date: Tuesday, September 30, 2008,

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Jim Bromer
From: Ben Goertzel [EMAIL PROTECTED] To give a brief answer to one of your questions: analogy is mathematically a matter of finding mappings that match certain constraints. The traditional AI approach to this would be to search the constrained space of mappings using some search heuristic. A
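
To make the "traditional AI" reading concrete: treat analogy as a search over object-to-object mappings between two relational structures, keeping only mappings that preserve the relations (the constraints). A toy sketch of that search follows; the relation format and the solar-system/atom example are illustrative assumptions of mine, not code from the thread.

```python
# Toy sketch of analogy as constrained mapping search: enumerate mappings
# between the objects of two relational structures and keep only those
# under which every source relation has a matching target relation.

from itertools import permutations

def analogical_mappings(source_rels, target_rels):
    """Relations are (name, a, b) triples; yields consistent object mappings."""
    src_objs = sorted({x for _, a, b in source_rels for x in (a, b)})
    tgt_objs = sorted({x for _, a, b in target_rels for x in (a, b)})
    target_set = set(target_rels)
    for perm in permutations(tgt_objs, len(src_objs)):
        mapping = dict(zip(src_objs, perm))
        if all((r, mapping[a], mapping[b]) in target_set
               for r, a, b in source_rels):
            yield mapping

if __name__ == "__main__":
    solar = [("orbits", "planet", "sun")]
    atom = [("orbits", "electron", "nucleus")]
    print(list(analogical_mappings(solar, atom)))
    # -> [{'planet': 'electron', 'sun': 'nucleus'}]
```

A real search heuristic would prune this space rather than enumerate it exhaustively, but the structure of the problem is the same.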

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:08 PM, Jim Bromer [EMAIL PROTECTED] wrote: From: Ben Goertzel [EMAIL PROTECTED] To give a brief answer to one of your questions: analogy is mathematically a matter of finding mappings that match certain constraints. The traditional AI approach to this would be to

Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:43 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote: On Tue, Sep 30, 2008 at 3:38 PM, Ben Goertzel [EMAIL PROTECTED] wrote: Markov chains are one way of doing the math for spreading activation, but e.g. neural nets are another... But these are related things,

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
And if you look at your brief answer para, you will find that while you talk of mappings and constraints, (which are not necessarily AGI at all), you make no mention in any form of how complexity applies to the crossing of hitherto unconnected domains [or matrices, frames etc], which, of

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben: analogy is mathematically a matter of finding mappings that match certain constraints. The traditional AI approach to this would be to search the constrained space of mappings using some search heuristic. A complex systems approach is to embed the constraints into a dynamical system and
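
One concrete way to read "embed the constraints into a dynamical system": make each candidate correspondence a unit, let mutually consistent correspondences excite each other and rival ones inhibit each other, and iterate the dynamics until the activations settle on a coherent mapping. The toy sketch below is my own and only shows the style of solution; the bias values, update rule, and example are assumptions, not any specific published system.

```python
# Toy sketch of analogy via a settling dynamical system: hypothesis units
# excite consistent partners, inhibit rivals, and are nudged toward the
# fixed point favoured by the relational constraints.

def settle(units, bias, excite, inhibit, steps=100, rate=0.1):
    """units: names; bias: external support per unit; excite/inhibit: pair sets."""
    act = {u: 0.1 for u in units}
    paired = lambda a, b, s: (a, b) in s or (b, a) in s
    for _ in range(steps):
        new = {}
        for u in units:
            net = bias.get(u, 0.0)
            net += sum(act[v] for v in units if paired(u, v, excite))
            net -= sum(act[v] for v in units if paired(u, v, inhibit))
            new[u] = min(1.0, max(0.0, act[u] + rate * net))
        act = new
    return act

if __name__ == "__main__":
    units = ["planet=electron", "sun=nucleus", "planet=nucleus", "sun=electron"]
    # orbits(planet, sun) matches orbits(electron, nucleus), so the first two
    # hypotheses support each other and get external bias; the one-to-one
    # constraint makes rival hypotheses for the same object inhibit each other.
    bias = {"planet=electron": 0.3, "sun=nucleus": 0.3}
    excite = {("planet=electron", "sun=nucleus")}
    inhibit = {("planet=electron", "planet=nucleus"),
               ("sun=nucleus", "sun=electron"),
               ("planet=electron", "sun=electron"),
               ("planet=nucleus", "sun=nucleus")}
    for u, a in sorted(settle(units, bias, excite, inhibit).items(),
                       key=lambda kv: -kv[1]):
        print(f"{a:.2f}  {u}")
```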

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
It doesn't have any application... My proof has two steps: 1) Hutter's paper "The Fastest and Shortest Algorithm for All Well-Defined Problems" http://www.hutter1.net/ai/pfastprg.htm 2) I can simulate Hutter's algorithm (or *any* algorithm) using an attractor neural net, e.g. via Mikhail Zak's

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Can't resist, Ben.. it is provable that complex systems methods can solve **any** analogy problem, given appropriate data Please indicate how your proof applies to the problem of developing an AGI machine. (I'll allow you to specify as much appropriate data as you like - any data, of

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben, Well, funny perhaps to some. But nothing to do with AGI - which has nothing to do with well-defined problems. The one algorithm or rule that can be counted on here is that AGI-ers won't deal with the problem of AGI - how to cross domains (in ill-defined, ill-structured problems). Applies

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 4:18 PM, Mike Tintner [EMAIL PROTECTED] wrote: Ben, Well, funny perhaps to some. But nothing to do with AGI - which has nothing to do with well-defined problems. I wonder if you are misunderstanding his use of terminology. How about the problem of gathering as much

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben, I must assume you are being genuine here - and don't perceive that you have not at any point illustrated how complexity might lead to the solution of any given general (domain-crossing) problem of AGI. Your OpenCog design also does not illustrate how it is to solve problems - how it

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Trent Waddington
On Wed, Oct 1, 2008 at 8:03 AM, Mike Tintner [EMAIL PROTECTED] wrote: Your OpenCog design also does not illustrate how it is to solve problems - how it is, for example, to solve the problems of concept, especially speculative concept, formation.

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
You have already provided one very suitable example of a general AGI problem - how is your pet, having learnt one domain - to play fetch - to use that knowledge to cross into another domain - to learn/discover the game of hide-and-seek? But I have repeatedly asked you to give me your

[agi] OpenCogPrime for Dummies [NOT]

2008-09-30 Thread Ben Goertzel
Without trying to be pejorative at all, it seems that the only real way for me to address a lot of the questions being asked here would be to write a sort of "OpenCogPrime for Dummies" [ ... note that the "... for Dummies" books are not actually written for dumb people; they just assume little