RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Vladimir, I'm using "system" as a general word for a set and operator(s). You are understanding it correctly, except that "templates" is not quite right. The templates are actually a vast internal complex of structure which includes morphisms, which are like templates. But you are right it does
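
A minimal sketch of the "set and operator(s)" picture, assuming nothing about John's actual implementation: two small finite structures and a check that a candidate map between them preserves the operation, which is what a morphism (the template-like object above) must do. The carriers, operations, and map f below are invented purely for illustration.

    # Hypothetical illustration: a "system" as a set with one binary operator,
    # and a morphism (structure-preserving map) between two such systems.

    # System A: integers mod 4 under addition.
    A = {0, 1, 2, 3}
    def op_a(x, y):
        return (x + y) % 4

    # System B: integers mod 2 under addition.
    B = {0, 1}
    def op_b(x, y):
        return (x + y) % 2

    # Candidate morphism from A to B: reduce mod 2.
    def f(x):
        return x % 2

    # A morphism must satisfy f(op_a(x, y)) == op_b(f(x), f(y)) for all x, y in A.
    print(all(f(op_a(x, y)) == op_b(f(x), f(y)) for x in A for y in A))  # True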

RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Yeah, I'm not really agreeing with you here. Though I haven't really studied other cognitive software structures, I feel that they can be built simpler and more efficiently. But I shouldn't come out saying that unless I attack some of the details, right? But that's a gut reaction I have a

Re: [agi] An AGI Test/Prize

2007-10-22 Thread Richard Loosemore
Benjamin Goertzel wrote: On 10/22/07, *J Storrs Hall, PhD* <[EMAIL PROTECTED] > wrote: On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote: > ... but dynamic long-term memory, in my view, is a wildly > self-organizing mess, and would best be

RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
> Holy writhing Mandelbrot sets, Batman! > > Why real and non-division? I particularly don't like real -- my computer > can't > handle the precision :-) Robin - forget all this digital stuff it's a trap, we need some analog nano-computers to help fight these crispy impostors! John - This l

Re: [agi] An AGI Test/Prize

2007-10-22 Thread Benjamin Goertzel
On 10/22/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote: > > ... but dynamic long-term memory, in my view, is a wildly > > self-organizing mess, and would best be modeled algebraically as a > quadratic > > iteration over a high-d

Re: [agi] An AGI Test/Prize

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote: > ... but dynamic long-term memory, in my view, is a wildly > self-organizing mess, and would best be modeled algebraically as a quadratic > iteration over a high-dimensional real non-division algebra whose > multiplication table is ev
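
For readers wanting a concrete picture of the quoted proposal, here is a toy sketch, not Ben's actual model: the quadratic iteration Z -> Z*Z + C carried out over the 2x2 real matrices, a (merely 4-dimensional) real non-division algebra, with a fixed multiplication table rather than the evolving one the quote describes. The constant C and the iteration count are arbitrary choices.

    import numpy as np

    # Toy quadratic iteration Z -> Z @ Z + C over M2(R), the 2x2 real matrices:
    # a 4-dimensional real algebra with zero divisors, hence non-division.
    C = np.array([[0.1, -0.3],
                  [0.2,  0.05]])   # arbitrary "parameter" element, for illustration only
    Z = np.zeros((2, 2))           # start at the zero element, as in the Mandelbrot map z -> z^2 + c

    for step in range(20):
        Z = Z @ Z + C              # the quadratic step; unbounded norm growth signals divergence
        if np.linalg.norm(Z) > 1e6:
            print("diverged at step", step)
            break
    else:
        print("still bounded after 20 steps:\n", np.round(Z, 3))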

Re: [agi] An AGI Test/Prize

2007-10-22 Thread Benjamin Goertzel
Hmmm... I have a feeling that real cognitive structures are by and large way messier than the abstract algebras dealt with in research papers, though... so I'm not sure how far the algebras handled in the research literature will get you, in terms of approaching AGI... Perception and motorics wil

RE: [agi] An AGI Test/Prize

2007-10-22 Thread John G. Rose
Ben, That is sort of a neat kind of device. Will have to think about that, as it is fairly dynamic. I may have to look that one up and potentially experiment on it. The kinds of algebraic structures I'm talking about basically are as many as possible. Also things like sets w/o operators, and

Re: [agi] An AGI Test/Prize

2007-10-21 Thread Benjamin Goertzel
John Rose, As a long-lapsed mathematician, I'm curious about your system, but what you've said about it so far doesn't really tell me much... Do you have a mathematical description of your system? I did some theoretical work years ago representing complex systems dynamics in terms of abstract al

Re: [agi] An AGI Test/Prize

2007-10-21 Thread Vladimir Nesov
On 10/21/07, John G. Rose <[EMAIL PROTECTED]> wrote: > > Vladimir, > > > > That may very well be the case and something that I'm unaware of. The > system I have in mind basically has I/O that is algebraic structures. > Everything that it deals with is modeled this way. Any sort of system that > it

RE: [agi] An AGI Test/Prize

2007-10-21 Thread Edward W. Porter
l Message- From: John G. Rose [mailto:[EMAIL PROTECTED] Sent: Saturday, October 20, 2007 10:58 PM To: agi@v2.listbox.com Subject: RE: [agi] An AGI Test/Prize Edward, Oops missed that - CA (cellular automata) is something that some other people on the list could really enlighten you on a

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
. I suppose I need to elaborate - hey wait a sec how did I become the theorist on all this crap? heh John From: Edward W. Porter [mailto:[EMAIL PROTECTED] Subject: RE: [agi] An AGI Test/Prize So, do you or don't you model uncertainty, contradictory evidence, degree of similarity

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
94-1822 [EMAIL PROTECTED] -Original Message- From: John G. Rose [mailto:[EMAIL PROTECTED] Sent: Saturday, October 20, 2007 10:39 PM To: agi@v2.listbox.com Subject: RE: [agi] An AGI Test/Prize Hi Edward, I don’t see any problems dealing with either discrete or continuous. In fact in s

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Hi Edward, I don't see any problems dealing with either discrete or continuous. In fact in some ways it'd be nice to eliminate discrete and just operate in continuous mode. But discrete maps very well onto binary computers. Continuous is just a lot of discrete, the density depending on resource

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
P.S. Re “CA”: maybe I am well versed in them but I don’t know what the acronym stands for. If it wouldn’t be too much trouble could you please educate me on the subject? -Original Message- From: John G. Rose [mailto:[EMAIL PROTECTED] Sent: Saturday, October 20, 2007 8:11 PM To: agi@

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Hi Edward, Haven't figured out how to get rid of the HTML line at the side in Outlook so I'll reply at the top here. Our heads are doing pure and absolute; they are busily cranking away at it. Our heads are also an "instance" and a subset of a pure and absolute. Gosh I suppose I have to

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Vladimir, That may very well be the case and something that I'm unaware of. The system I have in mind basically has I/O that is algebraic structures. Everything that it deals with is modeled this way. Any sort of system that it analyzes it converts to a particular structure that represents the
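
Purely as a guess at what "converts to a particular structure" could look like in practice (John has not given details), here is a sketch that records an observed binary interaction on a small carrier set as an operation table and then classifies it by which algebraic laws hold. The carrier, the stand-in data, and the classification ladder are hypothetical.

    from itertools import product

    # Hypothetical sketch: tabulate an observed binary interaction and classify the structure.
    carrier = [0, 1, 2]
    table = {(x, y): (x + y) % 3 for x, y in product(carrier, repeat=2)}  # stand-in "observed" data

    closed = all(table[x, y] in carrier for x, y in table)
    associative = all(table[table[x, y], z] == table[x, table[y, z]]
                      for x, y, z in product(carrier, repeat=3))
    identity = next((e for e in carrier
                     if all(table[e, x] == x and table[x, e] == x for x in carrier)), None)

    if not closed:
        kind = "not closed: no algebraic structure on this carrier"
    elif not associative:
        kind = "magma"
    elif identity is None:
        kind = "semigroup"
    else:
        kind = "monoid"
    print(kind, "with identity", identity)  # here: monoid with identity 0 (in fact a group)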

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Vladimir Nesov
On 10/21/07, John G. Rose <[EMAIL PROTECTED]> wrote: > > http://en.wikipedia.org/wiki/Algebraic_structure > > > > http://en.wikipedia.org/wiki/Cellular_automata > > > > Start reading…. > John, It doesn't really help in understanding how a system described by such terms is related to implementation

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
http://en.wikipedia.org/wiki/Algebraic_structure http://en.wikipedia.org/wiki/Cellular_automata Start reading.. John From: Edward W. Porter [mailto:[EMAIL PROTECTED] John, "[A]bstract algebra based engine" that's "basically an algebraic structure pump" sounds really ex
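
Since CA (cellular automata) comes up several times in this thread, a minimal elementary cellular automaton of the kind the linked Wikipedia article opens with fits in a few lines. The rule number, grid width, and wrap-around boundary below are arbitrary illustrative choices.

    # Minimal elementary cellular automaton (one-dimensional, two states, radius-1 neighborhood).
    rule = 110                       # arbitrary choice; rule 110 is a well-known Turing-complete rule
    width, steps = 63, 30
    cells = [0] * width
    cells[width // 2] = 1            # single live cell in the middle

    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        # New state of cell i is bit (4*left + 2*center + right) of the rule number.
        cells = [(rule >> ((cells[(i - 1) % width] << 2) |
                           (cells[i] << 1) |
                            cells[(i + 1) % width])) & 1
                 for i in range(width)]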

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
I workshop... http://www.agiri.org/forum/index.php?act=ST&f=21&t=23 -- Ben -Original Message- > *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED] > *Sent:* Saturday, October 20, 2007 5:24 PM > *To:* agi@v2.listbox.com > *Subject:* Re: [agi] An AGI Test/Prize > >

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
min Goertzel [mailto:[EMAIL PROTECTED] Sent: Saturday, October 20, 2007 5:24 PM To: agi@v2.listbox.com Subject: Re: [agi] An AGI Test/Prize Ah, gotcha... The recent book "Advances in Artificial General Intelligence" gives a bunch more detail than those, actually (though not as much of the

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
rday, October 20, 2007 4:44 PM To: agi@v2.listbox.com Subject: RE: [agi] An AGI Test/Prize Well I’m neck deep in 55,000 semi-colons of code in this AI app I’m building and need to get this bastich out the do’ and it’s probably going to grow to 80,000 before version 1.0. But at some point it need

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
oks is quite helpful. > > Ed Porter > > -Original Message- > *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED] > *Sent:* Saturday, October 20, 2007 4:01 PM > *To:* agi@v2.listbox.com > *Subject:* Re: [agi] An AGI Test/Prize > > > On 10/20/07, Edwar

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Well I'm neck deep in 55,000 semi-colons of code in this AI app I'm building and need to get this bastich out the do' and it's probably going to grow to 80,000 before version 1.0. But at some point it needs to grow a brain. Yes, I have had my AGI design in mind since the late 90's and have been watching what

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
min Goertzel [mailto:[EMAIL PROTECTED] Sent: Saturday, October 20, 2007 4:01 PM To: agi@v2.listbox.com Subject: Re: [agi] An AGI Test/Prize On 10/20/07, Edward W. Porter <[EMAIL PROTECTED]> wrote: John, So rather than a definition of intelligence you want a recipe for how to make a one?

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Benjamin Goertzel
r, NH 03833 > (617) 494-1722 > Fax (617) 494-1822 > [EMAIL PROTECTED] > > -Original Message- > *From:* John G. Rose [mailto:[EMAIL PROTECTED] > *Sent:* Saturday, October 20, 2007 3:16 PM > *To:* agi@v2.listbox.com > *Subject:* RE: [agi] An AGI Test/Prize > >

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
idge S12 Exeter, NH 03833 (617) 494-1722 Fax (617) 494-1822 [EMAIL PROTECTED] -Original Message- From: John G. Rose [mailto:[EMAIL PROTECTED] Sent: Saturday, October 20, 2007 3:16 PM To: agi@v2.listbox.com Subject: RE: [agi] An AGI Test/Prize No you are not mundane. All these things on

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
No you are not mundane. All these things on the list (or most) are very much to be expected from a generally intelligent system or its derivatives. But I have this urge, being a software developer, to smash all these things up into their constituent components, partition commonalities, eliminate dup

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Robert Wensman
Regarding testing grounds for AGI: personally I feel that ordinary computer games could provide an excellent proving ground for the early stages of AGI, or maybe even better ones could be specially constructed. Computer games are usually designed specifically to encourage the player towards creativity

RE: [agi] An AGI Test/Prize

2007-10-20 Thread Edward W. Porter
tober 20, 2007 1:27 PM To: agi@v2.listbox.com Subject: RE: [agi] An AGI Test/Prize Interesting background on some thermodynamics history :). But basic definitions of intelligence, not talking about reinventing particle physics here, a basic, workable definition, not a rigorous mathemati

RE: [agi] An AGI Test/Prize

2007-10-20 Thread John G. Rose
Interesting background on some thermodynamics history :). But basic definitions of intelligence, not talking about reinventing particle physics here, a basic, workable definition, not a rigorous mathematical proof, just something simple. AI, AGI, c'mon, not asking for tooo much. In my mind it i

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Lukasz Stafiniak
Anyone willing to submit their AGI to the General Game Playing competition? Or, what do you think about Warren Smith's IQ test? Or, do you dismiss these as too challenging / too general? ;-) On 10/20/07, David McFadzean <[EMAIL PROTECTED]> wrote: > > True but how about testing intelligence by co

Re: [agi] An AGI Test/Prize

2007-10-20 Thread David McFadzean
On 10/19/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > http://www.vetta.org/documents/ui_benelearn.pdf > > Unfortunately the test is not computable. True but how about testing intelligence by comparing the performance of an agent across several computable environments (randomly-generated finite
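
For context, the measure in the linked paper (Legg and Hutter's universal intelligence) scores an agent \pi by its expected reward across all computable environments, weighted toward simpler ones, roughly

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where K(\mu) is the Kolmogorov complexity of environment \mu and V_\mu^\pi is the agent's expected total reward in \mu. K is uncomputable, which is why the exact test is not computable; sampling a finite batch of randomly generated computable environments, as suggested here, trades the ideal complexity weighting for something that can actually be run.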

Re: [agi] An AGI Test/Prize

2007-10-20 Thread Gabriel Recchia
Has anyone come across (or written) any papers that argue for particular low-level capabilities that any system capable of human-level intelligence must possess, and which posit particular tests for assessing whether a system possesses these prerequisites for intelligence? I'm looking for anythin

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
> > > I largely agree. It's worth pointing out that Carnot published > "Reflections on > the Motive Power of Fire" and established the science of thermodynamics > more > than a century after the first working steam engines were built. > > That said, I opine that an intuitive grasp of some of the im

Re: [agi] An AGI Test/Prize

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 03:32:46 pm, Benjamin Goertzel wrote: > ... my strong feeling is that we can progress straight to > powerful AGI right now without first having to do anything like > > -- define a useful, rigorous definition of intelligence > -- define a pragmatic IQ test for AGI's I lar

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
There are different routes to AGI. "Development is a very expensive and time-consuming way to find out what we don't know." Knowing what we are trying to create could potentially help to find easier ways of creating it. Could there be an easier way? Maybe a self-modifying codebase that ta

RE: [agi] An AGI Test/Prize

2007-10-19 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > I am trying to understand what intelligence is at its smallest definable > level, mathematically. What is the minimalistic intelligence machine? Are > there non intelligent entities that need to be combined to form > intelligence? What exactly is it?

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
> We may be misinterpreting each other. What I mean by learning being > necessary for intelligence is that a system that cannot learn is not > intelligent. Unless you posit some omnipotent, omniscient entity. Not > that a system must learn before it becomes intelligent. > > > What is the minimal i

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
's etc. -- Ben G On 10/19/07, William Pearson <[EMAIL PROTECTED]> wrote: > > On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote: > > > From: William Pearson [mailto:[EMAIL PROTECTED] > > > Subject: Re: [agi] An AGI Test/Prize > > > > > >

Re: [agi] An AGI Test/Prize

2007-10-19 Thread William Pearson
On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote: > > From: William Pearson [mailto:[EMAIL PROTECTED] > > Subject: Re: [agi] An AGI Test/Prize > > > > I do not think such things are possible. Any problem that we know > > about and can define, can be s

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
> From: William Pearson [mailto:[EMAIL PROTECTED] > Subject: Re: [agi] An AGI Test/Prize > > I do not think such things are possible. Any problem that we know > about and can define, can be solved with a giant look up table, or > more realistically, calculated by an unlearning
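
A throwaway sketch of the giant-look-up-table point: for any fully specified finite problem, the answers can be precomputed and stored, so solving it shows nothing about learning. The toy problem and the table below are invented for illustration.

    # Hypothetical illustration: a look-up-table "agent" for a fully specified finite problem.
    # Problem: given three bits, output their majority value.
    lookup = {(a, b, c): int(a + b + c >= 2)          # precomputed once, offline
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    def agent(observation):
        return lookup[observation]                    # no adaptation, no internal state, no learning

    print(agent((1, 0, 1)))  # 1 -- correct every time, yet nothing here had to be learned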

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
I don't know if an AGI level entity needs to even know that it exists. It could just know as much about everything except itself. Problems arise when it starts being concerned with its own survival. Self survival is very evolutionary as animals need to keep themselves alive to reproduce. We could t

Re: [agi] An AGI Test/Prize

2007-10-19 Thread William Pearson
On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote: > I think that there really needs to be more very specifically defined > quantitative measures of intelligence. If there were questions that could be > asked of an AGI that would require x units of intelligence to solve > otherwise they would b

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
Well, one problem is that the current mathematical definition of general intelligence is exactly that -- a definition of totally general intelligence, which is unachievable by any finite-resources AGI system... On the other hand, IQ tests and such measure domain-specific capabilities as much as gen

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Mike Tintner
John: >I think that there really needs to be more very specifically defined quantitative measures of intelligence. ...Other qualities like creativity and imagination would need to be measured in other ways. The only kind of intelligence you can measure with any precision is narrow AI - conv

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
I think that there really needs to be more very specifically defined quantitative measures of intelligence. If there were questions that could be asked of an AGI that would require x units of intelligence to solve, otherwise they would be unsolvable. I know that this is a hopeless foray on this list

Re: [agi] An AGI Test/Prize

2007-10-18 Thread Vladimir Nesov
I think an AGI test should fundamentally be a learning ability test. When there's a specified domain in which the system should demonstrate its competency (like 'chatting' or 'playing Go'), it's likely easier to write a narrow solution. If the system is not an RSI AI already, the resulting competency depends on qu

Re: [agi] An AGI Test/Prize

2007-10-18 Thread Russell Wallace
On 10/18/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > Hmmm... the storytelling direction is interesting. > > E.g., you could tell the first half of a story to the test-taker, and ask > them > to finish it... Or better, draw an animation of (both halves of) it.

Re: [agi] An AGI Test/Prize

2007-10-18 Thread Benjamin Goertzel
> > > > I guess, off the top of my head, the conversational equivalent might be a > Story Challenge - asking your AGI to tell some explanatory story about a > problem that had occurred to it recently (designated by the tester), and > then perhaps asking it to devise a solution. Just my first thou