RE: [agi] definition source?

2007-11-07 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote: From: BillK [mailto:[EMAIL PROTECTED]] This article might be useful. http://www.machineslikeus.com/cms/a-collection-of-definitions-of-intelligence.html A Collection of Definitions of Intelligence Sat, 06/30/2007 By Shane Legg and

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Jiri Jelinek
1).. 2).. 3).. It would be nice to have a place where such work could be listed. Right. I would keep it as a separate list. Its items could be: 1) [later] referenced from the AGI project list (possibly from sections 4,5,7,8,9,..). 2) nominated/suggested by authors (or automatically by the system

RE: [agi] definition source?

2007-11-07 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED]] I think all of these boil down to a simple equation with just a few variables. Anyone have it? It'd be nice if it included some sort of computational complexity energy expression in it. Yes. Intelligence is the expected reward in an
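The equation cut off above is almost certainly the universal intelligence measure from the Legg and Hutter article cited earlier in this thread. As an assumption about what the message goes on to say, a sketch of that measure:

```latex
% Universal intelligence of an agent \pi (Legg & Hutter):
% expected reward in environment \mu, summed over all computable
% environments E, weighted by the complexity prior 2^{-K(\mu)}.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here \(V^{\pi}_{\mu}\) denotes the expected cumulative reward the agent \(\pi\) earns in environment \(\mu\), and \(K(\mu)\) is the Kolmogorov complexity of \(\mu\), so simpler environments carry more weight. The "computational complexity" term the poster asks for does not appear explicitly; it enters only through the prior \(2^{-K(\mu)}\).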

RE: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Derek Zahn
A large number of individuals on this list are architecting an AGI solution (or part of one) in their spare time. I think that most of those efforts do not have meaningful answers to many of the questions, but rather intend to address AGI questions from a particular perspective. Would such

RE: [agi] definition source?

2007-11-07 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote: From: Matt Mahoney [mailto:[EMAIL PROTECTED]] I think all of these boil down to a simple equation with just a few variables. Anyone have it? It'd be nice if it included some sort of computational complexity energy expression in it. Yes.

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Linas Vepstas
On Wed, Nov 07, 2007 at 08:38:40AM -0700, Derek Zahn wrote: A large number of individuals on this list are architecting an AGI solution (or part of one) in their spare time. I think that most of those efforts do not have meaningful answers to many of the questions, but rather intend to

Re: [agi] Questions

2007-11-07 Thread Monika Krishan
On Nov 7, 2007 8:46 AM, Edward W. Porter [EMAIL PROTECTED] wrote: It is much easier to think how superhuman intelligences will outshine us in the performance arena, since all one has to do is take known human mental talents and extrapolate. It seems to me it is more difficult to understand

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Jiri Jelinek
Linas, revealing their weaknesses and strengths to the competition. Putting together functional AGI is a very difficult task. So far (more/less) failure after failure. Many critically important things are hidden in detailed design = sharing high-level info isn't in most cases as risky (from

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread YKY (Yan King Yin)
On 11/8/07, Linas Vepstas [EMAIL PROTECTED] wrote: And the serious contenders are a handful of small companies that seem unlikely to fill out a self-assessment status report card revealing their weaknesses and strengths to the competition. If you don't find some like-minded partners, you may not

RE: [agi] Questions

2007-11-07 Thread Edward W. Porter
Monika, It seems to me that all the examples you gave of new things are ones that could be handled with substantially more powerful versions of the type of thinking humans already have, and which machines like Novamente should be able to deliver. Ed Porter -Original Message- From: