Re: [agi] Poll

2007-10-19 Thread Mike Tintner
Josh: People learn best when they receive simple, progressive, unambiguous instructions or examples. This is why young humans imprint on parent-figures, have heroes, and so forth -- heuristics to cut the clutter and reduce conflict of examples. An AGI that was trying to learn from the Internet f

Re: [agi] Human memory and number of synapses

2007-10-19 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > Say, each functional concept (a bit in the total amount of memory) is represented by R synapses and M neurons. When a certain pattern of concepts is observed, it creates a repeatable sequence of events. Say, a pattern is one concept being followed by an
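A back-of-envelope reading of the model sketched above: if each stored concept costs R synapses, usable memory scales as total synapses divided by R. A minimal sketch in Python; the synapse count and the values of R are illustrative assumptions, not figures from the post:

    # Rough capacity estimate under the model above: one stored concept
    # (one usable bit) costs R synapses, so bits ~= total_synapses / R.
    # TOTAL_SYNAPSES and the R values are assumed, not from the post.
    TOTAL_SYNAPSES = 1e14  # common order-of-magnitude estimate for a human brain

    for R in (1e2, 1e4, 1e6):  # assumed synapses per stored concept
        bits = TOTAL_SYNAPSES / R
        print(f"R = {R:.0e} synapses/concept -> ~{bits:.0e} bits")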

Re: [agi] evolution-like systems

2007-10-19 Thread William Pearson
On 20/10/2007, Robert Wensman <[EMAIL PROTECTED]> wrote: > It seems your question is stated on the meta-discussion level, since you ask for a reason why there are two different beliefs. > I can only answer for myself, but to me some form of evolutionary learning is essential to AGI. Actua

Re: [agi] evolution-like systems

2007-10-19 Thread Robert Wensman
It seems your question is stated on the meta-discussion level, since you ask for a reason why there are two different beliefs. I can only answer for myself, but to me some form of evolutionary learning is essential to AGI. Actually, I define intelligence to be "an eco-system of ideas that comp
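To make the "eco-system of competing ideas" framing concrete, here is a toy evolutionary loop: candidate "ideas" are bit-strings, fitness is agreement with a target, and fitter ideas crowd out weaker ones. The representation, fitness function, and rates are all illustrative assumptions, not part of Robert's definition:

    # Toy "ecosystem of ideas": selection plus mutation over bit-strings.
    # Everything here is an assumed stand-in for whatever "ideas" would be.
    import random
    random.seed(1)

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

    def fitness(idea):
        return sum(a == b for a, b in zip(idea, TARGET))

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                # the ideas that "win"
        children = [[bit ^ (random.random() < 0.1) for bit in parent]
                    for parent in survivors]       # mutated offspring
        population = survivors + children

    print(fitness(population[0]), "of", len(TARGET))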

Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 06:34:08 pm, Mike Tintner wrote: > In fact, there is an important, distinctive point here. AI/AGI machines may be "uncertain" (usually quantifiably so) about how to learn an activity. Humans are, to some extent, fundamentally "confused." We, typically, don't

Re: [agi] Human memory and number of synapses

2007-10-19 Thread Vladimir Nesov
Edward, I'm sorry for the obscurity of my message. I tried to omit some of the background that seemed irrelevant, but probably it isn't. I'll try to describe my point more systematically. I assume the following low-level model of brain operation (it's more about the terminology needed to communicate intu

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
> I largely agree. It's worth pointing out that Carnot published "Reflections on the Motive Power of Fire" and established the science of thermodynamics more than a century after the first working steam engines were built. That said, I opine that an intuitive grasp of some of the im

Re: [agi] Poll

2007-10-19 Thread Mike Tintner
Not quite sure why you responded so virulently. In fact, there is an important, distinctive point here. AI/AGI machines may be "uncertain" (usually quantifiably so) about how to learn an activity. Humans are, to some extent, fundamentally "confused." We, typically, don't just watch a

Re: [agi] An AGI Test/Prize

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 03:32:46 pm, Benjamin Goertzel wrote: > ... my strong feeling is that we can progress straight to powerful AGI right now without first having to do anything like -- define a useful, rigorous definition of intelligence -- define a pragmatic IQ test for AGI's I lar

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
There are different routes to AGI. "Development is a very expensive and time-consuming way to find out what we don't know." Knowing what we are trying to create could potentially help to find easier ways of creating it. Could there be an easier way? Maybe a self-modifying codebase that ta

RE: [agi] An AGI Test/Prize

2007-10-19 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > I am trying to understand what intelligence is at its smallest definable > level, mathematically. What is the minimalistic intelligence machine? Are > there non intelligent entities that need to be combined to form > intelligence? What exactly is it?

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
> We may be misinterpreting each other. What I mean by learning being necessary for intelligence is that a system that cannot learn is not intelligent. Unless you posit some omnipotent, omniscient entity. Not that a system must learn before it becomes intelligent. > > What is the minimal i

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
Just to reiterate my opinion: I think all this theorizing about definitions and tests and so forth is interesting, but not necessary at all for the creation of AGI. Any more than, say, the philosophy of quantum mechanics is necessary for building lasers or semiconductors or SQUIDS. Of course phil

Re: [agi] An AGI Test/Prize

2007-10-19 Thread William Pearson
On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote: > > From: William Pearson [mailto:[EMAIL PROTECTED] > > Subject: Re: [agi] An AGI Test/Prize > > I do not think such things are possible. Any problem that we know about and can define can be solved with a giant look-up table, or mo

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
> From: William Pearson [mailto:[EMAIL PROTECTED] > Subject: Re: [agi] An AGI Test/Prize > I do not think such things are possible. Any problem that we know about and can define can be solved with a giant look-up table, or more realistically, calculated by an unlearning TM. Unless you are o
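A minimal sketch of the look-up-table point being quoted here: for any problem whose finite input/output mapping can be enumerated in advance, an agent that just indexes a precomputed table solves it without anything resembling learning. The XOR table below is a toy stand-in; a real instance would simply be astronomically larger:

    # The "giant look-up table" argument in miniature: pure retrieval,
    # no generalization, no learning. XOR is an assumed toy example.
    LOOKUP_TABLE = {
        (0, 0): 0,
        (0, 1): 1,
        (1, 0): 1,
        (1, 1): 0,
    }

    def table_agent(observation):
        """Answer any in-table query by retrieval alone."""
        return LOOKUP_TABLE[observation]

    assert table_agent((1, 0)) == 1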

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
I don't know if an AGI-level entity even needs to know that it exists. It could just know as much about everything except itself. Problems arise when it starts being concerned with its own survival. Self-survival is deeply evolutionary, as animals need to keep themselves alive to reproduce. We could t

Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote: > Josh: An AGI needs to be able to watch someone doing something and produce a program such that it can now do the same thing. > Sounds neat and tidy. But that's not the way the human mind does it. A vacuous statement, since I state

Re: [agi] An AGI Test/Prize

2007-10-19 Thread William Pearson
On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote: > I think that there really needs to be more very specifically defined quantitative measures of intelligence. If there were questions that could be asked of an AGI that would require x units of intelligence to solve, otherwise they would b

Re: [agi] Poll

2007-10-19 Thread Mike Tintner
Josh: An AGI needs to be able to watch someone doing something and produce a program such that it can now do the same thing. Sounds neat and tidy. But that's not the way the human mind does it. We start from ignorance and confusion about how to perform any given skill/activity - and while we

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Benjamin Goertzel
Well, one problem is that the current mathematical definition of general intelligence is exactly that -- a definition of totally general intelligence, which is unachievable by any finite-resources AGI system... On the other hand, IQ tests and such measure domain-specific capabilities as much as gen
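For reference, the formal definition presumably being alluded to here (my assumption; the post does not name it) is the Legg-Hutter universal intelligence measure:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected cumulative reward of agent \pi in \mu. Because K is incomputable and the sum ranges over every computable environment, no finite-resources system can exactly attain or even evaluate the measure, which is the "totally general" problem noted above.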

Re: [agi] An AGI Test/Prize

2007-10-19 Thread Mike Tintner
John: > I think that there really needs to be more very specifically defined quantitative measures of intelligence. ...Other qualities like creativity and imagination would need to be measured in other ways. The only kind of intelligence you can measure with any precision is narrow AI - conv

RE: [agi] Poll

2007-10-19 Thread Edward W. Porter
Josh, Great post. Warrants being read multiple times. You said: JOSH>> I'm working on a formalism that unifies a very high-level programming language (whose own code is a basic datatype, as in lisp), spreading-activation semantic-net-like representational structures, and subsumption-style real-
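For readers unfamiliar with the spreading-activation part of the quoted formalism, here is a minimal sketch of the general technique over a toy semantic net. The graph, node names, decay factor, and step count are illustrative assumptions, not details of Josh's system:

    # Spreading activation over a toy semantic net: activation flows
    # outward from a seed node along weighted edges, decaying per hop.
    GRAPH = {                      # node -> weighted neighbors (assumed)
        "bird":   {"fly": 0.9, "animal": 0.8},
        "fly":    {"wing": 0.7},
        "animal": {"alive": 0.6},
        "wing":   {},
        "alive":  {},
    }

    def spread(seed, steps=2, decay=0.5):
        """Propagate activation from seed for a fixed number of steps."""
        activation = {seed: 1.0}
        for _ in range(steps):
            nxt = dict(activation)
            for node, act in activation.items():
                for neighbor, weight in GRAPH[node].items():
                    nxt[neighbor] = nxt.get(neighbor, 0.0) + act * weight * decay
            activation = nxt
        return activation

    print(spread("bird"))  # 'fly' and 'animal' light up strongly, 'wing' faintly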

RE: [agi] An AGI Test/Prize

2007-10-19 Thread John G. Rose
I think that there really needs to be more very specifically defined quantitative measures of intelligence. If there were questions that could be asked of an AGI that would require x units of intelligence to solve, otherwise they would be unsolvable. I know that this is a hopeless foray on this list

Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
In case anyone else is interested, here are my own responses to these questions. Thanks to all who answered ... > 1. What is the single biggest technical gap between current AI and AGI? (e.g. we need a way to do X or we just need more development of Y or we have the ideas, just need hardw

Re: [agi] evolution-like systems

2007-10-19 Thread Kingma, D.P.
Thanks for the information. A friend and I came up with the exact same idea for NetFlix, along with some tweaks, and we're having nice results, too. A shame that other people came up with the exact same idea :) I think that the main reason this idea is so intuitive is that you can imagine it

RE: [agi] Poll

2007-10-19 Thread Edward W. Porter
In response to Vladimir Nesov's Fri 10/19/2007 5:28 AM post. Nesov>> Edward, Nesov>> Does your estimate consider only the amount of information required for *representation*, or does it also include the additional processing elements required in a neural setting to implement learning? EWP>> The large numb

Re: [agi] evolution-like systems

2007-10-19 Thread David Orban
> the intuitiveness (or not) of evolution-like systems I gave a speech recently at the Life 2.0 Conference about the "Evolution of Objects" http://www.slideshare.net/davidorban/evolving-useful-objects-life-20-summit/ which touches a similar subject, in a different context. > How many have a model

[agi] evolution-like systems

2007-10-19 Thread J Storrs Hall, PhD
There's a really nice blog at http://karmatics.com/docs/evolution-and-wisdom-of-crowds.html talking about the intuitiveness (or not) of evolution-like systems (and a nice glimpse of his Netflix contest entry using a Kohonen-like map builder). Most of us here understand the value of a market or
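For anyone who hasn't met Kohonen maps, here is a bare-bones one-dimensional self-organizing map. It sketches the general technique the blog names, not the Netflix entry itself; the map size, learning rate, neighborhood radius, and random data are all assumed:

    # Minimal 1-D Kohonen self-organizing map: each sample pulls its
    # best-matching unit (and nearby units on the map) toward itself.
    import random
    random.seed(0)

    UNITS, DIM = 10, 3
    weights = [[random.random() for _ in range(DIM)] for _ in range(UNITS)]

    def train(samples, epochs=20, lr=0.3, radius=2):
        for _ in range(epochs):
            for x in samples:
                # best-matching unit = closest weight vector
                bmu = min(range(UNITS),
                          key=lambda u: sum((weights[u][d] - x[d]) ** 2
                                            for d in range(DIM)))
                # pull the BMU and its map neighbors toward the sample
                for u in range(max(0, bmu - radius), min(UNITS, bmu + radius + 1)):
                    influence = 1.0 / (1 + abs(u - bmu))
                    for d in range(DIM):
                        weights[u][d] += lr * influence * (x[d] - weights[u][d])

    train([[random.random() for _ in range(DIM)] for _ in range(30)])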

Re: [agi] Poll

2007-10-19 Thread Vladimir Nesov
Edward, Does your estimate consider only the amount of information required for *representation*, or does it also include the additional processing elements required in a neural setting to implement learning? I'm not sure 10^9 is far off, because much more can be required for domain-independent association/corr

Re: [agi] More public awareness that AGI is coming fast

2007-10-19 Thread David Orban
Ben wrote: > Having said that, I would still prefer to avoid the VC route for Novamente. Another route that Novamente is apparently exploring is that of open-source development, with OpenCog. It will be very interesting to see how it pans out, what level of interest and involvement from the larg

RE: [agi] More public awareness that AGI is coming fast

2007-10-19 Thread John G. Rose
> From: J. Andrew Rogers [mailto:[EMAIL PROTECTED] > Subject: Re: [agi] More public awareness that AGI is coming fast > Why does an AGI deliverable require more than 3-4 years? You better have a good answer for that, or no one will fund you. Most people *don't* have a good answer for that