Just to reiterate my opinion: I think all this theorizing about definitions
and tests and so forth is interesting, but not necessary at all for the
creation of AGI.  Any more than, say, the philosophy of quantum mechanics
is necessary for building lasers or semiconductors or SQUIDs.

Of course philosophical and conceptual work can be instrumental in guiding
practical work ... but my strong feeling is that we can progress straight to
powerful AGI right now without first having to do anything like

-- formulate a useful, rigorous definition of intelligence
-- devise a pragmatic IQ test for AGIs

etc.

-- Ben G

On 10/19/07, William Pearson <[EMAIL PROTECTED]> wrote:
>
> On 19/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote:
> > > From: William Pearson [mailto:[EMAIL PROTECTED]
> > > Subject: Re: [agi] An AGI Test/Prize
> > >
> > > I do not think such things are possible. Any problem that we know
> > > about and can define can be solved with a giant lookup table (GLUT)
> > > or, more realistically, computed by a non-learning Turing machine
> > > (TM). Unless you are of the opinion that learning is unnecessary for
> > > intelligence, in which case what you want may be possible.
> > >
> > > Any appearance of learning can also be faked by a GLUT or a
> > > non-learning TM, simply by using time as an extra input. If you want
> > > to rigorously define intelligence, you will need to look at how the
> > > internals change and base the definition on that. My current
> > > thinking is based on which search spaces the system moves through
> > > while trying to map input to output, and how it makes use of
> > > information from the outside to change what it does.
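> > >
> > > To make the GLUT point concrete, here is a toy Python sketch (the
> > > function names, the two-symbol alphabet, and the time horizon are
> > > all illustrative, not taken from any real system): a fixed table
> > > keyed on (time, input) reproduces an adaptive learner's behaviour
> > > even though nothing inside it ever changes.
> > >
> > >     # Stand-in for a genuinely adaptive system: before time step 3
> > >     # it guesses, afterwards it "has learned" to echo its stimulus.
> > >     def learner_response(t, stimulus):
> > >         return stimulus if t >= 3 else "?"
> > >
> > >     # Precompute every (time, input) pair up to a finite horizon:
> > >     # this dictionary is the giant lookup table.
> > >     HORIZON = 10
> > >     STIMULI = ["a", "b"]
> > >     glut = {(t, s): learner_response(t, s)
> > >             for t in range(HORIZON) for s in STIMULI}
> > >
> > >     # Pure lookup with no internal change; within the horizon its
> > >     # behaviour over time is indistinguishable from the learner's.
> > >     def glut_agent(t, stimulus):
> > >         return glut[(t, stimulus)]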
> > >
> >
> > Whether or not learning is necessary for intelligence would depend
> > on the exact definition of intelligence. A minimally intelligent
> > engine would still contain internal information.
>
> We may be misinterpreting each other. What I mean by learning being
> necessary for intelligence is that a system that cannot learn is not
> intelligent, unless you posit some omnipotent, omniscient entity. I do
> not mean that a system must learn before it becomes intelligent.
>
> > What is the minimal internal state it would need to start with, if
> > any? Is the system, before any input, intelligent?
>
> I'm not sure what you are getting at here. I am tempted to answer
> with, "Can a plane, before it has left the ground, fly?"
>
> > There could be a very simple mathematical definition of
> > intelligence.
> >
> This is also a bit opaque to me. Are you talking about a definition
> based on the ability to solve problems, or a mathematical definition of
> the internal structure/dynamics? One I think is possible; the other...
> not so much.
>
> Will Pearson
