On Jan 12, 2008 3:04 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Every time a dispute erupts about what the real definition of
> "intelligence" is, all we really get is noise, because nobody is clear
> about the role that the definition is supposed to play.

Richard,

I fully understand how annoying this kind of debate is, but since my
AGI-08 paper
(http://nars.wang.googlepages.com/wang.AI_Definitions.pdf) happens to
be right on this topic, I have to object to your strong conclusion
above, by saying that at least I have tried to be "clear about the
role that the definition is supposed to play". ;-)

I know your target is probably not me (unlike Shane, I have never
believed there is a "real definition"), but dismissing this kind of
discussion entirely is not a good idea, as I argued in the paper. I
won't repeat the paper's content here, but I welcome detailed
criticism of it, either before or at AGI-08.

Pei

> If the role is to distinguish Narrow AI from AGI, Ben's definition is
> fine.  If the role is to define a class of (arbitrary) systems, any
> definition whatsoever is fine so long as there is no circularity in it
> (although the result will not necessarily have any relationship to the
> commonsense meaning of "intelligence").  If the role is to act as a
> loose organizing principle for a field of inquiry, it needs to have some
> power to act as an organizing principle.
>
> With this in mind, Shane Legg's paper is not "the canonical reference",
> it is a trivial reference, being nothing more than a naive list of
> definitions collected from elsewhere, with only the shallowest
> understanding of their context, relationships or roles.
>
>
>
> Richard Loosemore
>
> "At the University every great treatise is postponed until its author
> attains impartial judgment and perfect knowledge. If a horse could wait
> as long for its shoes and would pay for them in advance, our blacksmiths
> would all be college dons."
>    - George Bernard Shaw:  Maxims for Revolutionists (Man and Superman)
>
> Benjamin Goertzel wrote:
> > On definitions of intelligence, the canonical reference is
> >
> > http://www.vetta.org/shane/intelligence.html
> >
> > which lists 71 definitions.  Apologies if someone already pointed out
> > Shane's page in this thread, I didn't read every message carefully.
> >
> >> An AGI definition of intelligence surely has, by definition! - to be
> >> "general" rather than "complex" and emphasize "general
> >> problem-solving/learning". That seems to be what you actually mean.
> >
> > Mike:
> > Obviously, my "achieving complex goals in complex environments"
> > definition is intended to include "generality".  It could be rephrased as
> > "effectively achieving a wide variety of complex goals in various
> > complex environments", with the "general" implicit in the "wide."
> >
> > I also gave a math version of the definition in 1993, which is
> > totally unambiguous due to being math rather than words.  I have
> > not bothered to work out the precise relations between my older math
> > definition and Shane Legg and Marcus Hutter's more recent math
> > definition of intelligence.  They are not identical but have a similar
> > spirit.
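> > (For reference, Legg and Hutter's measure, in its commonly cited
> > form, defines an agent's universal intelligence as its
> > complexity-weighted expected performance across all computable
> > environments:
> >
> >     Upsilon(pi) = SUM_{mu in E} 2^(-K(mu)) * V_mu^pi
> >
> > where E is the set of computable reward-bounded environments, K(mu)
> > is the Kolmogorov complexity of environment mu, and V_mu^pi is the
> > expected total reward that agent pi obtains in mu. Simpler
> > environments thus dominate the weighting.)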
> >
> >> "Intelligence has many dimensions. A crucial dimension of a true
> >> intelligence* is that it is general. It is a general problem-solver and
> >> general learner, able to solve, and learn how to solve,  problems in many,
> >> and potentially infinite, domains - *without* being specially preprogrammed
> >> for any one of them.  All computers to date have been specialists. The goal
> >> of Artificial General Intelligence is to create the first generalist."
> >>
> >
> > The problem with your above "definition" is that it uses terms that are
> > themselves so extremely poorly-defined ;-)
> >
> > Arguably it rules out the brain, which is heavily preprogrammed by
> > evolution in order to be good at certain things like vision, arm and
> > hand movement, social interaction, language parsing, etc.
> >
> > And it does not rule out AIXItl type programs which achieve flexibility
> > trivially, at the cost of utilizing unacceptably much computational
> > resources...
> >
> > The reality is that achieving general intelligence given finite resources
> > is probably always going to involve a combination of in-built
> > biases and general learning ability.
> >
> > And where the line is drawn between "in-built biases" and
> > "preprogramming" is something that current comp/cog-sci does
> > not allow us to formally articulate in a really useful way.
> > This is a subtle issue, as e.g.
> > a program for carrying out a specific task, coupled with a general-
> > purpose learner of the right level of capability, may in effect
> > serve as a broader inductive bias helping with a wider variety
> > of tasks.
> >
> > -- Ben
> >
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=85303452-97c25c
