I am suggesting that there are two main types of intelligence - and humans have both. "Simulating the human mind" isn't a definition of either of those types, or of intelligence, period.

The two main types of intelligence have long been given names by mainstream psychology - "convergent" or "crystallised" vs. "divergent" or "fluid" intelligence. And these two types seem to me to map more or less directly onto the distinction between AI and AGI. There is a very long tradition here, and the parallel seems obvious.

But neither of these types has yet been given a proper, adequate definition by psychology, and nor indeed has "intelligence" generally. That, I am suggesting, is the task.

For the philosophy of AI - and this IS a discussion of philosophy - to ignore psychology and human intelligence, and the very extensive work already done there, including on creativity, doesn't seem very wise, given that AI/AGI still hasn't got to square one in the attempt either to emulate or to satisfactorily define human-level "fluid", "adaptive" intelligence.

(It is equally foolish of psychology to ignore the debates going on in AI.)


Mike,

If you take a look at my papers, you'll see that I distinguish not 2,
but 5 different types of goals currently associated with the label
"AI". Your first type, "to simulate the human mind", is also included.

Since a working definition is used to guide one's research, it doesn't
need to cover every other usage of the same word.

Pei

On 5/15/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
I too largely and strongly agree with what Pei says below.

But in all this discussion, it looks like one thing is being missed (please
correct me).

The task is to define TWO kinds of intelligence, not just one - you need
a dual, not just a single, definition of intelligence. Everyone seems to
be aiming for only one.

A dual definition must distinguish the more basic kind of problem that
AI deals with from the more sophisticated kind that only AGI and humans
can deal with. (That surely is the whole point here, no? You guys - along
with anyone who cares about these things - want to be able to define
what's special about AGI.)

A dual definition is also important because there are, correspondingly,
two kinds of human intelligence, and scientific psychology has focussed
almost exclusively on only one kind - in the form of IQ tests. Well, AI
can probably do those. But even if you're just looking at education, as
psychology tends to do, humans also use another, different kind of
intelligence - exemplified in the ability to write essays (like an essay
on the causes of the French Revolution). It will take an AGI to do those.

Clearly distinguishing the two kinds of intelligence is tricky - the more
basic AI, for example, can, strictly, be "adaptive", if in a superficial
way, even if only the more sophisticated AGI is truly adaptive. How,
though, to make the distinction?

Strictly, also, the first, basic kind of intelligence IS general in
humans, even if it isn't in AI. A human being, having learned to solve
one basic kind of problem - e.g. spelling problems and anagrams in one
language - can and will learn to solve very different basic kinds of
problem, like spelling and word games in a second and third language, or
basic calculations in mathematics. So what truly distinguishes AGI from
AI is NOT its generality! (I hadn't thought of that till now.)

Pei's definition of intelligence illustrates another of the many
difficulties of making a sharp distinction. "The ability to adapt and
work with insufficient knowledge and resources" seems to me a definition
of the second, AGI kind of intelligence. And it made a real contribution
in highlighting the importance for AGI of having to work with
"insufficient knowledge and resources". But, to be strict and pedantic,
AI can also work with insufficient knowledge. It too can be initially
uncertain about how to deal with a problem, as long as it knows how to
deal with that uncertainty (if it knows, say, that it need only consult
so-and-so or some external source of knowledge). To be precise, what
characterises the second, AGI kind of intelligence is "metacognitive
uncertainty" - being uncertain about how to deal with uncertainty, or
having insufficient knowledge about how to deal with insufficient
knowledge.
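
(A toy illustration of that difference, in Python - entirely my own
sketch, with invented names, not anyone's actual system: the narrow
solver is uncertain about answers but certain about how to remove that
uncertainty, while the general solver must also search over ways of
dealing with the uncertainty itself.)

class NarrowSolver:
    """Conventional AI: the answer may be unknown, but the procedure
    for removing that uncertainty is fixed in advance."""
    def __init__(self, knowledge, consult):
        self.knowledge = knowledge   # dict: problem -> answer
        self.consult = consult       # the one known fallback source

    def solve(self, problem):
        if problem in self.knowledge:
            return self.knowledge[problem]
        return self.consult(problem)  # first-order uncertainty handled

class GeneralSolver(NarrowSolver):
    """The AGI case: when the fixed fallback fails, it is uncertain
    about how to deal with its uncertainty, and must try new strategies."""
    def __init__(self, knowledge, consult, strategies):
        super().__init__(knowledge, consult)
        self.strategies = strategies  # candidate problem-solving methods

    def solve(self, problem):
        answer = super().solve(problem)
        if answer is not None:
            return answer
        # Metacognitive uncertainty: no pre-given way to proceed,
        # so search over possible ways of approaching the problem.
        for strategy in self.strategies:
            answer = strategy(problem)
            if answer is not None:
                return answer
        return None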

And just to complicate matters even more, the second kind of intelligence
needs to be FURTHER divided. Clearly AGI involves some kind of truly
"adaptive intelligence" (although, as I've just said, we need to be more
precise than that) - but that in turn has two subdivisions. First there
is the more ordinary kind, which can find a way, say, to pack a suitcase
with a whole set of items that won't at first fit; and then there is the
more extraordinary kind, which we call true creativity, which can, like
Archimedes, discover a way to measure the volume of an irregular solid
that no one had ever thought of before and that is radically surprising.
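
(The suitcase case is, in effect, the classic bin-packing problem, and
the "ordinary" adaptive kind can be roughly captured by a stock
heuristic - here a first-fit-decreasing sketch in Python, purely
illustrative. What no such routine can do is the Archimedes step:
re-representing the problem itself.)

def pack_cases(item_sizes, capacity):
    """First-fit decreasing: a standard bin-packing heuristic.
    Returns a list of cases, each a list of packed item sizes."""
    cases = []  # each entry: [remaining_capacity, [items...]]
    for size in sorted(item_sizes, reverse=True):
        for case in cases:
            if case[0] >= size:           # fits in an existing case
                case[0] -= size
                case[1].append(size)
                break
        else:                             # no case fits: open a new one
            cases.append([capacity - size, [size]])
    return [items for _, items in cases]

# e.g. pack_cases([4, 8, 1, 4, 2, 1], capacity=10) -> [[8, 2], [4, 4, 1, 1]]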

A truly comprehensive definition of intelligence must include creativity
and define it too. (Creativity does seem to have been left out of all
these definitions, no?) But it should also show that, in the very final
analysis, there is no distinction in terms of intellectual capacities -
i.e. if you have the ordinary adaptive intelligence to pack that
suitcase, then you also have the capacity to solve any extraordinary,
creative problem.

So ... suggestions?



----- Original Message -----
From: "Pei Wang" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Tuesday, May 15, 2007 4:02 PM
Subject: [agi] definitions of intelligence, again?!


> In late April I was too busy to join the thread "Circular definitions
> of intelligence". However, since some of you know that I proposed my
> working definition of intelligence before, I have no choice but to
> take Richard's challenge. ;-)
>
> Before addressing Richard's (very good) points, let me summarize my
> opinions presented in
> http://www.cogsci.indiana.edu/pub/wang.intelligence.ps and
> http://nars.wang.googlepages.com/wang.AI_Definitions.pdf , especially
> for the people who don't want to read papers.
>
> First, I know that many people think this discussion is a waste of
> time. I agree that spending all the time arguing about definitions
> won't get AGI anywhere, but the other extreme is equally bad. The
> recent discussions on this mailing list make me think that it is still
> necessary to spend some time on this issue, since the definition of
> intelligence one accepts directly determines one's research goal and
> criteria for evaluating other people's work. Nobody can do or even talk
> about AI or AGI without an idea of what it means.
>
> Though at the current time we cannot expect a perfect definition (we
> don't know that much yet), that doesn't mean any definition or vague
> notion is equally good. A good definition should be (1) clear, (2)
> simple, (3) instructive, and (4) close to the common usage of the term
> in everyday language. Since these requirements often conflict with each
> other, our choice must be based on a balance among them, rather than
> on a single factor.
>
> Unlike in many other fields, where the definition of the field doesn't
> matter too much, in AI it is the root of many other problems, since
> the only widely accepted example of intelligence, human intelligence,
> can be specified and duplicated in several aspects or perspectives,
> and each of them leads the research in a different direction. Though
> all these directions are fruitful, they produce very different fruits,
> and cannot encompass one another (though partial overlaps exist).
>
> Based on the above general considerations, I define "intelligence" as
> "the ability to adapt and work with insufficient knowledge and
> resources", which requires the system to depend on finite
> computational capacity, to be open to novel observations and tasks, to
> respond in real time, and to learn from experience.
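>
> (A purely illustrative toy - not NARS, just a sketch in Python, with
> every name invented for the example, of what those four requirements
> might look like operating together in one loop:)
>
> import time
>
> def agent_loop(tasks, rules, budget_per_task=0.01):
>     """Toy agent loop: finite resources, openness to novel tasks,
>     real-time answers, and learning from experience."""
>     for task in tasks:                    # open-ended task stream
>         deadline = time.time() + budget_per_task  # finite budget
>         answer, used = None, []
>         for rule in rules:
>             if time.time() > deadline:    # insufficient resources:
>                 break                     # may be cut off mid-search
>             result = rule(task)
>             if result is not None:
>                 answer = result           # best answer found so far
>                 used.append(rule)
>         yield task, answer                # respond in real time
>         # crude "learning from experience": promote rules that worked
>         rules.sort(key=lambda r: r in used, reverse=True)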
>
> NARS is designed and implemented according to this working definition
> of intelligence.
>
> In the following I'll comment on Richard's opinions.
>
> On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> I spent a good deal of effort, yesterday, trying to get you to "define
>> intelligence in an abstract way that is not closely coupled to human
>> intelligence" and yet, in the end, the only thing you could produce >> was
>> a definition that either:
>>
>> a) Contained a term that had to be interpreted by an intelligence - so
>> this was not an objective definition, it was circular,
>
> Though "circular definition" should be rejected in general, this
> notion cannot be interpreted too widely. I'll say that defining
> "intelligence" by "mind", "cognition", "thinking", or "consciences"
> doesn't contribute much, but I don't mind people to use concepts like
> "goal" in their definitions (though I don't do that for other
> reasons), because "goal" is a much simpler and more clear concept than
> "intelligence", though like all human concepts, it has its own
> fuzziness and vagueness.
>
> Richard is right when he says that intelligence is required to
> recognize goals, but in that sense all human concepts are created by
> human intelligence, rather than obtained from the objective world.
> Under that consideration, all meaningful definitions of intelligence
> will be judged "circular". Even so, to define "intelligence" using
> "goal" is much less circular than using "intelligence" itself.
>
> Again, for our current question, no answer is perfect, but that doesn't
> mean all answers are equally bad (or equally good).
>
>> b) Was a definition of such broad scope that it did not even slightly
>> coincide with the commonsense usage of the word "intelligent" ... for
>> example, it allowed an algorithm that optimized ANYTHING WHATSOEVER to
>> have the word 'intelligent' attached to it,
>
> Agreed. If all computers are already intelligent, then we should
> just continue to go with computer science, since the new label "AI"
> contributes nothing.
>
> According to my definition, a thermostat is not intelligent, and nor
> is an algorithm that provides "optimum" solutions by going through all
> the possibilities and picking the best.
>
> To me, whether a system is intelligent is not determined by what
> practical problems it can solve at a given moment, but by how it
> solves problems --- by design or via learning. Among learning systems,
> to me the most important thing is not how complex the results are, but
> how realistic the situation is. For example, to me, a system assuming
> sufficient resources is not intelligent, no matter how great the
> result is.
>
> I don't think intelligence should be measured by problem-solving
> capabilities. For example, Windows XP is much more capable than
> Windows 3.1, though I don't think it is more intelligent --- to me,
> both of them have little intelligence. Yes, intelligence is a matter
> of degree, but that doesn't mean that every system has a non-zero
> degree on this scale.
>
> BTW, I think it is too early to talk about numerical measurement of
> intelligence, though we can use the term qualitatively and
> comparatively.
>
>> c) Was couched in terms of a pure mathematical formalism (Hutter's),
>> about which I cannot even *say* whether it coincides with the
>> commonsense usage of the word "intelligent" because there is simply no
>> basis for comparing this definition with anything in the real world --
>> as meaningless as defining a unicorn in terms of measure theory!
>
> I think two issues are mixed here.
>
> To criticize the formality of Hutter's work is not fair, because he
> makes its relation to computer systems quite clear. It is true that
> his definition doesn't fully match the commonsense usage of the word,
> but no clear definition will --- we need a definition exactly because
> the commonsense usage of the word is too messy to guide our research.
>
> To criticize his assumption as "too far away from reality" is a
> different matter, which is also why I don't agree with Hutter and
> Legg. Formal systems can be built on different assumptions, some of
> which are closer to reality than others. For example, it is
> possible to build a formal model with the assumption of infinite
> resources, and another one with the assumption of finite resources. We
> cannot say that they are equally unrealistic just because they are
> both formal.
>
>> In all other areas of science, a formal scientific definition often does
>> extend the original (commonsense) meaning of a term - you cite the
>> example of gravity, which originally only meant something that happened
>> on the Earth. But one thing that a formal scientific definition NEVER
>> does is to make a mockery of the original commonsense definition.
>
> Again, it is a balance. I believe my definition captures the essence of
> intelligence at a deep level, though I acknowledge its difference on
> the surface from the CURRENT commonsense usage of the word ---
> the commonsense usage of words does evolve with the progress of science.
>
>> I am eagerly awaiting any definition from you that does not fall into
>> one of these traps. Instead, it seems to me, you give only assertions
>> that such a definition exists, without actually showing it.
>>
>> *********
>>
>> Unless you or someone else comes up with a definition that does not fall
>> into one of these traps, I am not going to waste any more time arguing
>> the point.
>>
>> Consider that, folks, to be a challenge:  to those who think there is
>> such a definition, I await your reply.
>>
>> Richard Loosemore
>
> So I've tried. I won't challenge people to find imperfections in my
> definition (I know there are many), but I do want to challenge people
> to propose better ones. I believe this is how this field can move
> forward --- not only by finding problems in existing ideas, but also by
> suggesting better ones.
>
> Pei
>

