Yes, all intelligent systems (i.e. living creatures) have a psychoeconomy of goals - an important point.

But in solving any particular problem, they may be dealing with only one or two goals at a time.

Have the measures of intelligence mentioned so far included:

1) the depth of the problem - the number of obstacles/forked paths that have to be negotiated on the way to the goal, and/or the number of things that have to be combined to satisfy the goal (a toy sketch of this measure follows the list)

2) the profundity of the problem - if it's a problem involving generalisation, then the classes being handled may involve ever larger trees of members. (Philosophy is difficult because, while the ideas involved are often quite simple from the point of view of 1), they usually have an immense scope of reference - "free will," for example, refers to millions of different kinds of decisions.)

3) the grasp of the problem - the complexity of the model needed to deal with the problem - like the map needed to travel through a territory
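
To make 1) a little more concrete, here is a minimal Python sketch - my own toy illustration, with the graph, node names and scoring rule all invented for the example. It treats a problem's depth as the length of the shortest solution path together with the number of forked choice points that path forces the solver to negotiate:

from collections import deque

# Toy state space: each node maps to the states reachable from it.
# Nodes with more than one successor are the "forks" a solver
# must negotiate on the way to the goal.
GRAPH = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": ["goal"],
    "d": ["goal"],
    "goal": [],
}

def problem_depth(graph, start, goal):
    """Return (steps, forks) along the shortest path from start
    to goal, or None if the goal is unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            # Count the forked choice points actually on the path.
            forks = sum(1 for n in path[:-1] if len(graph[n]) > 1)
            return len(path) - 1, forks
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(problem_depth(GRAPH, "start", "goal"))  # -> (3, 1): 3 steps, 1 fork

On this toy measure, a "deeper" problem is simply one whose shortest solution path is longer and forces more forks to be negotiated - which is all the first measure above claims.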


----- Original Message ----- From: "Kaj Sotala" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Friday, April 27, 2007 4:18 PM
Subject: Re: [agi] Circular definitions of intelligence


On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Can you point to an objective definition that is clear about which
things are more intelligent than others, and which does not accidentally
include things that manifestly conflict with the commonsense definition
(by false negatives or false positives)?

(Disclaimer: I do not claim to know the sort of maths that Ben and
Hutter and others have used in defining intelligence. I'm fully aware
that I'm dabbling in areas that I have little education in, and might
be making a complete fool of myself. Nonetheless...)

In thinking about human rationality, I've found it useful to consider
intelligence and goals as two separate things. Goals are what you want
to do; intelligence is how you achieve them. An intelligent system is
one that can be given a wide variety of goals, which it will then
figure out how to achieve.

Therefore, I suggest the following amendment of Ben's formulation of
intelligence:

The intelligence of a system is a function of the number of different
arbitrary goals (functions that the system maximizes as it changes
over time) it can carry out and the degree to which it can succeed in
those different goals (how much it manages to maximize the functions
in question) in different environments as compared to other systems.
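
Spelled out as a formula - a rough sketch of that amendment only, not Ben's or Hutter's actual maths, and with the weights assumed to exist rather than derived - it might read:

Intelligence(S) = \sum_{g \in G} \sum_{e \in E} w_g w_e V(S, g, e)

where G is the set of goal functions, E the set of environments, V(S, g, e) in [0, 1] measures how far S maximizes g when run in e, and the weights w_g and w_e keep the double sum finite, so that both breadth over many goals and degree of success contribute.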

This would exclude a thermostat from counting as an intelligent
system, since a thermostat only carries out one goal. Humans would be
classified as relatively intelligent, since they can be given a wide
variety of goals to achieve. It also has the benefit of assigning
narrow-AI systems a very low intelligence, which is what we want it to
do.



--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/




-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
