You guys are driving me nuts.

Jumping in at the middle, here goes:

"Intelligence is the capacity to solve problems.
(An intelligent agent solves problems in order to reach its goals)
Problems occur when an agent must select between two or more paths to reach its goals.

Cognitive problems (in the head) derive from physical problems in the real world.

[In addition:

There are 2 main kinds of intelligence, or problem-solving.

1) "Crystallised"/ convergent/ closed-ended problem-solving, where the agent already has a method of solution, and there is a right answer, (e.g. mathematical calculations, anagrams etc)

2) "Fluid"/ divergent/ open-ended problem-solving, where there is no definitive method of solution, or answer, only more or less effective solutions (e.g. how am I going to give my partner an orgasm tonight? how am I going to make money on the stockmarket?).

[In addition:

Matter is not intelligent. It has no capacity to solve problems or to negotiate forked paths. It is motile but not mobile (it can move, but it cannot "move around").

Living creatures are intelligent. They do have the capacity to solve problems and to negotiate forked paths, both cognitively and, being mobile, physically.]

Comment: It's important to take an embodied approach to defining intelligence, that is, to relate cognitive problem-solving to physical problem-solving.

----- Original Message ----- From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, April 26, 2007 6:20 PM
Subject: Re: [agi] Circular definitions of intelligence


Benjamin Goertzel wrote:


2) The definition clearly says at least something about how to measure
    this degree of intelligence (rather than just handwaving about the
    possibility that there might be different degrees), and



This is the shortcoming of the optimization-based approach to defining
intelligence, at present.

Let's say we define intelligence as "the ability to solve complex optimization
problems"

and let's say we define the intensity of a pattern P in an entity X as the
degree to which P compresses X,

and let's say we define the complexity of an objective function f as

"the total intensity of the set of patterns in the graph of f, subtracting off
for overlap"

(ignoring for compactness the subtleties of subtracting off for overlap
in this context)

Then, this is all dandy mathematically speaking, but how do we actually
calculate all the patterns in the graph of a complex real-world objective
function, let alone a whole bunch of such?

But this just falls straight into a glorified version of the same hidden-subjectivity trap that the behaviorists dug themselves into:

- When you try to cash out that compression function, I claim, you will end up in a situation where the system's real-world behavior depends on exactly which 'patterns' it chooses to go hunting for, and how it deploys them. The devil is in the details that you do not specify here, so any decision about whether this formalism really is coextensive with commonsense intelligence is pure speculation.

- Now, you can certainly get around the criticism I just leveled by taking the Hutter route and postulating systems that have infinite amounts of time and resources to decide what to do: but no scientific endeavor has ever adopted such an infinite-resources definition of intelligence. That is not a definition: it is a mathematical fantasy.

Note my previous clarification, BTW: I only care about attempts to build formal definitions. So maybe if that is not what you care about (see your comments below), we might not be disagreeing.


Richard Loosemore


Pragmatically, we can posit a particular pattern-recognition system S, and
talk about "all the patterns in the graph of f that S can recognize."

If we let S = a typical human, then I suggest that the above definition of
intelligence captures a lot of the commonsense human language notion
of "intelligence."

However, letting S=a typical human, is obviously not the only choice
one could make.

According to some other assumed pattern recognition system, different
judgments of intelligence might be made.
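To illustrate what an S-relative measurement could look like, here is a rough sketch of my own; the sampling grid, the two toy recognizers, and the additive scoring are all illustrative assumptions, and the subtraction for overlap is ignored.

import math
import zlib

def sample_graph(f, lo=-10.0, hi=10.0, n=200):
    # Sample the graph of f on a grid.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return xs, [f(x) for x in xs]

def zlib_intensity(xs, ys):
    # Intensity proxy: how well a generic compressor shrinks the sampled graph.
    raw = repr([round(y, 6) for y in ys]).encode()
    return max(0.0, 1.0 - len(zlib.compress(raw)) / len(raw))

def linear_intensity(xs, ys):
    # Intensity proxy: variance explained by a straight-line fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    ss_res = sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys) or 1e-12
    return max(0.0, 1.0 - ss_res / ss_tot)

def complexity_relative_to(f, S):
    # "Total intensity of the patterns in the graph of f that S can recognize",
    # with no correction for overlap.
    xs, ys = sample_graph(f)
    return sum(recognize(xs, ys) for recognize in S)

f = lambda x: math.sin(x) + 0.1 * x
print(complexity_relative_to(f, [zlib_intensity]))                    # compressor-only observer
print(complexity_relative_to(f, [zlib_intensity, linear_intensity]))  # observer that also sees lines

Swapping the recognizer set S changes the verdict, which is the observer-relativity being described here.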

I do not claim that this approach captures all aspects of the common
language notion of "intelligence" -- and I'm not that interested in trying to precisely formalize natural language concepts, either. That seems like
a dead-end pursuit, as someone already noted.

My suggestion is that this is the right sort of conceptualization of
"intelligence" to use for guiding AGI research.





