Shane,

> Would the following be possible with your notion of intelligence:
> There is a computer system that does a reasonable job of solving
> some optimization problem. We go along and keep on plugging
> more and more RAM and CPUs into the computer. At some point
> the algorithm sees that it has enough resources to always solve
> the problem perfectly through brute force search and thus drops
> its more efficient but less accurate search strategy.

No. To me that is not intelligence, even though it works better. In
this situation I would prefer the brute-force solution for the given
problem, though it has little to do with AI.
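(As a concrete illustration of the scenario above, here is a rough
sketch in which a solver drops its heuristic strategy once the
available resources make exhaustive search feasible. The toy
subset-sum problem, the greedy heuristic, and the resource threshold
are all invented for this example.)

    from itertools import combinations
    import random

    def heuristic_solve(items, target):
        # Greedy approximation: usually good, not guaranteed optimal.
        best, total = [], 0
        for x in sorted(items, reverse=True):
            if total + x <= target:
                best.append(x)
                total += x
        return best

    def brute_force_solve(items, target):
        # Exhaustive search: optimal, but cost grows as 2^len(items).
        best, best_total = [], 0
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                s = sum(combo)
                if best_total < s <= target:
                    best, best_total = list(combo), s
        return best

    def solve(items, target, available_memory_cells):
        # If resources suffice to enumerate every subset, drop the
        # heuristic and search exhaustively.
        if 2 ** len(items) <= available_memory_cells:
            return brute_force_solve(items, target)
        return heuristic_solve(items, target)

    items = [random.randint(1, 50) for _ in range(12)]
    print(solve(items, target=100, available_memory_cells=10 ** 6))

With enough resources, such a system's capability improves while the
principle behind it becomes simpler, which is the distinction
discussed below.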

Even though I agree with you that in everyday usage "intelligent"
often means "optimal", the word is not always used in the absolute
sense (that is, with respect to the problem only), but also in the
relative sense (that is, with respect to the problem as well as the
available knowledge and resources).

> As the system is now solving the optimization problem in a much
> simpler way (brute force search), according to your perspective it
> has actually become less intelligent?

Yes. That is why I stress the difference between "what the system can
do" (capability) and "how the system does it" (principle). The two are
often positively correlated (as in human beings, where capabilities
are mostly learned), but not always (as in computers, where
capabilities are either learned or preprogrammed).

That is also why I think the definition of intelligence in psychology
cannot be directly adopted in AI. For human beings, problem-solving
ability at a certain age can serve as an approximate indicator of the
learning ability of a system (a person), since we all start with
similar innate problem-solving ability. However, for computers,
problem-solving ability at a certain moment no longer indicates
learning ability. The two abilities are both valuable, but very
different. For practical purposes they may be mixed, but in theory
they should be clearly distinguished.

> > Though NARS has the potential to work in the environment you
> > specified, it is not designed to maximize a reward measurement given
> > by the environment.

> Of course. If I want a general test, I can't assume that the
> systems to be tested were designed with my test in mind.

Indeed, they don't need to have the test in mind, but how can you
justify the authority and fairness of the test results if many
systems are not built to achieve what you measure?

Pei
