On 3/5/07, John Ku <[EMAIL PROTECTED]> wrote:

On 3/4/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> Richard, I long ago proposed a working definition of intelligence as
> "Achieving complex goals in complex environments."  I then went through
> a bunch of trouble to precisely define all the component terms of that
> definition; you can consult the Appendix to my 2006 book "The Hidden
> Pattern"....


I'm not sure whether your "working definition" is meant to be significantly
less ambitious than a philosophical definition, or whether you address
something like this in your appendix, but I'm wondering whether the
hypothetical example of Blockhead from the philosophy of mind creates problems
for your definition. Imagine a computer with a huge memory bank specifying
what action to take given any input. With a big enough memory, it seems
it could be perfectly capable of "achieving complex goals in complex
environments." Yet in doing so there would be very little internal
processing: just the bare minimum needed to look up and execute the part of
its memory corresponding to its current inputs.
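
Purely for illustration (this is a sketch of my own, not anything from your
appendix or from Block's paper, and every name and table entry below is a
hypothetical placeholder), such a lookup-table agent would amount to
something like this in Python:

# Sketch of a Blockhead-style agent: all "behaviour" is a single table
# lookup keyed on the entire input history so far. A real Blockhead would
# need an astronomically large table enumerating every possible history.

class BlockheadAgent:
    def __init__(self, table):
        self.table = table      # maps tuples of inputs seen so far to actions
        self.history = ()

    def act(self, observation):
        self.history += (observation,)
        # No reasoning or generalisation: just retrieve the pre-stored
        # action for this exact history (or a default if none exists).
        return self.table.get(self.history, "no entry for this history")

# Hypothetical toy table covering a two-step exchange.
table = {
    ("hello",): "greet back",
    ("hello", "what is 2+2?"): "say '4'",
}
agent = BlockheadAgent(table)
print(agent.act("hello"))           # greet back
print(agent.act("what is 2+2?"))    # say '4'

The point is just that everything interesting lives in the pre-computed
table rather than in any online processing.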

I think any intuitive notion of intelligence would not count such a
computer as being intelligent to any significant degree, no matter how large
its memory bank is or how complex and diverse an environment its memory
allows it to navigate. There is simply too little internal processing going
on for it to count as much more intelligent than an ordinary database
application, though it might, of course, do a pretty good job of fooling us
into thinking it is intelligent if we don't know the details.

I think this example actually poses a problem for any purely behavioristic
definition of intelligence. To fit our ordinary notion of intelligence, I
think there would have to be at least some criterion concerning how the
internal processing behind the behavior is done.

I think the Blockhead example is normally presented in terms of looking up
information from a huge memory bank, but as I think about it while typing
this up, I wonder whether it could also be run, with similar conclusions, for
a simple brute-force search algorithm. If instead of a huge memory bank it
had enormous processing power and speed, so that it could simply explore
every single chain of possibilities for the one that leads to some specified
goal, I'm not sure that would really count as intelligent to any significant
degree either.
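
Again only as a rough sketch of my own (the toy state space, successor
function and goal below are hypothetical stand-ins, not Block's
formulation), the brute-force variant might look like:

from collections import deque

# Brute-force variant: instead of a stored table, exhaustively enumerate
# every chain of actions (breadth-first) until one reaches the goal.

def brute_force_plan(start, goal_test, successors, max_depth=10):
    frontier = deque([(start, [])])          # (state, actions taken so far)
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        if len(path) >= max_depth:
            continue
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))
    return None

# Toy example: reach 10 from 0 using the actions +1 and *2.
successors = lambda n: [("+1", n + 1), ("*2", n * 2)]
print(brute_force_plan(0, lambda n: n == 10, successors))

Nothing in either sketch does anything one would naturally call thinking;
that is the intuition the example trades on.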


You seem to be equating intelligence with consciousness. Ned Block also
seems to do this in his original paper. I would prefer to reserve
"intelligence" for third-person observable behaviour, which would make the
Blockhead intelligent, and "consciousness" for the internal state: it is
possible that the Blockhead is unconscious, or at least differently conscious
compared to a human.

Stathis Papaioannou
