>
> OK, this brings up something that I'd like to pose to the list as a whole.
> I realize this will be a somewhat antagonistic question - my intent here
> is not to offend (or to single anyone out), especially since I could be
> wrong.
>
> But my impression is that with some exceptions, AI researchers in general
> don't want to touch philosophy. And that astounds me, because of all the
> possible domains of engineering, AI research has to be the domain of the
> most philosophical consequence. Trying to build AI without doing
> philosophy, to me, is like trying to build a rocketship without doing
> math.


As was pointed out, the problem seems to stem from the comp-sci
computationalist approach.

Most people seem to take a suspicious view of philosophy.  I remember a
(regular IT) coworker laughing at my copy of "Being and Nothingness."
Think about it, though: few concepts could be more essential to
understanding reality than the idea that something IS or IS-NOT.  How many
AI systems make a distinction even that simple?  Yes, you could argue that
any binary system does, via a '1' or a '0', but where in a given system is
it represented that something "IS"?  In general, the answer is nowhere,
because it is assumed.  People are in such a hurry to deliver a product
that no consideration is given to representing explicitly that something
IS.
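To make the point concrete, here is a minimal sketch (entirely hypothetical, not from any existing system) of what it might look like for a knowledge base to treat existence as an explicit, queryable assertion rather than something implicit in whether a key happens to be present:

```python
# Hypothetical sketch: existence (IS / IS-NOT) as a first-class assertion.
# In most systems, "existence" is implicit -- an entity "exists" only in
# the sense that some record about it happens to be stored. Here, absence
# of an assertion is explicitly distinguished from an assertion of IS-NOT.

from enum import Enum


class Existence(Enum):
    IS = "is"            # explicitly asserted to exist
    IS_NOT = "is-not"    # explicitly asserted NOT to exist
    UNKNOWN = "unknown"  # no assertion either way (the usual implicit case)


class KnowledgeBase:
    def __init__(self):
        self._existence = {}

    def assert_is(self, entity):
        self._existence[entity] = Existence.IS

    def assert_is_not(self, entity):
        self._existence[entity] = Existence.IS_NOT

    def existence_of(self, entity):
        # Unlike a plain dictionary lookup, a missing entry does not
        # silently mean "does not exist" -- it means "never asserted".
        return self._existence.get(entity, Existence.UNKNOWN)


kb = KnowledgeBase()
kb.assert_is("Socrates")
kb.assert_is_not("Pegasus")

print(kb.existence_of("Socrates"))  # Existence.IS
print(kb.existence_of("Pegasus"))   # Existence.IS_NOT
print(kb.existence_of("Hamlet"))    # Existence.UNKNOWN
```

The names (`KnowledgeBase`, `assert_is`, etc.) are invented for illustration; the point is only the three-way distinction, which a bare '1'/'0' encoding collapses.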

Mike


>
> I believe there are a few reasons for this. One, philosophy is hard and
> very often boring. Two, there is a bias that philosophers who don't build
> things are somehow irrelevant. And three, subjecting your own ideas to
> the philosophical scrutiny of others is threatening. There's a kind of
> honor in testing your ideas by building them, so one can save some face
> in the event of failure (it was an unsuccessful experiment). But a
> philosophical rejection that demonstrates through careful logic the
> infeasibility of your design before you even build it - well, that just
> makes you feel stupid.
>
> I invite those of you who feel like this is unfair to correct my
> perceptions.
>
> Terren
>
> --- On Tue, 8/5/08, John G. Rose <[EMAIL PROTECTED]> wrote:
>> > Searle's Chinese Room argument is one of those things that makes me
>> > wonder if I'm living in the same (real or virtual) reality as everyone
>> > else. Everyone seems to take it very seriously, but to me, it seems
>> > like a transparently meaningless argument.
>> >
>>
>> I think that the Chinese Room argument is an anachronistic AI
>> philosophical meme that is embedded in the AI community and promulgated
>> by monotonous, drone-like repetition. Whenever I hear it I'm like, let
>> me go read up on that for the n'th time, and after reading I'm like,
>> WTF are they talking about!?!? Is that one of the grand philosophical
>> hang-ups in AI thinking?
>>
>> I wish I had a mega-meme expulsion cannon and could expunge that mental
>> knot of twisted AI arteriosclerosis.
>>
>> John
>>
>>
>>
>>
>> -------------------------------------------
>> agi
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>



