--- candice schuster <[EMAIL PROTECTED]> wrote:
> In all of my previous posts, most of them anyhow, I have mentioned
> consciousness. Today I found myself reading some of John Searle's theories;
> he poses exactly the same type of question... The reason computers can't do
> semantics is that semantics is about meaning; meaning derives from original
> intentionality, and original intentionality derives from feelings - qualia -
> and computers don't have any qualia. How does consciousness get added to the
> AI picture, Richard?
Searle and Roger Penrose don't believe that machines can duplicate what the human brain does. For example, Penrose believes that there are uncomputable quantum effects, or some other unknown physical processes, going on in the brain. Most other AI researchers believe that the brain works according to known physical principles and could therefore, in principle, be simulated by a computer. And computers can do semantics; for example, they can pass the word-analogy section that once appeared on the SAT exam.
http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47422.pdf

The difference between human and machine semantics is that machines generally associate words only with other words, while humans also associate words with nonverbal stimuli such as images or actions. But in principle there is no reason that machines with sensors and effectors could not do that too.

Qualia and consciousness are rooted not in semantics but in biology. By consciousness, I mean that which makes you different from a P-zombie.
http://en.wikipedia.org/wiki/Philosophical_zombie

There is no known test for consciousness. You cannot tell whether a machine or animal really feels pain or happiness, or only behaves as though it does. You could argue the same about humans, even yourself. But you believe that your own feelings are real, and that you have control over your thoughts and actions, because evolution favors animals that behave this way. You do not have the option to turn off pain or hunger; if you did, you would not pass on your DNA. It is no more possible for you to not believe in your own consciousness than it would be for you to memorize a list of a million numbers. That is just how your brain works. I believe this is why Searle and Penrose hold the positions they do. Before computers, their beliefs were universally held. Turing was very careful to separate the issue of consciousness from the possibility of AI.
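To illustrate the word-with-word kind of semantics described above, here is a minimal sketch of solving an analogy question (a : b :: c : ?) purely from word co-occurrence statistics. The toy corpus and window size are my own invented assumptions for the example; real analogy solvers such as the one in the linked NRC paper use far larger corpora and more sophisticated relational features.

```python
# Sketch: "semantics" from word-word associations alone.
# Build co-occurrence vectors from a tiny invented corpus, then answer
# an analogy a : b :: c : ? by finding the word closest to b - a + c.
from collections import Counter
from math import sqrt

corpus = (
    "the king rules the kingdom . the queen rules the kingdom . "
    "the man walks . the woman walks . the king is a man . "
    "the queen is a woman ."
).split()

WINDOW = 2  # context words counted on each side (an arbitrary choice)

def cooccurrence(tokens, window=WINDOW):
    """Map each word to a Counter of the words seen near it."""
    vecs = {}
    for i, w in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    num = sum(u[k] * v.get(k, 0) for k in u)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def analogy(a, b, c, vecs):
    """a : b :: c : ?  ->  word whose vector is nearest to b - a + c."""
    target = Counter(vecs[b])
    target.subtract(vecs[a])
    target.update(vecs[c])
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, vecs[w]))

vecs = cooccurrence(corpus)
print(analogy("king", "man", "queen", vecs))  # -> woman
```

On this toy corpus the vector offset picks out "woman" for king : man :: queen : ?, showing how far purely distributional association can go without any grounding in nonverbal stimuli.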
-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57737187-d7ae0a