"Hal Finney" wrote: > Part of what I wanted to get at in my thought experiment is the > bafflement and confusion an AI should feel when exposed to human ideas > about consciousness. Various people here have proffered their own > ideas, and we might assume that the AI would read these suggestions, > along with many other ideas that contradict the ones offered here. > It seems hard to escape the conclusion that the only logical response > is for the AI to figuratively throw up its hands and say that it is > impossible to know if it is conscious, because even humans cannot agree > on what consciousness is. > > In particular I don't think an AI could be expected to claim that it > knows that it is conscious, that consciousness is a deep and intrinsic > part of itself, that whatever else it might be mistaken about it could > not be mistaken about being conscious. I don't see any logical way it > could reach this conclusion by studying the corpus of writings on the > topic. If anyone disagrees, I'd like to hear how it could happen. > > And the corollary to this is that perhaps humans also cannot legitimately > make such claims, since logically their position is not so different > from that of the AI. In that case the seemingly axiomatic question of > whether we are conscious may after all be something that we could be > mistaken about. > > Hal