Benjamin Johnston wrote:

I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at:


Hi Richard,

I don't have any comments yet about what you have written, because I'm not sure I fully understand what you're trying to say... I hope your answers to these questions will help clarify things.

It seems to me that your core argument goes something like this:

That there are many concepts for which an introspective analysis can only return the concept itself.
That this recursion blocks any possible explanation.
That consciousness is one of these concepts because "self" is inherently recursive. Therefore, consciousness is explicitly blocked from having any kind of explanation.
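To check that I'm reading you correctly, here is a toy sketch of that recursion (purely my own illustration in Python, with a made-up ANALYSIS table; it is not meant to be anything from your paper): an introspective analyze() step that, for certain concepts, can only return the concept itself, so explain() never reaches anything more primitive for them.

    # Toy illustration only: a hypothetical table in which "red" decomposes
    # into more primitive notions but "self" analyzes to nothing but itself.
    ANALYSIS = {
        "experience-of-red": ["self", "red"],
        "red": ["signal line active", "wavelength ~650 nm"],
    }

    def analyze(concept):
        # Irreducible concepts come back unchanged.
        return ANALYSIS.get(concept, [concept])

    def explain(concept):
        parts = analyze(concept)
        if parts == [concept]:
            return concept  # analysis returns the concept itself: explanation stops here
        return {concept: [explain(p) for p in parts]}

    print(explain("experience-of-red"))
    # {'experience-of-red': ['self', {'red': ['signal line active', 'wavelength ~650 nm']}]}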

Is this correct? If not, how have I misinterpreted you?


This is pretty much accurate, but only up to the end of the first phase of the paper, where I asked the question: "Is explaining why we cannot explain something the same as explaining it?"

The next phase is crucial, because (as I explained a little more in my parallel reply to Ben) the conclusion of part 1 is really that the whole notion of 'explanation' is stretched to breaking point by the concept of consciousness.

So in the end what I do is argue that the whole concept of "explanation" (and "meaning", etc.) has to be replaced in order to deal with consciousness. Eventually I come to a rather strange-looking conclusion, which is that we are obliged to say that "consciousness" is a real thing like any other in the universe, but the exact content of it (the subjective core) is truly inexplicable.



I have a thought experiment that might help me understand your ideas:

If we have a robot designed according to your molecular model, and we then ask the robot "what exactly is the nature of red" or "what is it like to experience the subjective essence of red", the robot may analyze this concept, ultimately bottoming out on an "incoming signal line".

But what if this robot is intelligent and can study other robots? It might then examine other robots and see what actually happens when their analysis bottoms out on an "incoming signal line": the signal line is activated by electromagnetic energy of a certain frequency; the object recognition routines identify patterns in the signal lines; when an object is identified it gets annotated with texture and color information from its sensations; and a particular software module injects all that information into the foreground memory. It might conclude that "experiencing red", for the other robot, just is having its sensors inject atoms into foreground memory, and it could then explain how the current context of that robot's foreground memory interacts with those newly injected, changing sensations to make that experience 'meaningful' to the robot.
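Here is a rough sketch of that third-person story (again just my own toy Python, with made-up names such as ObservedRobot, Atom and foreground_memory; I am only guessing at how your molecular framework would carve this up):

    from dataclasses import dataclass, field

    @dataclass
    class Atom:
        label: str
        annotations: dict = field(default_factory=dict)

    @dataclass
    class ObservedRobot:
        foreground_memory: list = field(default_factory=list)

        def sense(self, wavelength_nm: float) -> str:
            # An "incoming signal line" is activated by electromagnetic
            # energy of a certain frequency; ~650 nm lands on the "red" line.
            return "signal-line-red" if 620 <= wavelength_nm <= 750 else "signal-line-other"

        def recognise(self, signal_line: str) -> Atom:
            # Object-recognition routines identify a pattern on the signal
            # lines and annotate the resulting atom with color information.
            atom = Atom(label="apple-shaped-object")
            atom.annotations["color"] = "red" if signal_line == "signal-line-red" else "unknown"
            return atom

        def inject(self, atom: Atom) -> None:
            # A dedicated module injects the annotated atom into foreground
            # memory, where it meets whatever context is already active there.
            self.foreground_memory.append(atom)

    robot = ObservedRobot()
    robot.inject(robot.recognise(robot.sense(wavelength_nm=650.0)))
    print(robot.foreground_memory)
    # [Atom(label='apple-shaped-object', annotations={'color': 'red'})]

That is the kind of third-person account the inspecting robot could give of the other robot.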

What if this robot then turns its inspection abilities onto itself? Can it therefore further analyze "red"? How does your theory interpret that situation?

-Ben

Ahh, but that *is* the way that my theory analyzes the situation, no? :-) What I mean is, I would use a human (me) in place of the first robot.

Bear in mind that we must first separate out the "hard" problem (the pure subjective experience of red) from any easy problems (mere radiation sensitivity, etc.). From the point of view of that first robot, what will she get from studying the second robot (or other robots in general), if the question she really wants to answer is "What is the explanation for *my* subjective experience of redness?"

She could talk all about the foreground and the way the analysis mechanism works in other robots (and humans), but the question is: what would that avail her if she wanted to answer the hard problem of where her subjective conscious experience comes from?

After reading the first part of my paper, she would say (I hope!): "Ah, now I see how all my questions about the subjective experience of things are actually caused by my analysis mechanism doing something weird".

But then (again, I hope) she would say: "Hmmmm, does it meta-explain my subjective experiences if I know why I cannot explain these experiences?"

And thence to part two of the paper....




Richard Loosemore



