Hey Bruno,

I have done some thinking and reformulated my thoughts about our ongoing
discussion.

To sum up my (intuitive) objection, I have struggled to understand how you
make the leap from the consciousness of abstract logical machines to human
consciousness. I now have an argument that I think formalizes this
intuition. 

First, I grant that computation at the neuron level is at least
universal, since neurons are capable of addition and multiplication,
and, as you say, those are the only operations a machine needs to be
able to perform to be considered universal. I can even see how neural
computation might be Löbian, with the induction operations implemented
in terms of synaptic strengths (as 'confidence' in the synaptic
connections that mediate particular 'beliefs'). Furthermore, I grant
that a kind of consciousness might be associated with Löbianity (and
perhaps even with universality).
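
(To fix the terminology: by Löbian I mean here, as I understand your
usage, a universal machine with enough induction to prove Löb's
formula, which in the Bp notation used below reads B(Bp -> p) -> Bp.
If you mean something stronger, correct me.)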

I will argue, however, that this is not the consciousness we as humans
experience, and that we cannot know, solely on the basis of abstract
logical machines, how to characterize human consciousness.

The critical point is that human psychology (which I will refer to
henceforth as 'psy') emerges from vast assemblages of neurons. When we
talk about emergence, we recognize that there is a higher-order level
with its own dynamics, dynamics that are completely independent of
(what I refer to as 'causally orthogonal' to) the dynamics of the
lower-order level. The Game of Life CA (cellular automaton) has very
specific dynamics at the cell level, and the dynamics that emerge at
the higher-order level cannot be predicted or explained in terms of
those lower-order dynamics. The higher-order level is the emergence of
a new 'ontology'.
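
To make the cell-level dynamics concrete, here is a minimal sketch of
the Game of Life update rule (Python with numpy; the grid size and the
glider pattern are only for illustration). Nothing in the rule mentions
gliders, yet the glider below translates itself one cell diagonally
every four steps - that 'motion' is a fact about the emergent level,
not the cell level.

    import numpy as np

    def life_step(grid):
        """One synchronous update of Conway's Game of Life on a 0/1 array."""
        # Count the live neighbours of every cell (toroidal wrap-around for simplicity).
        neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
        # A cell is alive next step if it has exactly 3 live neighbours,
        # or if it is alive now and has exactly 2.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    # A glider: a five-cell pattern whose identity and motion exist only
    # at the emergent level; the rule above knows nothing about it.
    grid = np.zeros((10, 10), dtype=int)
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r, c] = 1
    for _ in range(4):
        grid = life_step(grid)  # after 4 steps the glider has moved one cell diagonally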

The neural correlates of psy experiences can indeed be traced down to the
firings of (vast numbers of) individual neurons, in the same way that a
hurricane can be traced down to the interactions of (vast numbers of) water
and air molecules. But I'm saying the dynamics of human psychology will
never be understood in terms of the firings of neurons. Psy can be thought
of as 'neural weather'. A true understanding of psy may one day come
from understanding the dynamics of the structures that emerge from the
neuronal level, in the same way that weather forecasters understand the
weather in terms of high- and low-pressure systems, fronts, troughs,
jet streams, and so on.

To put this in more mathematical terms, propositions about psy are not
expressible in the 'machine language' of neurons. Propositions about
psy are instead intrinsic to the particular 'program' that the neural
machinery runs. It is a form of level confusion, in other words, to
identify the human consciousness correlated with those emergent
structures with the consciousness of the neural machinery itself.

What I think is most likely is that there are several levels of
psychological emergence, related to increasingly encompassing aspects
of experience. Each of these levels is uniquely structured and, in a
"form follows function" kind of way, each corresponds to a different
character of consciousness. Human consciousness is a sum over those
layers (including, perhaps, the base neuronal level).

Given that the only kind of consciousness we have any direct knowledge
of is human consciousness, we cannot say anything about the character
of the consciousness of abstract logical machines. To truly "explain"
consciousness, we're going to have to understand the dynamics that
emerge from (large) assemblages of neurons, and how psy phenomena
correlate with those dynamics.

A little more below...


Bruno Marchal wrote:
> 
>> If no, do you think it is important to explain how
>> biological machines like us do have access to our beliefs?
> 
> That is crucial indeed. But this is exactly what Gödel did solve. A  
> simple arithmetical prover has access to its belief, because the laws  
> of addition and multiplication can define the prover itself. That  
> definition (the "Bp") can be implicit or explicit, and, like a patient  
> in front of the description of the brain, the machine cannot recognize  
> itself in that description, yet the access is there, by virtue of its  
> built-in ability. The machine itself only identifies itself with the
> Bp & p, and so will never be able to acknowledge the identity
> between Bp and Bp & p. That identity belongs to G* minus G. The  
> machine will have to bet on it (to say "yes" to the doctor).
> 

This seems like an evasive answer because Gödel only proved this for the
logical machine. 

I am saying that we can assume comp but still not have access to the
propositions of a level that emerges from the computed substrate.


Bruno Marchal wrote:
> 
> For the qualia, I am using the classical theory of Theaetetus, and its  
> variants. So I define new logical operators: Bp & p, Bp & Dt, Bp &
> Dt & p. The qualia appear with Bp & p (but amazingly enough those
> qualia are communicable, at least between Löbian entities). 
> 

Doesn't their communicability (between Löbian entities) represent a
contradiction?  I'm not sure how you can call them qualia anymore.


Bruno Marchal wrote:
> 
> The hallucination's existence is counter-intuitive because it seems to
> imply that our consciousness is static, and that time is a
> complex product of the brain activity (or of the existence of some  
> number relation). I thought that consciousness needs the illusion of  
> time, but salvia makes possible an hallucination which is out of time.  
> How could we hallucinate that? I see only one solution, we are  
> conscious even before we build our notion of time.
> 

I don't see why this is counter-intuitive for you, Bruno, given that
(assuming comp) all experiences of time, as lived by infinities of
universal numbers, are happening in Platonia, which is by definition
timeless. The self-consciousness you attribute to Löbian machines does
not require time either, correct?

Thanks for your interesting write-ups of your salvia experiences...
definitely food for thought.

Best,
Terren