Hi Terren,
On 22 Jul 2011, at 20:51, terren wrote:
> I have done some thinking and reformulated my thoughts about our ongoing discussion.
>
> To sum up my (intuitive) objection, I have struggled to understand how you make the leap from the consciousness of abstract logical machines to human consciousness.
Well, this should follow (intuitively) from the UDA. Humans are abstract beings themselves.
> I now have an argument that I think formalizes this intuition.
>
> First, I grant that the computation at the neuron level is at least universal, since neurons are capable of addition and multiplication, and as you say, these are the only operations a machine is required to be able to perform to be considered universal. I could even see how neural computation may be Löbian, where the induction operations are implemented in terms of synaptic strengths (as 'confidence' in the synaptic connections that mediate particular 'beliefs'). Furthermore, I grant that a kind of consciousness might be associated with Löbianity (and perhaps even universality).
>
> I will argue however that that is not the consciousness we as humans experience, and we cannot know - solely on the basis of abstract logical machines - how to characterize human consciousness.
I agree with this. No machine can know its level of substitution. Löbian consciousness is to human consciousness like the Escherichia coli genome is to the human genome. Humans and mammals are *much* more complex.
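For reference, a machine is Löbian when, like Peano Arithmetic, it proves Löb's formula for every proposition p, with B its own provability predicate:

```latex
B(Bp \to p) \to Bp
```

Peano Arithmetic is the standard example of a Löbian machine; whether neural computation satisfies this is exactly what Terren is granting for the sake of argument.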
> The critical point is that human psychology (which I will refer to henceforth as 'psy') emerges from vast assemblages of neurons.
But vast assemblages of neurons are still Turing emulable, and that is what counts in the reasoning.
> When we talk about emergence, we recognize that there is a higher-order level that has its own dynamics, which are completely independent of (what I refer to as 'causally orthogonal' to) the dynamics of the lower-order level.
Yes. Bp is already at a higher level than numbers and + and *. There are many levels. The logic does not depend on the level, but on the correct choice of *some* level.
> The Game of Life CA (cellular automaton) has very specific dynamics at the cell level, and the dynamics that emerge at the higher-order level cannot be predicted or explained in terms of those lower-order dynamics. The higher order is an emergence of a new 'ontology'.
> The neural correlates of psy experiences can indeed be traced down to the firings of (vast numbers of) individual neurons, in the same way that a hurricane can be traced down to the interactions of (vast numbers of) water and air molecules. But I'm saying the dynamics of human psychology will never be understood in terms of the firings of neurons.
That's comp! You are completely right. Note that this is already true for the chess-playing machine DEEP BLUE. It makes no sense to explain its high-level strategy, heuristics and program in terms of NAND-gate behavior.
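Terren's Game of Life example is easy to make concrete. Below is a minimal sketch (the function and grid names are mine, purely illustrative): the update rule speaks only of a single cell and its eight neighbours, while the 'blinker' is an object that exists only at the emergent level of description.

```python
# One synchronous step of Conway's Game of Life on a small toroidal
# grid. The rule mentions only a single cell and its 8 neighbours:
# a dead cell with exactly 3 live neighbours is born; a live cell
# with 2 or 3 live neighbours survives; everything else dies.
def step(grid):
    n = len(grid)

    def live_neighbours(r, c):
        return sum(grid[(r + dr) % n][(c + dc) % n]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if live_neighbours(r, c) == 3
             or (grid[r][c] and live_neighbours(r, c) == 2) else 0
             for c in range(n)] for r in range(n)]

# At the cell level: three live cells. At the emergent level: a
# period-2 oscillator ("blinker") that flips between vertical and
# horizontal, a regularity stated nowhere in the rule above.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
```

Nothing in `step` mentions oscillators or periods; those belong to the higher-order 'ontology' Terren describes.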
> Psy can be thought of as 'neural weather'.
Yes. Or much above. Psy is not anything capable of being entirely described by 3-things in general, given that it refers to personal points of view, just as Bp & p is not describable in the whole of arithmetic.
> True understanding of psy may one day be enabled by an understanding of the dynamics of the structures that emerge from the neuronal level, in the same way that weather forecasters understand the weather in terms of the dynamics of low/high-pressure systems, fronts, troughs, jet streams, and so on.
That is what psychologists try to do. They are 100% right in their criticism of neuronal reductionism.
> To put this in more mathematical terms, propositions about psy are not expressible in the 'machine language' of neurons.
Nor are any of the arithmetical hypostases, except for Bp and Bp & Dt. Those are exceptional, and no machine can recognize itself in those views. That is why the 1-I (Bp & p) has to make a risky bet when saying "yes" to the doctor. The machine will bet on some level at which Bp is equivalent to Bp & p. That bet is probably counter-intuitive for the machine.
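For reference, here is a compact restatement (mine, so take it as a rough gloss) of the hypostases in play, with B arithmetical provability, Dp an abbreviation for ¬B¬p, and t any tautology (so Dt asserts consistency):

```latex
\begin{array}{ll}
p                     & \text{truth} \\
Bp                    & \text{belief / provability} \\
Bp \land p            & \text{knowledge (the Theaetetus move)} \\
Bp \land Dt           & \text{observation} \\
Bp \land Dt \land p   & \text{sensation (qualia)} \\
\end{array}
```

Only Bp and Bp & Dt are arithmetically expressible; the variants containing a bare p involve truth, which by Tarski's theorem is not definable in arithmetic. That is the exceptional status referred to above.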
> Propositions about 'psy' are in fact intrinsic to the particular 'program' that the neural machinery runs. It is a form of level confusion, in other words, to attribute the human consciousness that is correlated with emergent structures to the consciousness of neural machinery.
The neural machinery is not conscious, and if it is, such
consciousness might have nothing to do with "my consciousness".
> What I think is most likely is that there are several levels of psychological emergence, related to increasingly encompassing aspects of experience. Each of these levels is uniquely structured, and in a "form follows function" kind of way, each corresponds with a different character of consciousness. Human consciousness is a sum over each of those layers (including perhaps the base neuronal level).
That might be true.
> Given that the only kind of consciousness we have any direct knowledge of is human consciousness,
Why human? That is your choice. You could have said mammal, animal, earth creature, Milky Wayan, or Löbian machine, etc.
> we cannot say anything about the character of the consciousness of abstract logical machines.
Why? On the contrary, some are simple enough that we can say a lot of things.
> To truly "explain" consciousness, we're going to have to understand the dynamics that emerge from assemblages of (large) groups of neurons, and how psy phenomena correlate to those dynamics.
I don't think so at all. Consciousness does not depend on any of its
particular implementations.
A little more below...
> Bruno Marchal wrote:
>> If no, do you think it is important to explain how biological machines like us do have access to our beliefs?
>>
>> That is crucial indeed. But this is exactly what Gödel did solve. A simple arithmetical prover has access to its beliefs, because the laws of addition and multiplication can define the prover itself. That definition (the "Bp") can be implicit or explicit, and, like a patient in front of a description of his own brain, the machine cannot recognize itself in that description; yet the access is there, by virtue of its built-in ability. The machine identifies itself only with Bp & p, and so will never be able to acknowledge the identity between Bp and Bp & p. That identity belongs to G* minus G. The machine will have to bet on it (to say "yes" to the doctor).
> This seems like an evasive answer, because Gödel only proved this for the logical machine.
Not at all. It works for any self-referentially correct machine, at any level. The very fact that you argue shows that you are logical enough for that. Now, "the ideally correct machine" provides a simplification. But the surprise is that even with such a simplification we get a surprisingly rich theology, having physics as a part.
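The sense in which a machine "has access to" its own definition can be illustrated, loosely, by a quine: the programming analogue of the diagonal construction Gödel used. A minimal sketch (the variable names are mine, purely illustrative):

```python
# Diagonalization in miniature: `src` is a template that, applied
# to itself, reproduces the exact text of the two lines below.
src = 'src = %r\nout = src %% src'
out = src % src

# `out` is now character-for-character identical to the two
# defining lines: the "machine" contains its own description, even
# though nothing in %-formatting mentions self-reference.
```

Like the patient in front of a description of his brain, nothing inside `src` announces "this is me"; the self-description is there structurally, not by recognition.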
> I am saying that we can assume comp but still not have access to the propositions of a level that emerges from the computed substrate.
We don't. It is a consequence of assuming comp. That is why I insist
that comp is a sort of rational religion.
> Bruno Marchal wrote:
>> For the qualia, I am using the classical theory of Theaetetus, and its variants. So I define new logical operators: Bp & p, Bp & Dt, Bp & Dt & p. The qualia appear with Bp & p (but amazingly enough, those qualia are communicable, at least between Löbian entities).
> Doesn't their communicability (between Löbian entities) represent a contradiction? I'm not sure how you can call them qualia anymore.
Not at all. Most qualia are non-communicable (notably in the Bp & Dt & p logics). It is amazing, and hard to prove, but not contradictory, that Bp & p is communicable. In the end it is normal, because it probably corresponds to the qualia of being convinced by a rational argument. That seems communicable, even if we cannot communicate that we have the qualia as such.
> Bruno Marchal wrote:
>> The hallucination's existence is counter-intuitive because it seems to imply that our consciousness is static, and that time is a complex product of brain activity (or of the existence of some number relation). I thought that consciousness needs the illusion of time, but salvia makes possible a hallucination which is out of time. How could we hallucinate that? I see only one solution: we are conscious even before we build our notion of time.
> I don't see why this is counter-intuitive for you, Bruno, given that (assuming comp) all experiences of time, as experienced by infinities of universal numbers, are happening in Platonia, which is by definition timeless.
But normally time is an inside view. We don't have access (normally) to the 3-D view of the timeless Platonia. That would be like stepping out of the complete reality, seeing GOD, etc. Despite salvia, I still doubt that this is possible, and especially that one could come back after similar experiences.
> The self-consciousness you attribute to Löbian machines does not require time either, correct?
I thought it could not *not* create time, once conscious. Apparently it can, in some hallucinated state. I can't avoid astonishment!
> Thanks for your interesting write-ups of your salvia experiences... definitely food for thought.
Thanks. Best,
Bruno
http://iridia.ulb.ac.be/~marchal/
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.