On 2/16/2012 3:02 PM, acw wrote:
On 2/16/2012 22:37, meekerdb wrote:
On 2/16/2012 1:00 PM, acw wrote:
On 2/16/2012 20:40, Stephen P. King wrote:
On 2/16/2012 2:32 PM, meekerdb wrote:
On 2/16/2012 11:09 AM, Stephen P. King wrote:

All of this substitution stuff is predicated upon the possibility
that the brain can be emulated by a Universal Turing Machine. It
would be helpful if we first established that a Turing Machine is
capable of what we are assuming it to be able to do. I am pretty well
convinced that it cannot, based on all that I have studied of QM and
its implications.
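
For concreteness, this is the classical model at issue. A minimal Python sketch of a Turing machine simulator, with a made-up bit-flipping rule table purely for illustration:

def run_tm(transitions, tape_str, state="q0", halt="halt", max_steps=10000):
    """Run a machine given as {(state, symbol): (new_state, write, move)}."""
    tape = dict(enumerate(tape_str))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape))

# A machine that flips bits left to right until it reaches a blank:
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # -> ('halt', '1001_')

The substitution question is whether everything the brain does can, even in principle, be captured by a finite rule table like this one.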

This is where the paradox of the philosophical zombie arises. It seems
pretty certain that a TM, given the right program, can exhibit
intelligence. So can we then deny that it is conscious based on
unobservable quantum entanglements (i.e. those that make its
computation classical)?

Brent
So are intelligence and consciousness, a la having 1p, qualia, and all that
subjective experience stuff, the same thing in your mind?
Surely they must be related. If not, you do indeed get the p. zombie
problem: someone who acts in all respects like a person with (assumed)
consciousness, indistinguishable in behavior, yet without consciousness.
The question boils down to this: suppose you knew some person well and
one day they got a digital brain transplant; they still behave more or
less as you remember them. Do you think they are now without
consciousness, or merely that their consciousness is a bit changed due
to different quantum entanglements?

I think substituting for neurons or even groups of neurons in the human
brain would preserve consciousness with perhaps minor changes.
Probably; otherwise the nature of consciousness is really fickle and doesn't match our introspection ( http://consc.net/papers/qualia.html ).
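
The gradual-substitution argument in that paper can be rendered as a toy loop: swap units one at a time while checking that input/output behavior never changes. Everything below is a hypothetical stand-in, not a model of real neurons:

def biological_unit(x):   # stand-in for a neuron's input/output function
    return 1 if x > 0 else 0

def silicon_unit(x):      # functional duplicate on a different substrate
    return 1 if x > 0 else 0

def gradually_substitute(units, replacement, probes):
    """Swap units one at a time, checking behavior is identical at every stage."""
    for i in range(len(units)):
        before = [u(p) for u in units for p in probes]
        units[i] = replacement
        after = [u(p) for u in units for p in probes]
        assert before == after, "behavioral change at step %d" % i
    return units

gradually_substitute([biological_unit] * 5, silicon_unit, [-1, 0, 1])

Since behavior is identical at every stage, any fading of the qualia would be invisible from the outside, including to the subject's own introspective reports, which is what makes fading qualia so implausible.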

But when
it comes to the question of whether an intelligently behaving robot is
necessarily conscious, I'm not so sure. I think it would depend on the
structure and programming. It would have *some kind* of consciousness,
but it might be rather different from human consciousness.

It would depend on the cognitive architecture and structures involved. If the cognitive architecture is something really different from ours, it might be hard to hazard a guess. I can also imagine some optimizers which are capable of giving intelligent answers but to which I would have trouble attributing any meaningful consciousness (for example, an AI which just brute-forces the problem and performs no induction or anything similar to how we think). On the other hand, I would potentially attribute consciousness similar to ours to some neuromorphic AI, and something stranger, not directly comprehensible to me, to an AI which is based on our high-level psychology but different in most other ways in implementation. I suppose if/when we do crack the AGI problem, there will be a lot of interesting things to investigate about the nature of such foreign consciousness.
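
To make the brute-force case concrete, here is a toy solver in that spirit: it inverts a function by sheer enumeration, giving correct, intelligent-looking answers with nothing resembling induction or a model of the problem behind them (names invented for illustration):

from itertools import product

def brute_force_invert(f, target, alphabet="01", max_len=8):
    """Find an x with f(x) == target by exhaustive enumeration; no learning."""
    for n in range(1, max_len + 1):
        for chars in product(alphabet, repeat=n):
            x = "".join(chars)
            if f(x) == target:
                return x
    return None

# Competent-looking output with nothing resembling understanding behind it:
print(brute_force_invert(lambda s: s[::-1], "1101"))  # -> '1011'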

Which is why I think we'll solve the artificial *intelligence* problems and learn to create different intelligent and emotive behaviors and different personalities, and how they depend on architecture; questions about 'consciousness' will then become otiose.

Brent


Note that Bruno answers the concern about interaction/entanglement with
the environment by saying that the correct level of substitution may
include arbitrarily large parts of the environment. I think this is
problematic because the substitution (and the computation) are
necessarily classical.
In a way, that would keep some of COMP's conclusions valid (a weakening of the theory), but it's not very practical. I tend instead to think that the machines implementing the observer below the substitution level can vary as much as they want, as long as the observer is consistently implemented (a continuation where the observer isn't consistently implemented either is no longer a continuation of the observer or is a low-measure one, although some of these details do need to be worked out). One question that bothers me: if the observer is actually entangled quite a bit with these lower-level machines and a digital substitution is performed at a higher level, the functionality may remain the same, but the measure/consistent extensions may get altered. Better hope there aren't too many white rabbits if the substitution level is too high; otherwise it would lead to unstable, "jumpy" realities for SIMs.
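
As a toy numerical illustration of that measure worry (my own framing, not anything from Bruno's formalism): treat each step of a continuation as staying consistent with some per-step probability, and watch how quickly small inconsistencies compound:

import random

def stable_fraction(p_step, steps=50, trials=100000):
    """Fraction of sampled histories where every step stays consistent."""
    ok = sum(
        all(random.random() < p_step for _ in range(steps))
        for _ in range(trials)
    )
    return ok / trials

print(stable_fraction(0.999))  # ~0.95: consistent continuations dominate
print(stable_fraction(0.95))   # ~0.08: "jumpy" histories dominate

With per-step consistency 0.999, roughly 95% of 50-step histories stay stable; at 0.95, only about 8% do, and the rest are exactly the "jumpy" realities mentioned above.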

Brent



Onward!

Stephen






