Hi Stephen P. King,

The physical is, and only is, what you can measure.
Roger Clough, rclo...@verizon.net
9/17/2012
Leibniz would say, "If there's no God, we'd have to invent him so that everything could function."

----- Original Message -----
From: Stephen P. King
To: everything-list
Sent: 2012-09-16, 12:13:52
Subject: Re: Zombieopolis Thought Experiment

On 9/16/2012 8:42 AM, Craig Weinberg wrote:

On Saturday, September 15, 2012 6:21:14 AM UTC-4, stathisp wrote:

On Sat, Sep 15, 2012 at 2:55 AM, Craig Weinberg <whats...@gmail.com> wrote:

> What you think third party observable behavior means is the set of all
> properties which are externally discoverable. I am saying that is a
> projection of naive realism, and that in reality, there is no such set, and
> that in fact the process of discovery of any properties supervenes on the
> properties of all participants and the methods of their interaction.

Of course there is a set of all properties that are externally discoverable, even if you think this set is very small!

No, there isn't. That is what I am telling you. Nothing exists outside of experience, which is creating new properties all of the time. There is no set at all. There is no such thing as a generic externality... each exterior is only a reflection of the interior of the system which discovers the interior of other systems as exteriors.

Hi Craig! EXACTLY!

Moreover, this set has subsets, and we can limit our discussion to these subsets. For example, if we are interested only in mass, we can simulate a human perfectly using the right number of rocks. Even someone who believes in an immortal soul would agree with this.

No, I don't agree with it at all. You are eating the menu. A quantity of mass doesn't simulate anything except in your mind. Mass is a normative abstraction which we apply in comparing physical bodies with each other. To reduce a human being to a physical body is not a simulation; it is only weighing a bag of organic molecules.

Thus we can realistically claim that the physical world is exactly and only all things that we (as we truly are) have in common. What must be understood is that as the number of participating entities increases to infinity, the number of "things in common" goes to zero. Only for a large but finite set of entities will there be a sizeable number of relations that the entities have in common without a degeneracy relation between them. A black hole is a nice demonstration of the degeneracy idea. The effect of gravity is the force of degeneracy: when all the ground states are forced to normalize and become identical with each other, the "space" and "delay" (time) that differ between them collapse to zero, and thus we get a singularity in the limit of the degeneracy.

> My point of using cats in this thought experiment is to specifically point
> out our naivete in assuming that instruments which extend our perception in
> only the most deterministic and easy to control ways are sufficient to
> define a 'third person'. If we look at the brain with a microscope, we see
> those parts of the brain that microscopes can see. If we look at New York
> with a swarm of cats, then we see the parts of New York that cats can see.

Yes, but there are properties of the brain that may not be relevant to behaviour. Which properties are in fact important is determined by experiment. For example, we may replace the myelin sheath with a synthetic material that has similar electrical properties and then test an isolated nerve to see if action potentials propagate in the same way.
If they do, then the next step is to incorporate the nerve in a network and see if the pattern of firing in the network looks normal. The step after that is to replace the myelin in the brain of a rat to see if the animal's behaviour changes. The modified rats are compared to unmodified rats by a blinded researcher to see if he can tell the difference. If no-one can consistently tell the difference then it is announced that the synthetic myelin appears to be a functionally identical substitute for natural myelin.

Craig's point here is that if we are going to perform a substitution then the artificial component must be capable of reproducing *all* of the functions of the neuron, unless we are going to ignore the fact that neurons are not *just transistors*. We cannot fail to recognize that a neuron is not just one thing; it is many things to the other neurons, to the rest of the body, and to the environment beyond it. We need to drop the idea that the universe is made up of gears and levers and springs and understand that it is not uniquely decomposable into isolated entities that can somehow retain their set of unique properties in isolation.

Except it isn't identical. No imitation substance is identical to the original. Sooner or later the limits of the imitation will be found - or they could be advantages. Maybe the imitation myelin prevents brain cancer or heat stroke or something, but maybe it also prevents sensation in cold weather, or maybe certain amino acids now cause Parkinson's disease. There is no such thing as identical. There is only 'seems identical from this measure at this time'.

Exactly. If we are going to invoke functional equivalence then we must invoke all of the functions that are involved, not just some of them.

As is the nature of science, another team of researchers may then find some deficit in the behaviour of the modified rats under conditions the first team did not examine. Scientists then make modifications to the formula of the synthetic myelin and do the experiments again.

Which is great for medicine (although ultimately maybe unsustainably expensive), but it has nothing to do with the assumption of identical structure and the hard problem of consciousness. There is no such thing as identical experience.

Indeed! We can easily see that the principle of identity of indiscernibles is involved here. Minds, the "things that are conscious", do not exist "in space" as physical objects and thus do not have positions or momenta or spin or duration quantities that can be used to externally locate them in different places so that the PII can be safely ignored. OTOH, minds must be implemented or else they are just the "presupposition of a possible thought". They have to be functionally implemented "in the flesh" if they are even to have the possibility of interacting with each other and thus gaining knowledge of themselves and the world (of other minds). I have suggested that we can perhaps define consciousness as that which has never been repeated. It is the antithesis of that which can be repeated (hence the experience of "now"), even though experiences themselves can seem very repetitive. They only seem so from the vantage point of a completely novel moment of consideration of the memories of previous iterations. The postulate of "No Doppelgangers" by Gordon Pask and the quantum "no cloning" theorem speak to this directly.

> This is the point of the thought experiment.
> The limitations of all forms of measurement and perception preclude all
> possibility of there ever being any such thing as an exhaustively complete
> set of third person behaviors of any system.
>
> What is it that you don't think I understand?

What you don't understand is that an exhaustively complete set of behaviours is not required.

Yes, it is. Not for prosthetic enhancements, or repairs to a nervous system, but to replace a nervous system without replacing the person who is using it, yes, there is no set of behaviors which can ever be exhaustive enough in theory to accomplish that.

True if and only if the set of behaviors (functions) is truly infinite. What needs to be understood is that we can safely ignore all of the infinity except for a finite subset in our models of interactions. We must pay a price for doing this, and it is the price of not having a completely deterministic theory.

You might be able to do it biologically, but there is no reason to trust it unless and until someone can be walked off of their brain for a few weeks or months and then walked back on.

LOL! Indeed!

I don't access an exhaustively complete set of behaviours to determine if my friends are the same people from day to day, and in fact they are *not* the same systems from day to day, as they change both physically and psychologically. I have in mind a rather vague set of behavioural limits, and if the people who I think are my friends deviate significantly from these limits I will start to worry.

Which is exactly why you would not want to replace your friends with devices capable only of programmed deviations. Are simulated friends 'good enough'? Will it be good enough when your friends convince you to be replaced by your simulation?

He wants complete predictability, Craig. That is why. To predict exactly what something is going to do is to be able to control it. We humans have this hang-up about having to control everything....

Craig

--
Stathis Papaioannou

--
Onward!

Stephen

http://webpages.charter.net/stephenk1/Outlaw/Outlaw.html