Why is there something rather than nothing?
Hi. I used to post to this list but haven't in a long time. I'm a biochemist but like to think about the question "Why is there something rather than nothing?" as a hobby. If you're interested, some of my ideas on this question and on "Why do things exist?", on infinite sets, and on the relationships of all this to mathematics and physics are at: https://sites.google.com/site/ralphthewebsite/ An abstract of the "Why do things exist?" and "Why is there something rather than nothing?" paper is below. Thank you in advance for any feedback you may have. Sincerely, Roger Granet (roger...@yahoo.com) Abstract: In this paper, I propose solutions to the questions "Why do things exist?" and "Why is there something rather than nothing?" In regard to the first question, "Why do things exist?", it is argued that a thing exists if the contents of, or what is meant by, that thing are completely defined. A complete definition is equivalent to an edge or boundary defining what is contained within and giving “substance” and existence to the thing. In regard to the second question, "Why is there something rather than nothing?", nothing, or non-existence, is first defined to mean: no energy, matter, volume, space, time, thoughts, concepts, mathematical truths, etc.; and no minds to think about this lack-of-all. It is then shown that this non-existence itself, not our mind's conception of non-existence, is the complete description, or definition, of what is present. That is, no energy, no matter, no volume, no space, no time, no thoughts, etc., in and of itself, describes, defines, or tells you, exactly what is present. Therefore, as a complete definition of what is present, nothing, or non-existence, is actually an existent state. So, what has traditionally been thought of as nothing, or non-existence, is, when seen from a different perspective, an existent state or something. Said yet another way, non-existence can appear as either nothing or something depending on the perspective of the observer. 
Another argument is also presented that reaches this same conclusion. Finally, this reasoning is used to form a primitive model of the universe via what I refer to as philosophical engineering. -- You received this message because you are subscribed to the Google Groups Everything List group. To post to this group, send email to everything-list@googlegroups.com. To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
Re: bruno list
On Sun, Aug 7, 2011 at 11:07 PM, Craig Weinberg whatsons...@gmail.com wrote: That, as I keep saying, is the question. Assume that the bot can behave like a person but lacks consciousness. No. You have it backwards from the start. There is no such thing as 'behaving like a person'. There is only a person interpreting something's behavior as being like a person. There is no power emanating from a thing that makes it person-like. If you understand this you will know because you will see that the whole question is a red herring. If you don't see that, you do not understand what I'm saying. Interpreting something's behaviour as being like a [person's] is what I mean by behaving like a person. Then it would be possible to replace parts of your brain with non-conscious components that function otherwise normally, which would lead to you lacking some important aspect of consciousness but being unaware of it. This is absurd, but it is a corollary of the claim that it is possible to separate consciousness from function. Therefore, the claim that it is possible to separate consciousness from function is shown to be false. If you don't accept this then you allow what you have already admitted is an absurdity. It's a strawman of consciousness that is employed in circular thinking. You assume that consciousness is a behavior from the beginning and then use that fallacy to prove that behavior can't be separated from consciousness. Consciousness drives behavior and vice versa, but each extends beyond the limits of the other. No, I do NOT assume that consciousness follows from behaviour (and certainly not that it IS behaviour) from the beginning!! I've lost count of the number of times I have said assume that it has the behaviour, but not the consciousness, of a brain component. How can I make it clearer? 
What other language can I use to convey that the thing is unconscious but to an external observer, who can't know its subjective states, it does the same sorts of mechanical things as its conscious counterpart? The human race has already been supplanted by a superhuman AI. It's called law and finance. They are not entities and not intelligent, let alone intelligent in the way humans are. What makes you think that law and finance are any less intelligent than a contemporary AI program? Law and finance are abstractions. A computer may be programmed to solve financial problems, and then it has a limited intelligence, but it's incorrect to say that finance is therefore intelligent. When you say that intelligence can 'fake' non-intelligence, you imply an internal experience (faking is not an external phenomenon). Intelligence is a broad, informal term. It can mean subjectivity, intersubjectivity, or objective behavior, although I would say not truly objective but intersubjectively imagined as objective. I agree that consciousness or awareness is different from any of those definitions of intelligence, which would actually be categories of awareness. I would not say that a zombie is intelligent. Intelligence implies understanding, which is internal. What a computer or a zombie has is intelliform mechanism. If a computer or zombie can solve the same wide range of problems as a human then it is ipso facto as intelligent as a human. If you discover that your friend whom you have known for twenty years is actually a robot, you may doubt in the light of this knowledge that he is conscious, but you can't doubt that he is intelligent, since that is based purely on your observations of his behaviour and not on his internal state. -- Stathis Papaioannou
Re: bruno list
On Mon, Aug 8, 2011 at 4:35 AM, meekerdb meeke...@verizon.net wrote: On 8/7/2011 4:42 AM, Stathis Papaioannou wrote: That, as I keep saying, is the question. Assume that the bot can behave like a person but lacks consciousness. Then it would be possible to replace parts of your brain with non-conscious components that function otherwise normally, which would lead to you lacking some important aspect of consciousness but being unaware of it. Put that way it seems absurd. But what about lacking consciousness but *acting as if you were unaware* of it? The philosophical zombie says he's conscious and has an internal narration and imagines and dreams...but does he? Can we say that he must? If he says he doesn't, can we be sure he's lying? Even though I think functionalism is right, I think consciousness may be very different depending on how the internal functions are implemented. I go back to the example of having an inner narration in language (which most of us didn't have before age 4). I think Julian Jaynes was right to suppose that this was an evolutionary accident in co-opting the perceptual mechanism of language. In a sense all thought may be perception; it's just that some of it is perception of internal states. The trick is to consider not full-blown zombies but partial zombies based on partial brain replacement. If your visual cortex is replaced with zombie neurons your visual qualia will disappear but the rest of the brain will receive normal input, so you will declare that you can see normally. The possibilities are: (a) You can in fact see normally. In general, if the behaviour of the brain is replicated then the consciousness is also replicated. (b) You are blind but don't realise it, believe you have normal sight and declare that you have normal sight. (c) You are blind and realise you are blind but can't do anything about it, observing helplessly as your vocal cords apparently of their own accord declare that everything is normal. 
I think (a) is the only plausible one of these possibilities. -- Stathis Papaioannou
Re: bruno list
On Aug 8, 8:19 am, Stathis Papaioannou stath...@gmail.com wrote: On Sun, Aug 7, 2011 at 11:07 PM, Craig Weinberg whatsons...@gmail.com wrote: That, as I keep saying, is the question. Assume that the bot can behave like a person but lacks consciousness. No. You have it backwards from the start. There is no such thing as 'behaving like a person'. There is only a person interpreting something's behavior as being like a person. There is no power emanating from a thing that makes it person-like. If you understand this you will know because you will see that the whole question is a red herring. If you don't see that, you do not understand what I'm saying. Interpreting something's behaviour as being like a [person's] is what I mean by behaving like a person. I know that's what you mean, but I'm trying to explain why those two phrases are polar opposites in this context, because the whole thread is about the difference between subjectivity and objectivity. If a chip could behave like a person, then we wouldn't be having this conversation right now. We'd be hanging out with our digital friends instead. Every chip we make would have its own perspective and do what it wanted to do, like an infant or a pollywog would. If we want to make a chip that impersonates something that does have its own perspective and does what it wants to, then we can try to do that with varying levels of success depending upon who you are trying to fool, how you are trying to fool them, and for how long. The fact that any particular person interprets the thing as being alive or conscious for some period of time is not the same thing as the thing being actually alive or conscious. Then it would be possible to replace parts of your brain with non-conscious components that function otherwise normally, which would lead to you lacking some important aspect of consciousness but being unaware of it. 
This is absurd, but it is a corollary of the claim that it is possible to separate consciousness from function. Therefore, the claim that it is possible to separate consciousness from function is shown to be false. If you don't accept this then you allow what you have already admitted is an absurdity. It's a strawman of consciousness that is employed in circular thinking. You assume that consciousness is a behavior from the beginning and then use that fallacy to prove that behavior can't be separated from consciousness. Consciousness drives behavior and vice versa, but each extends beyond the limits of the other. No, I do NOT assume that consciousness follows from behaviour (and certainly not that it IS behaviour) from the beginning!! I've lost count of the number of times I have said assume that it has the behaviour, but not the consciousness, of a brain component. How can I make it clearer? What other language can I use to convey that the thing is unconscious but to an external observer, who can't know its subjective states, it does the same sorts of mechanical things as its conscious counterpart? Isn't the whole point of the gradual neuron substitution example to prove that consciousness must be behavior? That if the behavior of the neurons is the same, and accepted as the same, then the conscious experience of the brain as a whole must be the same? Sorry if I'm not getting your position right, and it is a subtle thing to try to dissect. I think the word 'behavior' implies a certain level of normative repetition which is not sufficient to describe the ability of neurological awareness to choose whether to respond in the same way or a new and unpredictable way. When you look at what neurons are actually like, I think the idea of them having a finite set of behaviors is not realistic. It's like saying that because speech can be translated into words and letters, words and letters should be able to automatically produce the voice of their speakers. 
The human race has already been supplanted by a superhuman AI. It's called law and finance. They are not entities and not intelligent, let alone intelligent in the way humans are. What makes you think that law and finance are any less intelligent than a contemporary AI program? Law and finance are abstractions. A computer may be programmed to solve financial problems, and then it has a limited intelligence, but it's incorrect to say that finance is therefore intelligent. Computer programming languages are abstractions too. Law and finance are machine logics that program the computer of civilization, and as such, no more or less intelligent than any other machine. When you say that intelligence can 'fake' non-intelligence, you imply an internal experience (faking is not an external phenomenon). Intelligence is a broad, informal term. It can mean subjectivity, intersubjectivity, or objective behavior, although I would say not truly objective but intersubjectively imagined as objective. I agree that
Re: COMP refutation paper - finally out
John Mikes wrote: benjayk wrote: *Sorry, I can't follow you... You do not accept the concept of consciousness **and then want an origin for it?* I see you did not follow me... I asked for some identification of that mystical noumenon we are talking about exactly* to make it acceptable for discussion*. T H E N - I F it turns out to BE acceptable, we may well contemplate an origination for it - if???... Better followable now? Sorry for not having been clearer. Ah, OK. As I see it, (what I mean when I say) consciousness is simply self-evident, obvious - you might even say it is obviousness itself. There can be no remotely exact definition of it, as it is too simple (it can't be cut into analyzable pieces) and too complex (it has many different facets) for that. It is that in which definitions arise. Just as one sentence in a book cannot capture the whole book, no definition can capture consciousness. To define consciousness and talk about its properties means labeling and representing it. That's not wrong, but we should be clear that it is ultimately undefinable and not even understandable. If you ask me what consciousness is, then I can just invite you to look at what already is obvious. In order to become more aware of how obvious it really is, it might be useful not to conceptualize it, and not to jump to the conclusion "It's trivial that I am conscious." If we always search for consciousness as something concretely graspable (by the mind) we will miss the obvious fact that we simply are conscious and that the mind can't really grasp it. You might say that if we don't know what exactly we are talking about, it makes no sense to talk about it. But I don't think that's necessarily true. When we first learn about something, we don't know exactly what we are talking about, and then learn more about it by asking questions or contemplating. John Mikes wrote: BTW I never said that I do not accept the term consciousness - if it is identified in a way that makes sense (to me). 
I even worked on it (1992) to apply the word to something *more general* than e.g. awareness or similar 'human' peculiarities. When I say consciousness I just mean the ability to experience (in the broadest sense). benjayk -- View this message in context: http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p32218486.html Sent from the Everything List mailing list archive at Nabble.com.
Re: Math Question
On 07 Aug 2011, at 21:41, Craig Weinberg wrote: On Aug 1, 2:29 pm, Bruno Marchal marc...@ulb.ac.be wrote: Bruno Stephen, Isn't there a concept of imprecision in absolute physical measurement and drift in cosmological constants? Are atoms and molecules all infinitesimally different in size or are they absolutely the same size? Certainly individual cells of the same type vary in all of their measurements, do they not? If so, that would seem to suggest my view - that arithmetic is an approximation of feeling, and not the other way around. Cosmos is a feeling of order, or of wanting to manifest order, but it is not primitively precise. Make sense? Not really. The size of a molecule can be considered infinite, if you describe the molecule by its quantum wave. I don't see why arithmetic would approximate feeling, nor what that could mean. I don't see what you mean by cosmos, etc. Biological processes, then, could be conceived as a 'levelling up' of molecular arithmetic having been formally actualized, I don't understand. What do you mean by molecular arithmetic, etc. a more significant challenge is attempted on top of the completed molecular canvas - with more elasticity and unpredictability, and a host of newer, richer feelings which expand upon the molecular range, becoming at once more tangible and concrete, more real, and more unreal and abstract. The increased potential for unreality in the subjective interiority of the cells is what creates the perspective necessary to conceive of the molecular world as objectively real by contrast. The nervous system does the same trick one level higher. I see the words, but fail to see any precise meaning. It seems to me that you postulate all the notions that I think we should explain from simpler notions we agree on. Bruno http://iridia.ulb.ac.be/~marchal/
Re: Simulated Brains
On 08 Aug 2011, at 03:03, Craig Weinberg wrote: Interesting article: Residents of the brain: Scientists turn up startling diversity among nerve cells http://www.sciencenews.org/view/feature/id/332400/title/Residents_of_the_brain_ No two cells are the same. Zoom in, and the brain’s wrinkly, pinkish-gray exterior becomes a motley collection of billions of cells, each with personalized quirks and idiosyncrasies. New results suggest, for instance, that a population of nerve cells in which individual responses to an electrical poke differ can process more information than a group in which responses are the same. in addition to losing neurons, the brain would lose diversity, a deficit that could usher in even more damage. I would say this tends to support my view that the idea of replacement neurons or normative behavior modeling is likely to be a dead end as far as functionalism is concerned. It's more appropriate to consider your brain a civilization of individual organisms (only some of which are the conscious 'I') You mean some neurons are me? That is worse than the grandmother neuron idea. rather than a powerful computer executing complicated instructions. This is just a question of making the comp level lower, and has no bearing on the consequences of comp. The molecules themselves have no individual differences, and in case they have, again, this would only put the level of substitution lower. No machine can know for sure its own substitution level, and the obligation to reduce physics to the arithmetical biology and theology of numbers follows only from the *existence* of a level, not from the choice of such a level. Bruno http://iridia.ulb.ac.be/~marchal/
Re: COMP refutation paper - finally out
On 07 Aug 2011, at 21:50, benjayk wrote: Bruno Marchal wrote: Bruno Marchal wrote: Bruno Marchal wrote: Then computer science provides a theory of consciousness, and explains how consciousness emerges from numbers, How can consciousness be shown to emerge from numbers when it is already assumed at the start? In science we assume at some meta-level what we try to explain at some level. We have to assume the existence of the moon to try theories about its origin. That's true, but I think this is a different case. The moon seems to have a past, so it makes sense to say it emerged from its constituent parts. In the past, it was already there as a possibility. OK, I should say that it emerges arithmetically. I thought you did already understand that time is not primitive at all. More on this below. Yeah, the problem is that consciousness emerging from arithmetic means just that we manage to point to its existence within the theory. Er well, OK. But arithmetic also explains why it exists, why it is undoubtable yet non definable, how it brings matter into the picture, etc. We have no reason to suppose this expresses something more fundamental, that is, that consciousness literally emerges from arithmetic. Honestly, I don't even know how to interpret this literally. It means that the arithmetical reality is full of conscious entities of many sorts, so that we don't have to postulate the existence of consciousness, nor matter, in the ontological part of the TOE. We recover them, either intuitively, with the non-zombie rule, or formally, in the internal epistemology canonically associated to self-referring numbers. Bruno Marchal wrote: But consciousness as such has no past, so what would it mean that it emerges from numbers? Emerging is something taking place within time. Otherwise we are just saying we can deduce it from a theory, but this in and of itself doesn't mean that what is derived is prior to what it is derived from. 
To the contrary, what we call numbers just emerges after consciousness has been there for quite a while. You might argue that they were there before, but I don't see any evidence for it. What the numbers describe was there before, this is certainly true (or you could say they were implicitly there). OK. That would be a real disagreement. I just assume that the arithmetical relations are true independently of anything. For example I consider the truth of Goldbach's conjecture as already settled in Platonia. Either it is true that every even number bigger than 2 is the sum of two primes, or it is not true, and this independently of any consideration of time, space, humans, etc. Humans can easily verify this for small even numbers: 4 = 2+2, 6 = 3+3, 8 = 3+5, etc. But we have not found a proof of this, although many people have searched for one. I can see that the expression of such a statement needs humans or some thinking entity, but I don't see how the fact itself would depend on anything (but the definitions). My point is subtle, I wouldn't necessarily completely disagree with what you said. The problem is that in some sense everything is already there in some form, so in this sense 1+1=2 and 2+2=4 is independently, primarily true, but so is everything else. The theory must explain why and how relative contingencies happen, and it has to explain the necessities (natural laws), etc. Consciousness is required for any meaning to exist, That is ambiguous. If you accept that some propositions can be true independently of us, it can mean that some meanings are true independently of us. If not, you need someone to observe the big bang to make it happen, or the numbers to make them exist. and ultimately is equivalent to it (IMO), so we derive from the meaning in numbers that meaning exists. It's true, but ultimately trivial. No, we derive from numbers+addition+multiplication a theory of meaning, consciousness, matter. 
You should not confuse a theory with its meaning, interpretation, etc. It happens that we can indeed explain how numbers develop meanings for number relations, etc. Either everything is independently true, which doesn't really seem to be the case, or things are generally interdependent. 1+1=2 is just true because 2+2=4, and I can just be conscious because 1+1=2, but 1+1=2 is just true because I am conscious, and 1+1=2 is true because my mouse pad is blue, etc... This view makes sense to me, because it is so simple. One particular true statement is true only because every particular true statement is true, and because what is true is true. In this sense every statement is true because of every other statement. If we derive something, we just explain how we become aware of the truth (of a statement). There is no objective hierarchy of emergence (but apparently necessarily a subjective progression, we will first
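Bruno's Goldbach check above (4 = 2+2, 6 = 3+3, 8 = 3+5) is easy to mechanize for small even numbers; a minimal Python sketch (the function names here are illustrative, not from any post in this thread):

```python
def is_prime(k):
    """Trial division; adequate for the small numbers checked here."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_pair(n):
    """Return a pair of primes (p, q) with p + q == n, or None.

    For an even n > 2, a None result would be a counterexample to
    Goldbach's conjecture; no such counterexample is known.
    """
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# The small cases cited in the post:
assert goldbach_pair(4) == (2, 2)
assert goldbach_pair(6) == (3, 3)
assert goldbach_pair(8) == (3, 5)
# No counterexample among the even numbers up to 1000:
assert all(goldbach_pair(n) is not None for n in range(4, 1001, 2))
```

Such a check only ever verifies finitely many cases, which is precisely the point being made: however far the verification runs, the general statement remains unproven.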
Re: Why is there something rather than nothing?
On 8/7/2011 11:40 PM, Roger wrote: Hi. I used to post to this list but haven't in a long time. I'm a biochemist but like to think about the question "Why is there something rather than nothing?" as a hobby. If you're interested, some of my ideas on this question and on "Why do things exist?", on infinite sets, and on the relationships of all this to mathematics and physics are at: https://sites.google.com/site/ralphthewebsite/ An abstract of the "Why do things exist?" and "Why is there something rather than nothing?" paper is below. Thank you in advance for any feedback you may have. Sincerely, Roger Granet (roger...@yahoo.com) Abstract: In this paper, I propose solutions to the questions "Why do things exist?" and "Why is there something rather than nothing?" In regard to the first question, "Why do things exist?", it is argued that a thing exists if the contents of, or what is meant by, that thing are completely defined. Things that are completely defined are mathematical abstractions: like a differentiable manifold or the natural numbers. One might even argue that an essential characteristic of things that exist is that they can have unknown properties. But perhaps I'm misreading what you mean by defined. Maybe you just mean that things that exist either have a property or not, independent of our knowledge. So Vic either has a mole on his left side or he doesn't, even though we don't know which; whereas it makes no sense to even wonder whether Sherlock Holmes has a mole on his left side. Brent
Re: Simulated Brains
Craig, Now I agree that my example was not good. I have searched some more. What about phantom pain, that is, pain in a limb that has been removed by amputation? What does your theory say about such a thing? Evgenii On 07.08.2011 22:28 Evgenii Rudnyi said the following: On 07.08.2011 21:26 Craig Weinberg said the following: On Aug 7, 11:47 am, Evgenii Rudnyi use...@rudnyi.ru wrote: On 07.08.2011 17:12 Craig Weinberg said the following: It seems that pain is some brain function, see for example http://www.thenakedscientists.com/HTML/content/interviews/interview/651/ I have just searched in Google for people that do not experience pain and this was the first link. It's saying that the amplification of pain is a molecular function: It seems there are a whole series of *proteins that detect* various types of damage, be it hot, cold, pressure, etc. These seem to be integrated together by this *SCN9A, which seems to be an amplifier* that takes these small initial tissue damage signals and turns them into a much larger sodium impulse so that a nerve can fire. What WE feel as pain is what our brain cells feel from other neurons when they are functioning properly. This genetic mutation affects the neuron's ability to amplify the pain, not the ability of the other cells of the body to feel the micro-pain that they might feel when repairing themselves from damage, and the proteins of the cell that detect that damage... which suggests that awareness is operating robustly at the molecular level. Thanks, I have to read it more carefully. Evgenii
Re: Simulated Brains
On 08.08.2011 00:03 meekerdb said the following: On 8/7/2011 11:07 AM, Evgenii Rudnyi wrote: On 07.08.2011 19:58 meekerdb said the following: On 8/6/2011 11:44 PM, Evgenii Rudnyi wrote: ... Please note that according to experimental results (see the book mentioned in my previous message), pain comes after the event. For example when you touch a hotplate, you take your hand back not because of the pain. The action actually happens unconsciously; conscious pain comes afterward. Evgenii http://blog.rudnyi.ru Which invites the question, was it pain before you were conscious of it? Would it have been pain if you'd never become conscious of it? I would say just a series of neuron spikes, what else? I mean that in the skin there is some receptor that, when it is hot, excites some neuron. That neuron excites some other neurons and eventually your muscles move your hand. You see it differently? No, but some neuron excites some other neuron is all that happens later in your brain too. So where does it become pain? Is it when those neurons in your brain connect the afferent signal with the language modes for pain, or with memories of injuries, or with a vocal cry? This is exactly the Hard Problem. Another example is our visual experience. What we see is reconstructed by our brains. The question, however, is who observes it. How does the brain create the Cartesian theater, and who sits there? These are questions that are considered from the viewpoint of neuroscience in the book that I have already mentioned: http://blog.rudnyi.ru/2011/08/consciousness-creeping-up-on-the-hard-problem.html Evgenii
Re: bruno list
On 8/8/2011 5:34 AM, Stathis Papaioannou wrote: On Mon, Aug 8, 2011 at 4:35 AM, meekerdb meeke...@verizon.net wrote: On 8/7/2011 4:42 AM, Stathis Papaioannou wrote: That, as I keep saying, is the question. Assume that the bot can behave like a person but lacks consciousness. Then it would be possible to replace parts of your brain with non-conscious components that function otherwise normally, which would lead to you lacking some important aspect of consciousness but being unaware of it. Put that way it seems absurd. But what about lacking consciousness but *acting as if you were unaware* of it? The philosophical zombie says he's conscious and has an internal narration and imagines and dreams...but does he? Can we say that he must? If he says he doesn't, can we be sure he's lying? Even though I think functionalism is right, I think consciousness may be very different depending on how the internal functions are implemented. I go back to the example of having an inner narration in language (which most of us didn't have before age 4). I think Julian Jaynes was right to suppose that this was an evolutionary accident in co-opting the perceptual mechanism of language. In a sense all thought may be perception; it's just that some of it is perception of internal states. The trick is to consider not full-blown zombies but partial zombies based on partial brain replacement. If your visual cortex is replaced with zombie neurons your visual qualia will disappear but the rest of the brain will receive normal input, so you will declare that you can see normally. The possibilities are: (a) You can in fact see normally. In general, if the behaviour of the brain is replicated then the consciousness is also replicated. (b) You are blind but don't realise it, believe you have normal sight and declare that you have normal sight. 
(c) You are blind and realise you are blind but can't do anything about it, observing helplessly as your vocal cords, apparently of their own accord, declare that everything is normal. I think (a) is the only plausible one of these possibilities. I think so too. But that doesn't show that some different arrangement of functions in the brain could not produce a qualitatively different kind of consciousness. Indeed it seems that sociopaths, for example, are different in lacking an empathy module in their brain. Some people claim that the ability to understand higher mathematics is built-in and some people have it and some don't. If we build more and more intelligent, autonomous Mars Rovers I think we will necessarily instantiate consciousness - but not consciousness like our own. So I'm interested in the question of how we can know how similar and in what ways? Brent
Re: bruno list
On 8/8/2011 6:13 AM, Craig Weinberg wrote: The machine doesn't care if it's right or wrong But my thermostat cares whether it's hot or cold. Brent
Re: COMP refutation paper - finally out
Bruno Marchal wrote: On 07 Aug 2011, at 21:50, benjayk wrote: Bruno Marchal wrote: Bruno Marchal wrote: Bruno Marchal wrote: Then computer science provides a theory of consciousness, and explains how consciousness emerges from numbers, How can consciousness be shown to emerge from numbers when it is already assumed at the start? In science we assume at some meta-level what we try to explain at some level. We have to assume the existence of the moon to try theories about its origin. That's true, but I think this is a different case. The moon seems to have a past, so it makes sense to say it emerged from its constituent parts. In the past, it was already there as a possibility. OK, I should say that it emerges arithmetically. I thought you did already understand that time is not primitive at all. More on this below. Yeah, the problem is that consciousness emerging from arithmetic means just that we manage to point to its existence within the theory. Er well, OK. But arithmetic also explains why it exists, why it is undoubtable yet non-definable, how it brings matter into the picture, etc. Well, if I try to interpret your words favourably I can bring myself to agree. But I will insist that it only explains why it exists (ultimately because of itself), and does not make sense without consciousness. I am getting a bit tired of labouring this point, but honestly your theory is postulating something that seems nonsensical to me. Why on earth would I believe in the truth of something that *can never be known in any way* (namely, that arithmetic is true without / prior to consciousness)? Bruno Marchal wrote: We have no reason to suppose this expresses something more fundamental, that is, that consciousness literally emerges from arithmetic. Honestly, I don't even know how to interpret this literally. 
It means that the arithmetical reality is full of conscious entities of many sorts, so that we don't have to postulate the existence of consciousness, nor matter, in the ontological part of the TOE. We recover them, either intuitively, with the non-zombie rule, or formally, in the internal epistemology canonically associated to self-referring numbers. But what you do is assume consciousness (you have to!) and then formulate a theory that claims itself to be primary and ontologically real, and that derives that consciousness is just epistemologically true, by virtue of hiding the assumption that consciousness already exists! It seems you are just bullshitting yourself by not mentioning consciousness as an assumption in the theory and then claiming it follows without assuming it. What you call the ontological part of the theory is just the axioms you make explicit. I don't see how this makes them ontological, and the implicit assumption epistemological. If anything, it would be the opposite. What is implicit in everything, i.e. that which cannot be removed, is ontological, and what can (apparently) be removed (or not mentioned) is epistemological. We can be conscious without any notion of numbers, but there is no notion of numbers without consciousness. Bruno Marchal wrote: Bruno Marchal wrote: OK. That would be a real disagreement. I just assume that the arithmetical relations are true independently of anything. For example, I consider the truth of the Goldbach conjecture as already settled in Platonia. Either it is true that all even numbers bigger than 2 are the sum of two primes, or it is not true, and this independently of any consideration of time, space, humans, etc. Humans can easily verify this for little even numbers: 4 = 2+2, 6 = 3+3, 8 = 3+5, etc. But we have not found a proof of this, though many people have searched for it. 
I can see that the expression of such a statement needs humans or some thinking entity, but I don't see how the fact itself would depend on anything (but the definitions). My point is subtle; I wouldn't necessarily completely disagree with what you said. The problem is that in some sense everything is already there in some form, so in this sense 1+1=2 and 2+2=4 are independently, primarily true, but so is everything else. The theory must explain why and how relative contingencies happen, and it has to explain the necessities (natural laws), etc. OK. It can theoretically explain that, no doubt about that. But from this it doesn't follow that the means of explanation (numbers) are primary. I can explain with words why humans have legs; this doesn't mean my words are the reason that humans have legs. Bruno Marchal wrote: Consciousness is required for any meaning to exist, That is ambiguous. If you accept that some propositions can be true independently of us, it can mean that some meanings are true independently of us. If not, you need someone to observe the big bang to make it happen, or the numbers to make them exist. Well, independently of
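[Editor's note] Bruno's point that the small cases of Goldbach are easy to check, while the general proof remains open, can be illustrated with a short brute-force sketch (the function names are mine, not from the thread):

```python
def goldbach_pairs(n):
    """Return the prime pairs (p, q), p <= q, with p + q == n."""
    def is_prime(k):
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k**0.5) + 1))
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]

# Matching Bruno's examples: 4 = 2+2, 6 = 3+3, 8 = 3+5.
print(goldbach_pairs(4))   # [(2, 2)]
print(goldbach_pairs(8))   # [(3, 5)]

# Every even number from 4 up to 10,000 has at least one decomposition;
# no such finite check settles the conjecture for all even numbers.
assert all(goldbach_pairs(n) for n in range(4, 10_001, 2))
```

Such checks have been pushed far beyond this range by computer search, but, as Bruno notes, verification of instances is not a proof of the universal statement.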
Re: Math Question
On Aug 8, 12:03 pm, Bruno Marchal marc...@ulb.ac.be wrote: On 07 Aug 2011, at 21:41, Craig Weinberg wrote: On Aug 1, 2:29 pm, Bruno Marchal marc...@ulb.ac.be wrote: Bruno Stephen, Isn't there a concept of imprecision in absolute physical measurement and drift in cosmological constants? Are atoms and molecules all infinitesimally different in size or are they absolutely the same size? Certainly individual cells of the same type vary in all of their measurements, do they not? If so, that would seem to suggest my view - that arithmetic is an approximation of feeling, and not the other way around. Cosmos is a feeling of order, or of wanting to manifest order, but it is not primitively precise. Make sense? Not really. The size of a molecule can be considered infinite, if you describe the molecule by its quantum wave. Wouldn't the quantum wave describe the character of groups of the molecule rather than an actual instance of the molecule? Don't individual molecules have measurable finite sizes? For instance, here http://www.quantum.at/research/molecule-interferometry-applications/molecular-quantum-lithography.html we can see C60 molecules are in the range of 2nm each. I don't see why arithmetic would approximate feeling, nor what that could mean. I don't see what you mean by cosmos, etc. For instance, a chef might make a meal by adding informal quantities of the ingredients and procedures according to how she feels. A pinch of salt, a chunk of butter, mix well, heat until crispy, etc. If she wants to publish this as a recipe, she might want to get more quantitatively precise with ingredient amounts, time and temp, etc. If however, the quantities were arithmetically precise to begin with, there would not be any need to blur them into informal terms. If the recipe for the universe is a book of numbers, there would be no need for blurry feelings to arise to mask them. 
Biological processes, then, could be conceived as a 'levelling up' of molecular arithmetic having been formally actualized, I don't understand. What do you mean by molecular arithmetic, etc.? I'm characterizing the mechanics of molecules as being more arithmetic and deterministic than that of organisms. Saying that molecular mechanics represent one level of feeling actualized into form, and that the next level is form actualizing a more powerful experience of feeling. a more significant challenge is attempted on top of the completed molecular canvas - with more elasticity and unpredictability, and a host of newer, richer feelings which expand upon the molecular range, becoming at once more tangible and concrete, more real, and more unreal and abstract. The increased potential for unreality in the subjective interiority of the cells is what creates the perspective necessary to conceive of the molecular world as objectively real by contrast. The nervous system does the same trick one level higher. I see the words, but fail to see any precise meaning. I'm saying that it's the difference between feeling and its opposite - arithmetic - which gives rise to the experience of 'reality'. It seems to me that you postulate all the notions that I think we should explain from simpler notions we agree on. Not sure what you mean. If you're saying that I postulate that feeling is not reducible but that you think we should reduce it to arithmetic, I agree. I think the idea that feeling seems like it should be reduced to something else is a consequence of the fact that our thoughts of reduction are themselves a feeling. Craig
Re: Simulated Brains
On Aug 8, 12:04 pm, Bruno Marchal marc...@ulb.ac.be wrote: You mean some neurons are me? That is worse than the grandmother neuron idea. All of your neurons are you, but only some groups are aware that they are you at any given time. If you're drunk, for instance, some parts of you are not online and what remains is more in the limbic region. rather than a powerful computer executing complicated instructions. This is just a question of making the comp level lower, and has no incidence on the consequences of comp. The molecules themselves have no individual differences, and in case they have, again, this would only put the level of substitution lower. No machine can know for sure its own substitution level, and the obligation to reduce physics to the arithmetical biology and theology of numbers follows only from the *existence* of a level, not from the choice of such a level. Not sure what you're saying here. I was trying to point out that how neurons actually behave seems not to be generic or mechanical on any level. Each neuron would have to be its own individualized simulation, which itself may have to be based upon individualized intra-cellular simulations, etc. Craig
Re: Simulated Brains
On Aug 8, 2:10 pm, Evgenii Rudnyi use...@rudnyi.ru wrote: Craig, Now I agree that my example was not good. I have searched some more. What about phantom pain, that is, pain in a limb that has been removed by amputation? What does your theory say about such a thing? I think that phantom limb pain is about the somatic-proprioceptive neurons in the brain letting you know that they have not heard from the neurons in your missing limb in a long time, and they want you to check it out. It's also processing the trauma of the loss, as if to say I'm not crazy, right? There used to be a limb here and somehow you, uh, don't have it anymore. Craig
Re: bruno list
On Aug 8, 2:21 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 6:13 AM, Craig Weinberg wrote: The machine doesn't care if it's right or wrong But my thermostat cares whether it's hot or cold. I see what you are saying, but no, it doesn't care. If it cared then it would not have to be programmed or set, it would just try to keep the temperature comfortable for itself. If it's the dead of winter and you don't have it set to heat, you will freeze to death and nobody will pay the electric bill and the thermostat won't ever do anything again. It has no capacity to understand what it is or what it's doing. Craig
Re: Simulated Brains
Brent wrote: *No, but some neuron excites some other neuron is all that happens later in your brain too. So where does it become pain? Is it when those neurons in your brain connect the afferent signal with the language modes for pain or with memories of injuries or with a vocal cry?* PAIN and more such We are talking here - I suppose - about a complexity and should not single out individual ingredients for desultory explanation, or any 'Occamized' characterization 'shaved off' from the rest of the complex. If we can 'analyze' a complexity it is not a complexity, only that portion of it that we discovered up to yesterday. The classic koan: if a branch falls in the forest and nobody is there to hear, does it make a noise? And please spare us the physicalist explanation for 'noise' as airwaves undulating, frequencies, etc. - it is only a description of the mechanism attached to it. Pain is not a thing (Ding an sich); it is a complex outcome of - among others - neuronal excitements and memories of injuries etc. that occurred in connection with a 'feeling'(?). I would not attempt to describe 'feeling' upon those physical/physiological data our science so far disclosed as attached to the more complex phenomena. Think of the inventory a long time ago: 5 senses? Last I read it was 64 and counting. Now maybe hundreds. John M On Sun, Aug 7, 2011 at 6:03 PM, meekerdb meeke...@verizon.net wrote: On 8/7/2011 11:07 AM, Evgenii Rudnyi wrote: On 07.08.2011 19:58 meekerdb said the following: On 8/6/2011 11:44 PM, Evgenii Rudnyi wrote: ... Please note that according to experimental results (see the book mentioned in my previous message), pain comes after the event. For example when you touch a hotplate, you take your hand back not because of the pain. The action actually happens unconsciously, conscious pain comes afterward. Evgenii http://blog.rudnyi.ru Which invites the question, was it pain before you were conscious of it? 
Would it have been pain if you'd never become conscious of it? I would say just a series of neuron spikes, what else? I mean that in the skin there is some receptor that when it is hot excites some neuron. That neuron excites some other neurons and eventually your muscle move your hand. You see it differently? No, but some neuron excites some other neuron is all that happens later in your brain too. So where does it become pain? Is it when those neurons in your brain connect the afferent signal with the language modes for pain or with memories of injuries or with a vocal cry? Brent
Re: bruno list
On 8/8/2011 1:33 PM, Craig Weinberg wrote: On Aug 8, 2:21 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 6:13 AM, Craig Weinberg wrote: The machine doesn't care if it's right or wrong But my thermostat cares whether it's hot or cold. I see what you are saying, but no, it doesn't care. If it cared then it would not have to be programmed or set, it would just try to keep the temperature comfortable for itself. Since you're programmed to keep yourself alive I guess that implies you don't care whether you live or die. If it's the dead of winter and you don't have it set to heat, you will freeze to death and nobody will pay the electric bill and the thermostat won't ever do anything again. It has no capacity to understand what it is or what it's doing. It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. Brent
Re: bruno list
On Aug 8, 7:18 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 1:33 PM, Craig Weinberg wrote: On Aug 8, 2:21 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 6:13 AM, Craig Weinberg wrote: The machine doesn't care if it's right or wrong But my thermostat cares whether it's hot or cold. I see what you are saying, but no, it doesn't care. If it cared then it would not have to be programmed or set, it would just try to keep the temperature comfortable for itself. Since you're programmed to keep yourself alive I guess that implies you don't care whether you live or die. We're not programmed to keep ourselves alive, we have to learn how to do that ourselves. There is some vestigial programming from thousands of years of hominid evolution, but that programming is actually hostile to our survival now and we are having to hack into our own diet to keep it from making us obese. A thermostat doesn't do that. If it's the dead of winter and you don't have it set to heat, you will freeze to death and nobody will pay the electric bill and the thermostat won't ever do anything again. It has no capacity to understand what it is or what it's doing. It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. It has no understanding. The only point of understanding is that you are able to use it to make choices. A thermostat is just an electric circuit that opens or closes depending on whether or not two particular pieces of metal expand or contract enough to tip a mercury switch one way or another. No choice is made. I'm assuming that you're trolling me on this...you don't really believe that a thermostat understands temperature, right? Craig
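[Editor's note] The mechanism Craig describes - a circuit that simply opens or closes around a setpoint - is what control engineering calls a bang-bang (on/off) controller. A minimal sketch, with a dead band standing in for the hysteresis of the bimetallic strip (function name and parameter are illustrative, not from the thread):

```python
def thermostat_step(temp, setpoint, heating, hysteresis=1.0):
    """Bang-bang control: close the heating circuit below the setpoint,
    open it above it, with a dead band to avoid rapid switching."""
    if temp < setpoint - hysteresis:
        return True   # circuit closes, heater on
    if temp > setpoint + hysteresis:
        return False  # circuit opens, heater off
    return heating    # inside the dead band: state unchanged

assert thermostat_step(15.0, 20.0, heating=False) is True   # cold: turn on
assert thermostat_step(25.0, 20.0, heating=True) is False   # hot: turn off
assert thermostat_step(20.0, 20.0, heating=True) is True    # dead band: hold
```

Whether such a stateless conditional "cares" about anything is exactly the point under dispute; the sketch only makes explicit how little machinery the behaviour requires.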
Re: bruno list
On Aug 8, 7:18 pm, meekerdb meeke...@verizon.net wrote: It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. To be clear, I do suspect that each metal strip, and any metal strip, may be 'aware' of temperature (its own) but that's not an abstract 'understanding'. The thermostat device as a whole is only a coherent machine from our perspective. Without us, each part of the thermostat is just a coincidentally adjacent object, having no unifying subjective sense amongst the parts. Craig
Re: bruno list
On 8/8/2011 5:12 PM, Craig Weinberg wrote: On Aug 8, 7:18 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 1:33 PM, Craig Weinberg wrote: On Aug 8, 2:21 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 6:13 AM, Craig Weinberg wrote: The machine doesn't care if it's right or wrong But my thermostat cares whether it's hot or cold. I see what you are saying, but no, it doesn't care. If it cared then it would not have to be programmed or set, it would just try to keep the temperature comfortable for itself. Since you're programmed to keep yourself alive I guess that implies you don't care whether you live or die. We're not programmed to keep ourselves alive, we have to learn how to do that ourselves. Learning how and learning to want to are two different things. There is some vestigial programming from thousands of years of hominid evolution, but that programming is actually hostile to our survival now and we are having to hack into our own diet to keep it from making us obese. A thermostat doesn't do that. If it's the dead of winter and you don't have it set to heat, you will freeze to death and nobody will pay the electric bill and the thermostat won't ever do anything again. It has no capacity to understand what it is or what it's doing. It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. It has no understanding. The only point of understanding is that you are able to use it to make choices. A thermostat is just an electric circuit that opens or closes depending on whether or not two particular pieces of metal expand or contract enough to tip a mercury switch one way or another. No choice is made. I'm assuming that you're trolling me on this...you don't really believe that a thermostat understands temperature, right? Just as much as I believe a neuron feels pain. Brent
Re: bruno list
On 8/8/2011 5:28 PM, Craig Weinberg wrote: On Aug 8, 7:18 pm, meekerdb meeke...@verizon.net wrote: It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. To be clear, I do suspect that each metal strip, and any metal strip, may be 'aware' of temperature (its own) but that's not an abstract 'understanding'. The thermostat device as a whole is only a coherent machine from our perspective. Without us, each part of the thermostat is just a coincidentally adjacent object, having no unifying subjective sense amongst the parts. Unlike those carbon atoms in neurons? Brent
Re: bruno list
On Mon, Aug 8, 2011 at 11:13 PM, Craig Weinberg whatsons...@gmail.com wrote: No. You have it backwards from the start. There is no such thing as 'behaving like a person'. There is only a person interpreting something's behavior as being like a person. There is no power emanating from a thing that makes it person-like. If you understand this you will know, because you will see that the whole question is a red herring. If you don't see that, you do not understand what I'm saying. Interpreting something's behaviour as being like a [person's] is what I mean by behaving like a person. I know that's what you mean, but I'm trying to explain why those two phrases are polar opposites in this context, because the whole thread is about the difference between subjectivity and objectivity. If a chip could behave like a person, then we wouldn't be having this conversation right now. We'd be hanging out with our digital friends instead. Every chip we make would have its own perspective and do what it wanted to do, like an infant or a pollywog would. If we want to make a chip that impersonates something that does have its own perspective and does what it wants to, then we can try to do that with varying levels of success depending upon who you are trying to fool, how you are trying to fool them, and for how long. The fact that any particular person interprets the thing as being alive or conscious for some period of time is not the same thing as the thing being actually alive or conscious. The chip is not alive because it doesn't meet a definition for life. It may or may not be conscious - that isn't obvious, and it is what we are arguing about. However, it may objectively behave like a living or conscious entity. For example, if it seeks food and reproduces it is behaving like a living thing even though it isn't, and if it has a conversation with you about its feelings and desires it is behaving like a conscious thing even though it isn't. 
I don't think the phrase does what it wants to do adds anything to the discussion if you say that only a conscious thing can do what it wants to do - it is back to arguing whether something is conscious. Then it would be possible to replace parts of your brain with non-conscious components that function otherwise normally, which would lead to you lacking some important aspect of consciousness but being unaware of it. This is absurd, but it is a corollary of the claim that it is possible to separate consciousness from function. Therefore, the claim that it is possible to separate consciousness from function is shown to be false. If you don't accept this then you allow what you have already admitted is an absurdity. It's a strawman of consciousness that is employed in circular thinking. You assume that consciousness is a behavior from the beginning and then use that fallacy to prove that behavior can't be separated from consciousness. Consciousness drives behavior and vice versa, but each extends beyond the limits of the other. No, I do NOT assume that consciousness follows from behaviour (and certainly not that it IS behaviour) from the beginning!! I've lost count of the number of times I have said assume that it has the behaviour, but not the consciousness, of a brain component. How can I make it clearer? What other language can I use to convey that the thing is unconscious but to an external observer, who can't know its subjective states, it does the same sorts of mechanical things as its conscious counterpart? Isn't the whole point of the gradual neuron substitution example to prove that consciousness must be behavior? That if the behavior of the neurons is the same, and accepted as the same, then the conscious experience of the brain as a whole must be the same? Sorry if I'm not getting your position right, and it is a subtle thing to try to dissect. 
I think the word 'behavior' implies a certain level of normative repetition which is not sufficient to describe the ability of neurological awareness to choose whether to respond in the same way or in a new and unpredictable way. When you look at what neurons are actually like, I think the idea of them having a finite set of behaviors is not realistic. It's like saying that because speech can be translated into words and letters, words and letters should be able to automatically produce the voice of their speakers. I *assume* that behaviour and consciousness can be separated and show that it leads to absurdity. This means that the initial assumption was wrong. If you disagree you can try to show that the assumption does not in fact lead to absurdity, but you haven't attempted to do that. Instead, you restate your own assumption. The form of argument is similar to assuming that sqrt(2) is rational and showing that this assumption leads to contradiction, therefore sqrt(2) cannot be rational. The only way to respond to this argument if you
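[Editor's note] The reductio Stathis appeals to is the classic proof, which for reference runs:

```latex
\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof.} Suppose $\sqrt{2} = p/q$ with $p, q$ integers sharing no common factor.
Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even: $p = 2k$.
Then $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ is also even, contradicting the
assumption that $p/q$ was in lowest terms. Hence no such $p/q$ exists. $\square$
```

The argumentative force rests entirely on the assumption being the only premise in play; the dispute in the thread is over whether the separability of behaviour and consciousness is similarly isolated.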
Re: bruno list
On Tue, Aug 9, 2011 at 4:19 AM, meekerdb meeke...@verizon.net wrote: I think so too. But that doesn't show that some different arrangement of functions in the brain could not produce a qualitatively different kind of consciousness. Indeed it seems that sociopaths, for example, are different in lacking an empathy module in their brain. Some people claim that the ability to understand higher mathematics is built-in and some people have it and some don't. If we build more and more intelligent, autonomous Mars Rovers I think we will necessarily instantiate consciousness - but not consciousness like our own. So I'm interested in the question of how we can know how similar and in what ways? The only way to guarantee identical consciousness would be to replicate behaviour perfectly. Two entities that produce the same outputs for all inputs would have the same consciousness. -- Stathis Papaioannou -- You received this message because you are subscribed to the Google Groups Everything List group. To post to this group, send email to everything-list@googlegroups.com. To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
Re: bruno list
On Aug 8, 8:42 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 5:28 PM, Craig Weinberg wrote: On Aug 8, 7:18 pm, meekerdb meeke...@verizon.net wrote: It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. To be clear, I do suspect that each metal strip, and any metal strip, may be 'aware' of temperature (its own) but that's not an abstract 'understanding'. The thermostat device as a whole is only a coherent machine from our perspective. Without us, each part of the thermostat is just a coincidentally adjacent object, having no unifying subjective sense amongst the parts. Unlike those carbon atoms in neurons? Right. A neuron does have a unifying subjective sense which is a cumulative entanglement of the sense of its organic molecules, which includes the sense of its atoms. Organisms want to survive and reproduce; metal strips do not. Craig
Re: bruno list
On Aug 8, 8:50 pm, Stathis Papaioannou stath...@gmail.com wrote: On Mon, Aug 8, 2011 at 11:13 PM, Craig Weinberg whatsons...@gmail.com wrote: No. You have it backwards from the start. There is no such thing as 'behaving like a person'. There is only a person interpreting something's behavior as being like a person. There is no power emanating from a thing that makes it person-like. If you understand this you will know, because you will see that the whole question is a red herring. If you don't see that, you do not understand what I'm saying. Interpreting something's behaviour as being like a [person's] is what I mean by behaving like a person. I know that's what you mean, but I'm trying to explain why those two phrases are polar opposites in this context, because the whole thread is about the difference between subjectivity and objectivity. If a chip could behave like a person, then we wouldn't be having this conversation right now. We'd be hanging out with our digital friends instead. Every chip we make would have its own perspective and do what it wanted to do, like an infant or a pollywog would. If we want to make a chip that impersonates something that does have its own perspective and does what it wants to, then we can try to do that with varying levels of success, depending upon who you are trying to fool, how you are trying to fool them, and for how long. The fact that any particular person interprets the thing as being alive or conscious for some period of time is not the same thing as the thing being actually alive or conscious. The chip is not alive because it doesn't meet a definition for life. It may or may not be conscious - that isn't obvious, and it is what we are arguing about. However, it may objectively behave like a living or conscious entity. 
For example, if it seeks food and reproduces it is behaving like a living thing even though it isn't, and if it has a conversation with you about its feelings and desires it is behaving like a conscious thing even though it isn't. At the top you are saying that there is a definition of life to be met, but then you are saying that there are behaviors which are 'objectively' living or conscious. The two assertions are mutually exclusive, and both are in opposition to my view. If life can be observed as objective behaviors, then it doesn't need a definition; it just is observably either alive or it isn't. If it needs a definition, then you admit that life cannot be determined objectively and must be defined subjectively - guessed at. What I'm saying is completely different. I am taking the latter view and going much further, to say that not only is life defined subjectively, but that the definition is based upon perceived isomorphism, as a general principle of all phenomena in the universe. As living creatures, we recognize other phenomena as other living creatures to the extent that they remind us of ourselves and our own behaviors. This would normally serve us well, except when hijacked by intentional technological impersonations designed to remind us of our own behaviors. I don't think the phrase "does what it wants to do" adds anything to the discussion if you say that only a conscious thing can do what it wants to do - it is back to arguing whether something is conscious. We can't say whether a chip does what it wants to do, but the fact that it must be programmed by an outside source if it is to do anything would suggest that it either cannot do what it wants or that it cannot want to do much. A chip without firmware or software won't ever learn, grow, or change itself. 
Re: bruno list
On Aug 8, 8:53 pm, Stathis Papaioannou stath...@gmail.com wrote: The only way to guarantee identical consciousness would be to replicate behaviour perfectly. Two entities that produce the same outputs for all inputs would have the same consciousness. What is an entity and an output? If one entity is made of wood, then it can output flames when I set it on fire. If it's made of solid rock, it cannot. Both could be sculpted into some kind of machine that sorts clothespins by size. If there is any consciousness going on, to me, it clearly takes place at the chemical level, where the material itself has to spontaneously reveal its nature in its native response to an energetic change. It takes place in the aesthetic difference between wood and stone - the texture and weight, the sound and durability against wind and rain. That is the awareness that the machine shares with us and with animals and plants, heat and light. There is no consciousness of the clothespins, though, even though that's what the machine's 'outputs' mean to us. That's not a machine making sense, being intelligent, conscious, or understanding. You've got to be kidding. All it is is human intelligence riding on the back of an unsuspecting pile of minerals or cellulose. To say that there might be some kind of understanding of clothespins going on there that is in some way comparable to a human understanding of clothespins is flat-out sophistry. Craig
Re: bruno list
On 8/8/2011 5:53 PM, Stathis Papaioannou wrote: On Tue, Aug 9, 2011 at 4:19 AM, meekerdb meeke...@verizon.net wrote: I think so too. But that doesn't show that some different arrangement of functions in the brain could not produce a qualitatively different kind of consciousness. Indeed it seems that sociopaths, for example, are different in lacking an empathy module in their brain. Some people claim that the ability to understand higher mathematics is built-in and some people have it and some don't. If we build more and more intelligent, autonomous Mars Rovers I think we will necessarily instantiate consciousness - but not consciousness like our own. So I'm interested in the question of how we can know how similar and in what ways? The only way to guarantee identical consciousness would be to replicate behaviour perfectly. Two entities that produce the same outputs for all inputs would have the same consciousness. That's what I'm questioning. At what level are input, output, and behavior defined? Does it include a slight twitch of the eye? A change in a hormone level in the blood? A transmission via this nerve instead of that? Does the behavior only have to be similar enough to fool the attentive observer, or does it have to be the same all the way down to the neuron, or sub-neuron, level? I'm content to say that fooling the attentive observer is enough to bet on consciousness. But to be identical consciousness you would have to go much lower - maybe even to the neuron level. Brent
Re: bruno list
On 8/8/2011 6:14 PM, Craig Weinberg wrote: On Aug 8, 8:42 pm, meekerdb meeke...@verizon.net wrote: On 8/8/2011 5:28 PM, Craig Weinberg wrote: On Aug 8, 7:18 pm, meekerdb meeke...@verizon.net wrote: It has the capacity to understand what the temperature is and what it's supposed to be. You're just showing your carbon racism because my thermostat is made of glass and metal. To be clear, I do suspect that each metal strip, and any metal strip, may be 'aware' of temperature (its own) but that's not an abstract 'understanding'. The thermostat device as a whole is only a coherent machine from our perspective. Without us, each part of the thermostat is just a coincidentally adjacent object, having no unifying subjective sense amongst the parts. Unlike those carbon atoms in neurons? Right. A neuron does have a unifying subjective sense which is a cumulative entanglement of the sense of its organic molecules, which includes the sense of its atoms. Organisms want to survive and reproduce; metal strips do not. Craig I guess reductio ad absurdum arguments don't help with those who accept the absurd. Brent
Re: COMP refutation paper - finally out
On Mon, Aug 8, 2011 at 1:56 PM, benjayk benjamin.jaku...@googlemail.com wrote: I am getting a bit tired of labouring this point, but honestly your theory is postulating something that seems nonsensical to me. Why on earth would I believe in the truth of something that *can never be known in any way* (namely, that arithmetics is true without / prior to consciousness)? Ben, Do you think that the 10^10^100th digit of Pi has a certain value even though we can never know what it is and no one has ever or will ever (in this universe at least) be conscious of it? If I assert the digit happens to be 8, would you agree that my assertion must be either true or false? If so, where does this truth exist? Note that one cannot say it has an indefinite value, or that its value is inconsequential because that level of precision will never make a difference in any equation we work with. Euler's identity, e^(Pi * i) + 1 = 0, would be false without each of the infinite digits of Pi having a definite and certain value. These values are unknown to us, but nonetheless must be there. Jason
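Euler's identity can at least be checked numerically. A minimal sketch using Python's standard complex-math library (floating point cannot be exact, so the check is against a small tolerance rather than literal zero):

```python
import cmath

# Euler's identity: e^(i*pi) + 1 = 0. Evaluated in double precision the
# residual is not exactly zero, but it is on the order of machine
# epsilon, consistent with every digit of pi used internally having a
# definite value.
residual = abs(cmath.exp(cmath.pi * 1j) + 1)
print(residual < 1e-12)  # True
```

Of course, a finite-precision check touches only the first few dozen digits of Pi; the point in the thread concerns digits far beyond anything computable.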