RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Hector, I skimmed your paper linked to in the post below. From my quick read, it appears the only meaningful way it suggests a brain might be infinite is that, since the brain uses analogue values --- such as synaptic weights, or variable time intervals between spikes (and presumably since those analogue values would be determined by so many factors, each of which might modify their values slightly) --- the brain would be capable of computing many values, each of which could arguably have infinite gradation in value. So arguably its computations would be infinitely complex, in terms of the number of bits that would be required to describe them exactly. Of course, it is not clear the universe itself supports infinitely fine gradation in values, which your paper admits is an open question.

But even if the universe and the brain did support infinitely fine gradations in value, it is not clear that computing with weights or signals capable of such infinitely fine gradations necessarily yields computing that is meaningfully more powerful, in terms of the sense of experience it can provide --- unless it has mechanisms that can meaningfully encode and decode much more information in such infinite variability. You can only communicate over a very broad-bandwidth communication medium as much as your transmitting and receiving mechanisms can encode and decode. For example, it is not clear that a high-definition TV capable of providing an infinite degree of variation in its colors, rather than only, say, 8, 16, 32, or 64 bits for each primary color, would provide any significantly greater degree of visual experience, even though one could claim the TV was sending out a signal of infinite complexity.

I have read, and been told by neural net designers, that typical neural nets operate by dividing a high-dimensional space into subspaces. If this is true, then it is not clear that merely increasing the resolution at which such neural nets were computed, say beyond 64 bits, would change the number of subspaces that could be represented with a given number, say 100 billion, of nodes --- or that the minute changes in boundaries, or the occasional difference in tipping points, that might result from infinite-precision math, if it were possible, would be of that great a significance with regard to the overall capabilities of the system. Thus, it is not clear that infinite resolution in neural weights and spike timing would greatly increase the meaningful (i.e., having grounding), rememberable, and actionable number of states the brain could represent.

My belief --- and it is only a belief at this point in time --- is that the complexity a finite human brain could deliver is so great --- arguably equal to 1,000 million simultaneous DVD signals that interact with each other and with memories --- that such a finite computation is enough to create the sense of experiential awareness we humans call consciousness. I am not aware of anything that modern science says with authority about external reality --- or that I have sensed from my own experiences of my own consciousness --- that would seem to require infinite resources. Something can have a complexity far beyond human comprehension, far beyond even the most hyperspeed altered imaginings of a drugged mind, arguably far beyond the complexity of the observable universe, without requiring for its representation more than an infinitesimal fraction of anything that could be accurately called infinite.

Ed Porter
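A minimal sketch of the subspace point above, assuming a toy two-layer net with hypothetical random weights: the net partitions the plane into a handful of regions, and recomputing the very same net at float32 instead of float64 moves essentially no points across region boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 64)), rng.normal(size=64)  # illustrative random hidden layer
W2, b2 = rng.normal(size=(64, 8)), rng.normal(size=8)   # 8 output "subspaces"

def regions(points, dtype):
    """Label each point with the region (argmax output) the net assigns it."""
    w1, v1, w2, v2 = (a.astype(dtype) for a in (W1, b1, W2, b2))
    h = np.tanh(points.astype(dtype) @ w1 + v1)
    return np.argmax(h @ w2 + v2, axis=1)

xs = np.linspace(-3, 3, 500)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

# Only points sitting almost exactly on a decision boundary can flip:
print("fraction of points whose region changes with precision:",
      np.mean(regions(grid, np.float64) != regions(grid, np.float32)))
```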
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Dec 2, 2008, at 8:31 AM, Ed Porter wrote: Of course, it is not clear the universe itself supports infinitely fine gradation in values, which your paper admits is an open question.

The universe has a noise floor (see: Boltzmann, Planck, et al), from which it follows that all analog values are equivalent to some trivial number of bits. Since digital deals with the case of analog at the low end of signal-to-noise ratios, digital usually denotes a proper subset of analog, making the equivalence unsurprising.

The obvious argument against infinite values is that the laws of thermodynamics would no longer apply if that were the case. Given the weight of the evidence for thermodynamics being valid, it is probably prudent to stick with models that work when restricted to a finite dynamic range for values.

The fundamental non-equivalence of digital and analog is one of those hard-to-kill memes that needs to die, along with the fundamental non-equivalence of parallel and serial computation. Persistent buggers, even among people who should know better.

Cheers, J. Andrew Rogers
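The noise-floor claim can be made quantitative with the Shannon-Hartley formula, C = B log2(1 + S/N): once a channel has a noise floor, each analog sample carries only finitely many bits. A hedged sketch; the bandwidth and signal-to-noise figures below are illustrative assumptions, not measured neural parameters.

```python
import math

B = 1_000.0    # assumed channel bandwidth, Hz
snr_db = 20.0  # assumed signal-to-noise ratio, dB
snr = 10 ** (snr_db / 10)

capacity = B * math.log2(1 + snr)           # bits per second
bits_per_sample = 0.5 * math.log2(1 + snr)  # bits per independent (Nyquist-rate) sample
print(f"capacity ~ {capacity:.0f} bit/s, ~ {bits_per_sample:.1f} bits per sample")
```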
RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness
J., Your arguments seem to support my intuitive beliefs, so my instinctual response is to be thankful for them. But I have to sheepishly admit I don't totally understand them.

Could you please give me a simple explanation for why it is an "obvious argument against infinite values ... that the laws of thermodynamics would no longer apply if that were the case"? I am not disagreeing, just not understanding. For example, I am not knowledgeable enough about the subject to understand why the laws of thermodynamics could not apply in a classical model of the world in which atoms and molecules have positions and velocities defined with infinite precision --- which I think is what many people who believed in them thought for years, before the rise of quantum mechanics.

In addition --- although I do understand how noise limits what can be encoded and decoded as intended communication between an encoding and a decoding entity, even on a hypothetical infinite-bandwidth medium --- it seems to me that, at least at some physical level, the noise itself might be considered information, and might play a role in the computations of reality. That is not an argument that proves infinite variability, but it might be viewed as an argument that limits the range of applicability of your noise-floor argument. As anybody who has listened to noisy radio, or watched noisy TV reception, can hear or see, noise can be perceived as signal, even if not an intended one.

To the extent that I am wrong in this devil's advocacy, please enlighten me. (Despite his obvious deficiencies, the devil is a most interesting client, and I am sure I have offended many people --- but, I hope, not you --- by arguing his cause too strenuously out of intellectual curiosity.)

Ed Porter
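One way to see the thermodynamics connection Ed asks about: thermal (Johnson-Nyquist) noise, a direct consequence of thermodynamics, puts a floor under any voltage signal, so only finitely many levels are distinguishable no matter how finely the underlying quantity could in principle vary. A back-of-envelope sketch, with every number an illustrative assumption:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # body temperature, K
R = 1e8             # assumed membrane-scale resistance, ohms
B = 1e3             # assumed bandwidth, Hz

v_noise = math.sqrt(4 * k_B * T * R * B)  # RMS thermal noise voltage
v_range = 0.1                             # ~100 mV physiological signal range
levels = v_range / v_noise
print(f"noise ~ {v_noise * 1e6:.0f} uV -> ~ {levels:.0f} distinguishable levels "
      f"(~ {math.log2(levels):.1f} bits)")
```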
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Hi Ed, I am glad you have read the paper in such detail. You have summarized quite well what it is about. I have no objection to the points you make.

It is only important to bear in mind that the paper is about studying the possible computational power of the mind by using the model of an artificial neural network. The question of whether the mind is something else was not in the scope of that paper. Assuming that the brain is a neural network, we wanted to see what features might take the neural network to a certain computational power. We found, effectively, that the relevant feature is an encoding either at the level of the neuron (space, e.g. a natural encoding of a real number) or in the neuron firing times. In both cases, to reach any computational power beyond the Turing limit, one would need either infinite or infinitesimal space or time, assuming finite brain resources (number of neurons and connections). My personal opinion (perhaps not reflected in the paper itself) is that such super-capabilities do not really hold, but the idea was to explore all the possibilities.

It is also very important to highlight that such a power beyond the computational power of Turing machines does not require communicating, encoding or decoding any infinite value in order to compute a non-computable function. It suffices to posit a natural encoding either in the space or the time in which the neurons work, and then to ask questions in the form of characteristic functions encoding a non-computable function. A characteristic function is of the yes/no type, so it only needs to transmit a finite amount of information, even if producing the answer required an infinite amount. So a set of neurons may be capable of taking advantage of infinitesimals and answering yes or no to a non-computable function, even if I think that is not actually the case. That seems perhaps compatible with your ideas about consciousness.

- Hector
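A toy version of the encoding Zenil describes, in the spirit of Siegelmann-style analog recurrent nets: a single real-valued "weight" encodes an arbitrary yes/no table, and answering query n only ever emits one bit. The finite table below is a stand-in, since the real construction would encode infinitely many bits in one real number.

```python
from fractions import Fraction

# Pretend this table answers "does machine n halt?"; here we can only fake a prefix.
table = [1, 0, 0, 1, 1, 0, 1, 0]

# Cantor-style base-4 encoding: digit 3 for yes, digit 1 for no. Using digits
# {1, 3} (rather than base 2) keeps decoding robust to small perturbations.
w = sum(Fraction(2 * b + 1, 4 ** (i + 1)) for i, b in enumerate(table))

def query(weight, n):
    """Extract bit n by iterating the shift map x -> 4x mod 1."""
    x = weight
    for _ in range(n):
        x = (4 * x) % 1
    # Leading digit 3 means x in [3/4, 1); leading digit 1 means x in [1/4, 1/2).
    return 1 if x >= Fraction(1, 2) else 0

print([query(w, n) for n in range(len(table))])  # recovers the table, one bit per query
```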
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Suppose that the gravitational constant is a non-computable number (it might be; we don't know, because, as you say, we can only measure with finite precision). Planets "compute" G as part of the law of gravitation that rules their movement. (You can of course object that G is part of a model that has been replaced by another theory --- General Relativity --- and that neither one nor the other can be taken as a full and ultimate description, but then I can change my argument to whichever theory turns out to be the ultimate and true one, even if we never have access to it.) Planets don't necessarily have to encode and decode G, because it is taken as given; it is already naturally encoded, and they just follow the law in which it is given. In the same way, if a non-computable number is already encoded in the brain, then to compute with such a real number the neuron would not necessarily need to encode or decode the number. The neuron could then carry out a non-computable computation (no measurement involved) and give a yes/no answer, just as a planet would or would not hit another planet by following a non-computable gravitational constant.

But even in the case where measurement is needed, it is only the most significant part relevant to the computation being performed that is actually needed, since we are not interested in infinitely long computations. That is also why, even though noise is of course a practical problem, it is not an insurmountable one. Now you can argue that if only a finite part (the most significant part) of the real number is necessary to perform the computation, it would have sufficed to store only a rational (computable) number from the beginning, rather than a non-computable number. However, it is this potential access to an infinite number that makes the system more powerful, and not the ability to make infinite-precision measurements.

For more about these results you can take a look at Hava Siegelmann's work on analog recurrent neural networks, which, more than a work on hypercomputation, I consider a work on computational complexity with pretty nice scientific results. On the other hand, I would say that I have many objections myself, mainly those pointed out by Davis in his paper The Myth of Hypercomputation, which I also recommend in case you haven't read it. The only thing that, from my point of view, Davis is trivializing is that whether there are non-computable numbers in nature, with something taking advantage of their computational power, is an open question, so it is still plausible.

On Wed, Dec 3, 2008 at 12:17 AM, Ed Porter [EMAIL PROTECTED] wrote: Hector, Thank you for your reply saying my description of your paper was much better than clueless. I am, however, clueless about how to interpret the second paragraph of your reply. For example, I am confused by your statement that "such a power beyond the computational power of Turing machines does not require communicating, encoding or decoding any infinite value in order to compute a non-computable function", considering that you then state: "A characteristic function is of the yes/no type, so it only needs to transmit a finite amount of information, even if producing the answer required an infinite amount." What I don't understand is how a system "does not require communicating, encoding or decoding any infinite value in order to compute a non-computable function" if its answer "required an infinite amount [of information]". It seems like the computing of an infinite amount of information was required somewhere, even if not in communicating the answer; so how does such a system not, as you said, require communicating, encoding or decoding any infinite value in order to compute a non-computable function, even if only internally?

Ed Porter
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Hector, Yes, it's possible that the brain uses uncomputable neurons to predict uncomputable physical dynamics in the observed world.

However, even if this is the case, **there is no possible way to verify or falsify this hypothesis using science**, if science is construed to involve evaluation of theories based on finite sets of finite-precision data ... So, this hypothesis has much the same status as the hypothesis that the brain has an ineffable soul inside it, which can never be measured. This is certainly possible too, but we have no way to verify or falsify it using science.

You may say the hypothesis of neural hypercomputing is valid in the sense that it helps guide you to interesting, falsifiable theories. That's fine. But, then, you must admit that the hypothesis of souls could be valid in the same sense, right? It could guide some other people to interesting, falsifiable theories -- even though, in itself, it stands outside the domain of scientific validation/falsification.

It is possible that the essence of intelligence lies in something that can't be scientifically addressed. If so, no matter how many finite-precision measurements of the brain we record and analyze, we'll never get at the core of intelligence that way. So, in that hypothesis, if we succeed at making AGI, it will be due to some non-scientific, non-computable force somehow guiding us. However, I doubt this is the case. I strongly suspect the essence of intelligence lies in properties of systems that can be measured, and therefore *not* in hypercomputing.

Consciousness is another issue -- I do happen to think there is an aspect of consciousness that, like hypercomputing, lies outside the realm of science. However, I don't fall for the argument that X and Y must be equal just because they're both outside the realm of science...

-- Ben G
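A small sketch of Ben's unfalsifiability point: any finite set of finite-precision measurements can be reproduced exactly by some computable model, so no such data set can force an uncomputable explanation. Here a polynomial (certainly computable) fits arbitrary "mystery" data perfectly; the data are just random numbers standing in for measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(10.0)              # measurement times
y = rng.uniform(-1, 1, size=10)  # pretend these came from an "uncomputable" process

coeffs = np.polyfit(t, y, deg=len(t) - 1)  # interpolating polynomial through all points
residual = np.max(np.abs(np.polyval(coeffs, t) - y))
print(f"max disagreement with the data: {residual:.1e}")  # essentially zero
```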
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Wed, Dec 3, 2008 at 1:51 AM, Ben Goertzel [EMAIL PROTECTED] wrote: You may say the hypothesis of neural hypercomputing is valid in the sense that it helps guide you to interesting, falsifiable theories. That's fine. But, then, you must admit that the hypothesis of souls could be valid in the same sense, right?

I understand the point, but I insist that it is not that trivial. You could apply the same argument against the automated proof of the four-color theorem. Since there is no human capable of verifying it in a lifetime (and even if a group of people tried to verify it, no single mind would ever have the intellectual capacity to become convinced of it on its own), then the four-color proof is not science... and I am pretty convinced that it is, including the computer science and proof theory involved. Actually, I think that kind of proof and approach to science will happen more and more often, as we can already witness. Just as the four-color theorem was proved and then verified by another computer program, the outcome of a hypercomputer could be verified by another hypercomputer. And just as in the finite case of the four-color theorem, you would not be able to verify it except by trusting another system.

I am not a hypercomputationalist -- quite the opposite! But closed definitions of what science is, and people claiming to have the one good definition of science, look pretty narrow to me. However, if I were director of a computer science department, I probably wouldn't put any money into hypercomputation research. But even if it is just philosophy, that doesn't make it less valid or less plausible. On the other hand, the scientific arguments against it often sound very weak, perhaps just as weak as the arguments in favor, but sometimes even weaker.

What if a hypercomputer provided you, each time you asked, the answer to whether a given Turing machine halts? You effectively cannot verify that it works for all cases (this is of course a problem of induction, widespread in science in general), but I am pretty sure you would believe that it is what it says it is, if for any Turing machine, as complicated as you may want, it tells you whether it halts and when (you could argue, for example, that it is just simulating the Turing machine extremely fast, but let's suppose it does it instantaneously). How would this predictive power make it less scientific than, let's say, quantum mechanics? To me, that would be much more scientific than people doing string theory...

The same about noise. People tend to think of it as a constraint, but some recent results in computational complexity, and some serious interpretations, suggest that actually, as I was saying before, if nature is indeterministic then noise is actually a computation carried out by something more powerful (even if it seems meaningless) than a universal Turing machine; so by itself, rather than subtracting computational power, it might add to it! One would of course need to reconcile this with thermodynamics, but there are actually some interpretations that would easily allow this reading of noise. However, I don't think I will take up that thread of discussion. Together with the bibliography I've provided before, I also recommend a very recent paper by Karl Svozil in the Complex Systems journal about whether hypercomputation is falsifiable.
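The verification asymmetry Zenil describes can be sketched in code: a claimed halting oracle is falsified outright whenever it says "loops" and the machine then halts, but a "loops" answer can only ever be checked up to a finite budget. The "programs" below are Python stand-ins for Turing machines, and the oracle is a hypothetical lookup table.

```python
def run_with_budget(prog, budget):
    """Step a generator-based program at most `budget` times; True if it finished."""
    g = prog()
    for _ in range(budget):
        try:
            next(g)
        except StopIteration:
            return True
    return False

def halts_after(n):
    def prog():
        for _ in range(n):
            yield
    return prog

def loops_forever():
    def prog():
        while True:
            yield
    return prog

oracle = {0: True, 1: True, 2: False}  # the device's claimed answers to "does it halt?"
programs = {0: halts_after(5), 1: halts_after(100), 2: loops_forever()}

for i, prog in programs.items():
    halted = run_with_budget(prog, budget=1_000)
    if not oracle[i] and halted:
        print(f"program {i}: oracle FALSIFIED (said loops, but it halted)")
    else:
        print(f"program {i}: consistent so far (never conclusive for 'loops')")
```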
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Hi Hector,

I understand the point, but I insist that it is not that trivial. You could apply the same argument against the automated proof of the four-color theorem. Since there is no human capable of verifying it in a lifetime, then the four-color proof is not science...

So, the distinction here is that

-- in one case, **no possible finite set of observations** can verify or falsify the hypothesis at hand [hypercomputing]

-- in the other case, some finite set of observations could verify or falsify the hypothesis at hand ... but this observation set wouldn't fit into the mind of a certain observer O [four color theorem]

So, to simplify a bit, do I define "X has direct scientific meaning" as "I can personally falsify X", or as "Some being could potentially falsify X, and I can use science to distinguish those beings capable of falsifying X from those that are incapable"?

If the former, then the four color theorem isn't human science. If the latter, it is... I choose the latter...

ben
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
We cannot ask Feynman, but I actually asked Deutsch. He not only thinks QM is our most basic physical reality (he thinks math and computer science lie within quantum mechanics), but he even takes quite seriously his theory of parallel universes! And he is not alone. Speaking for myself, I would agree with you, but I think we would need to relativize the concept of agreement. I don't think QM is just another model of merely mathematical value for making finite predictions. I think physical models say something about our physical reality. If you deny QM as part of our physical reality, then I guess you deny any other physical model. I wonder then what is left to you. You perhaps would embrace total skepticism, perhaps even solipsism. Current trends have moved from there to more relativized positions, where models are considered just that --- models --- but still with some value as part of our actual physical reality (just as Newtonian physics is not simply wrong after General Relativity, since it still describes a huge part of our physical reality).

Well, I don't embrace solipsism, but that is really a philosophical and personal rather than scientific matter ... and I'm not going to talk here about "what is", which IMO is not a matter for science ... but merely about what science can tell us. And, science cannot tell us whether QM or some empirically-equivalent, wholly randomness-free theory is the right one...

ben g
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
2008/12/1 Ben Goertzel [EMAIL PROTECTED]: And, science cannot tell us whether QM or some empirically-equivalent, wholly randomness-free theory is the right one...

If two theories give identical predictions under all circumstances about how the real world behaves, then they are not two separate theories; they are merely rewordings of the same theory. And choosing between them is arbitrary; you may prefer one to the other because human minds can visualise it more easily, or it's easier to calculate, or you have an aesthetic preference for it.

-- Philip Hunt, [EMAIL PROTECTED] Please avoid sending me Word or PowerPoint attachments. See http://www.gnu.org/philosophy/no-word-attachments.html
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
If two theories give identical predictions under all circumstances about how the real world behaves, then they are not two separate theories; they are merely rewordings of the same theory.

However, the two theories may still have very different consequences **within the minds of the community of scientists** ... Even though T1 and T2 are empirically equivalent in their predictions, T1 might have a tendency to lead a certain community of scientists in better directions, in terms of creating new theories later on.

However, empirically validating this property of T1 is another question ... which leads one to the topic of scientific theories about the sociological consequences of scientific theories ;-)

ben g
Re: FW: [agi] A paper that actually does solve the problem of consciousness
Ed, they used to combine Ritalin with LSD for psychotherapy. It assists in absorbing insights achieved from psycholytic doses, which is a term for doses that are not fully psychedelic. Those are edifying on their own but are less organized. I don't know if you can get this in a clinical setting today. But these molecules are gradually being apprehended as tools.
Re: FW: [agi] A paper that actually does solve the problem of consciousness
Ed, Unfortunately, to reply to your message in detail would absorb a lot of time, because there are two issues mixed up:

1) you don't know much about computability theory, and educating you on it would take a lot of time (and is not best done on an email list)

2) I may not have expressed some of my weird philosophical ideas about computability and mind and reality clearly ... though Abram, at least, seemed to get them ;) [but he has a lot of background in the area]

Just to clarify some simple things, though: Pi is a computable number, because there's a program that would generate it if allowed to run long enough. Also, pi has been proved irrational; and quantum theory really has nothing directly to do with uncomputability...

About "How can several pounds of matter that is the human brain model the true complexity of an infinity of infinitely complex things?": it is certainly thinkable that the brain is infinite, not finite, in its information content, or that it's a sort of antenna that receives information from some infinite-information-content source. I'm not saying I believe this, just saying it's a logical possibility, and not really ruled out by available data... Your reply seems to assume that the brain is a finite computational system and that other alternatives don't make sense. I think this is an OK working assumption for AGI engineers, but it's not proved by any means.

My main point in that post was, simply, that science and language seem intrinsically unable to distinguish computable from uncomputable realities. That doesn't necessarily mean the latter don't exist, but it means they're not really scientifically useful entities. But my detailed argument in favor of this point requires some basic understanding of computability math to appreciate, and I can't review those basics in an email; it's too much...

ben g

On Sun, Nov 30, 2008 at 4:20 PM, Ed Porter [EMAIL PROTECTED] wrote: Ben, On November 19, 2008, at 5:39, you wrote the following under the above-titled thread:

-- Ed, I'd be curious for your reaction to http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-forhtml which explores the limits of scientific and linguistic explanation, in a different but possibly related way to Richard's argument. --

In the email below I asked you some questions about your article, which capture my major problem in understanding it, and I don't think I ever received a reply. The questions were at the bottom of such a long post you may well never have even seen them. I know you are busy, but if you have time I would be interested in hearing your answers to the following questions about the following five quoted parts from your article. If you are too busy to respond, just say so, either on or off list.

(1) "In the simplest case, A2 may represent U directly in the language, using a single expression"

How can U be directly represented in the language if it is uncomputable? I assume you consider any irrational number, such as pi, to be uncomputable (although at least pi has a formula that, with enough computation, can approach it as a limit --- I assume that for most real numbers, if there is such a formula, we do not know it). (By the way, do we know for a fact that pi is irrational, and if so, how do we know, other than that we have calculated it to millions of places and not yet found an exact solution?) Merely communicating the symbol "pi" only represents the number if the agent receiving the communication has a more detailed definition; but any definition, such as a formula for iteratively approaching pi, which presumably is what you mean by R_U, would only be an approximation. So U could never be fully represented unless one had infinite time --- and I generally consider it a waste of time to think about infinite time unless there is something valuable about such considerations that has a use in much more human-sized chunks of time. In fact, it seems the major message of quantum mechanics is that even physical reality doesn't have the time or machinery to compute uncomputable things, like a space constructed of dimensions that each correspond to all the real numbers within some astronomical range. So the real number line is not really real. It is at best a construct of the human mind that can at best only be approximated in part.

(2) "complexity(U) > complexity(R_U)"

Because I did not understand how U could be represented, and how R_U could be anything other than an approximation for any practical purposes, I didn't understand the meaning of the above line from your article. If U and R_U have the meaning I guessed in my discussion of quote (1), then U could not be meaningfully representable in the language, other than by a symbol that references some definition
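Ben's claim that pi is computable can be made concrete: Gibbons' unbounded spigot algorithm streams the decimal digits of pi one at a time, forever, using only exact integer arithmetic --- literally "a program that would generate it if allowed to run long enough".

```python
def pi_digits():
    """Gibbons' unbounded spigot: yields the decimal digits of pi indefinitely."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # this digit can no longer change
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))  # 31415926535897932384
```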
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
But quantum theory does appear to be directly related to limits of the computations of physical reality. The uncertainty principle and the quantization of quantum states are limitations on what can be computed by physical reality.

Not really. They're limitations on what measurements of physical reality can be simultaneously made. Quantum systems can compute *exactly* the class of Turing computable functions ... this has been proved according to standard quantum mechanics math. However, there are some things they can compute faster than any Turing machine, in the average case but not the worst case.

But I am old-fashioned enough to be more interested in things about the brain and AGI that are supported by what would traditionally be considered scientific evidence, or by what can be reasoned or designed from such evidence. If there is anything that would fit under those headings to support the notion of the brain either being infinite, or being an antenna that receives decodable information from some infinite-information-content source, I would love to hear it.

The key point of the blog post you didn't fully grok was a careful argument that (under certain, seemingly reasonable assumptions) science can never provide evidence in favor of infinite mechanisms...

ben g
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 11:19 AM, Ed Porter [EMAIL PROTECTED] wrote: You said QUANTUM THEORY REALLY HAS NOTHING DIRECTLY TO DO WITH UNCOMPUTABILITY.

Please don't quote people using this style, it hurts my eyes.

But quantum theory does appear to be directly related to limits of the computations of physical reality. The uncertainty principle and the quantization of quantum states are limitations on what can be computed by physical reality.

I don't even know what you're saying here. Maybe you're trying to say that it takes a really big computer to compute a very small box of physical reality... which is true... I just don't know why you would be saying that.

You said IT IS CERTAINLY THINKABLE THAT THE BRAIN IS INFINITE NOT FINITE IN ITS INFORMATION CONTENT, OR THAT IT'S A SORT OF ANTENNA THAT RECEIVES INFORMATION FROM SOME INFINITE-INFORMATION-CONTENT SOURCE. This certainly is thinkable. And that is a non-trivial statement. We should never forget that our concepts of reality could be nothing but illusions, and that our understanding of science and physical reality may be much more partial and flawed than we think.

It's also completely unscientific. You might as well say that magic pixies deliver your thoughts from a big invisible bucket made of gold.

But I am old-fashioned enough to be more interested in things about the brain and AGI that are supported by what would traditionally be considered scientific evidence or by what can be reasoned or designed from such evidence.

So why are you entertaining notions of magic antennas to God?

If there is anything that would fit under those headings to support the notion of the brain either being infinite, or being an antenna that receives decodable information from some infinite-information-content source, I would love to hear it.

I wouldn't. It's untestable nonsense.

Trent
RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Regarding the uncertainty principle, Wikipedia says:

"In quantum physics, the Heisenberg uncertainty principle states that the values of certain pairs of conjugate variables (position and momentum, for instance) cannot both be known with arbitrary precision. That is, the more precisely one variable is known, the less precisely the other is known. THIS IS NOT A STATEMENT ABOUT THE LIMITATIONS OF A RESEARCHER'S ABILITY TO MEASURE PARTICULAR QUANTITIES OF A SYSTEM, BUT RATHER ABOUT THE NATURE OF THE SYSTEM ITSELF." (emphasis added)

I am sure you know more about quantum mechanics than I do. But I have heard many say the uncertainty principle places limits not just on scientific measurement, but on the amount of information different parts of reality can have about each other when computing in response to each other. Perhaps such theories are wrong, but they are not without support in the field.

With regard to the statement "science can never provide evidence in favor of infinite mechanisms": I thought you were saying there was no way the human mind could fully represent or fully understand an infinite mechanism --- which I agree with. You were correct in thinking that I did not grok that you were implying this means that, if an infinite mechanism existed, there could be no evidence in favor of its infinity. In fact, it is not clear that this is the case, if you use "provide evidence" considerably more loosely than "provide proof for". Until the advent of quantum mechanics and/or the theory of the expanding universe, many people --- based in part on observations and in part on intuitions derived from them --- felt the universe was infinitely continuous and/or of infinite extent in space and time. I agree you would probably never be able to prove infinite realities, but the mind is capable of conceiving of them, and of seeing evidence that might suggest to some their existence, as was suggested to Einstein, who for many years, I have been told, believed in a universe that was infinite in time.

Ed Porter
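For concreteness, a quick worked instance of the relation Ed quotes, delta_x * delta_p >= hbar / 2, for an electron confined to an assumed 1 nm region:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31     # electron mass, kg
dx = 1e-9               # assumed position uncertainty: 1 nm

dp = hbar / (2 * dx)    # minimum momentum uncertainty
dv = dp / m_e           # corresponding minimum velocity spread
print(f"delta_p >= {dp:.2e} kg*m/s, i.e. delta_v >= {dv:.0f} m/s")
```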
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Hi,

"I am sure you know more about quantum mechanics than I do. But I have heard many say the uncertainty principle places limits not just on scientific measurement, but on the amount of information different parts of reality can have about each other when computing in response to each other. Perhaps such theories are wrong, but they are not without support in the field."

Yeah, the interpretation of quantum theory is certainly contentious, and there are multiple conflicting views... However, regarding quantum computing, it is universally agreed that the class of quantum computable functions is identical to the class of classically Turing computable functions.

"With regard to the statement 'science can never provide evidence in favor of infinite mechanisms': I thought you were saying there was no way the human mind could fully represent or fully understand an infinite mechanism --- which I agree with."

No, I was not saying that there was no way the human mind could fully represent or fully understand an infinite mechanism. What I argued is that **scientific data** can never convincingly be used to argue in favor of an infinite mechanism, due to the intrinsically finite nature of scientific data. This says **nothing** about any intrinsic limitations on the human mind ... unless one adds the axiom that the human mind must be entirely comprehensible via science ... which seems an unnecessary assumption to make.

"I agree you would probably never be able to prove infinite realities, but the mind is capable of conceiving of them, and of seeing evidence that might suggest to some their existence, as was suggested to Einstein, who for many years, I have been told, believed in a universe that was infinite in time."

Well, my argument implies that you can never use science to prove that the mind is capable of conceiving of infinite realities. This may be true in some other sense, but, I argue, not in a scientific sense...

-- Ben G
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
OTOH, there is no possible real-world test to distinguish a true random sequence from a high-algorithmic-information quasi-random sequence. So I don't find this argument very convincing...

On Sun, Nov 30, 2008 at 10:42 PM, Hector Zenil [EMAIL PROTECTED] wrote: ...
ben g
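Ben's indistinguishability claim is easy to illustrate, if not to prove. The sketch below, a toy of my own rather than anything from the thread, applies the same frequency (monobit) test to a fully deterministic pseudo-random stream and to the operating system's entropy pool; both typically pass, and no finite battery of such tests can certify that a source is "truly" random:

    import os
    import random
    from math import erfc, sqrt

    def monobit_pvalue(bits):
        # NIST-style frequency test: p-value for the hypothesis
        # that ones and zeros are equally likely.
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)
        return erfc(abs(s) / sqrt(2 * n))

    n = 100_000
    random.seed(42)                                  # deterministic, computable source
    pseudo = [random.getrandbits(1) for _ in range(n)]
    physical = [b & 1 for b in os.urandom(n)]        # OS entropy source

    print("pseudo  :", monobit_pvalue(pseudo))
    print("physical:", monobit_pvalue(physical))
    # Both p-values are typically well above 0.01: this finite test
    # cannot tell the deterministic stream from the "physical" one.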
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote: ...

Sorry, I am not really following the discussion, but I just read that there is some misinterpretation here. It is the standard model of quantum computation that effectively computes exactly the Turing computable functions, but that was almost hand-tailored to do so, perhaps because adding to the theory an assumption of continuum measurability was already too much (i.e. distinguishing infinitely close quantum states). But that is far from the claim that quantum systems can compute exactly the class of Turing computable functions. Actually, the Hilbert space and the superposition of particles in an infinite number of states would suggest exactly the opposite, while the standard model of quantum computation only considers a superposition of 2 states (the so-called qubit, capable of entanglement in 0 and 1).

But even if you stick to the standard model of quantum computation, the proof that it computes exactly the set of recursive functions [Feynman, Deutsch] can be put in jeopardy very easily: Turing machines are unable to produce non-deterministic randomness, something that quantum computers do as an intrinsic property of quantum mechanics (not only because of measurement limitations of the kind of the Heisenberg principle, but by quantum non-locality, i.e. the violation of Bell's theorem). I just exhibited a non-Turing-computable function that standard quantum computers compute... [Calude, Casti]

But, I am old-fashioned enough to be more interested in things about the brain and AGI that are supported by what would traditionally be considered scientific evidence, or by what can be reasoned or designed from such evidence. If there is anything that would fit under those headings to support the notion of the brain either being infinite, or being an antenna that receives decodable information from some infinite-information-content source, I would love to hear it.

You and/or other people might be interested in a paper of mine published some time ago on the possible computational power of the human mind and the way to encode infinite information in the brain: http://arxiv.org/abs/cs/0605065

The key point of the blog post you didn't fully grok was a careful argument that (under certain, seemingly reasonable assumptions) science can never provide evidence in favor of infinite mechanisms...

ben g

-- Hector Zenil http://www.mathrix.org
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote: OTOH, there is no possible real-world test to distinguish a true random sequence from a high-algorithmic-information quasi-random sequence

I know, but the point is not whether we can distinguish it, but that quantum mechanics actually predicts physical reality to be intrinsically capable of non-deterministic randomness, while for a Turing machine that is impossible by definition. I find quite convincing and interesting the way in which the mathematical proof of the standard model of quantum computation as Turing computable has been put in jeopardy by physical reality.

So I don't find this argument very convincing...

On Sun, Nov 30, 2008 at 10:42 PM, Hector Zenil [EMAIL PROTECTED] wrote: ...

-- Hector Zenil http://www.mathrix.org
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 4:53 AM, Hector Zenil [EMAIL PROTECTED] wrote: ... I find quite convincing and interesting the way in which the mathematical proof of the standard model of quantum computation as Turing computable has been put in jeopardy by physical reality.

or at least by a model of physical reality... =) (a reality, by the way, that the authors of the mathematical proof believe in as the most basic)

...

-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] I intend to live forever, or die trying. -- Groucho Marx
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
But I don't get your point at all, because the whole idea of nondeterministic randomness has nothing to do with physical reality... true random numbers are uncomputable entities whose existence can never be demonstrated, and any finite series of observations can be modeled equally well as the first N bits of an uncomputable series or of a computable one...

ben g

On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote: ...
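One way to unpack Ben's last sentence: for any finite observation record there is always a trivially computable model that reproduces it exactly, so finite data alone can never force an uncomputable hypothesis. A toy sketch of that observation (mine, not the thread's):

    def computable_model_for(record):
        # Return the source of a program whose output is exactly the
        # given finite record; its existence shows that no finite data
        # set can, by itself, certify an uncomputable source.
        return f"print({list(record)!r})"

    # Pretend these bits came from a "truly random" quantum device:
    observed = [0, 1, 1, 0, 1, 0, 0, 1]
    print(computable_model_for(observed))   # -> print([0, 1, 1, 0, 1, 0, 0, 1])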
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote: But I don't get your point at all, because the whole idea of nondeterministic randomness has nothing to do with physical reality...

It has everything to do with it when it is about quantum mechanics. Quantum mechanics is non-deterministic by nature. A quantum computer, even within the standard model of quantum computation, could then take advantage of this intrinsic property of the physical (quantum) reality (assuming the model is correct, as most physicists would).

true random numbers are uncomputable entities whose existence can never be demonstrated, and any finite series of observations can be modeled equally well as the first N bits of an uncomputable series or of a computable one...

That's the point; that's what the classical theory of computability would say (also making some assumptions, namely Church's thesis), but again quantum mechanics says something else. The fact that quantum computers are capable of non-deterministic randomness by definition, and Turing machines are incapable of non-deterministic randomness also by definition, seems incompatible with the claim (or mathematical proof) that standard quantum computers compute exactly the same functions as Turing machines. And that's only when dealing with standard quantum computation, because non-standard quantum computation is far from being proved to reduce to the Turing-computable (modulo their speed-up).

Concerning the observations, you don't need to do an infinite number of them to get a non-computable answer from an Oracle (although you would need to, if you wanted to finitely verify it). And even if you can model equally well the first N bits of a non-deterministic random sequence, the fact that a random sequence is ontologically of a non-deterministic nature makes it a priori different in essence from a pseudo-random sequence. The point is not epistemological.

In any case, whether or not we agree on the philosophical matter, my point is that it is not the case that there is a mathematical proof about quantum systems computing exactly the same functions as Turing machines. There is a mathematical proof that the standard model of quantum computation computes the same set of functions as Turing machines.

On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote: ...
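Hector's aside about oracles reflects a genuine asymmetry worth spelling out: an oracle's claim that a program halts can be finitely verified by running the program, while a claim that it never halts admits no finite verification. A sketch under stated assumptions (the generator-based step counting is my own hypothetical framing, and the step budget stands in for "as long as you are willing to wait"):

    def check_halts_claim(program, max_steps):
        # Try to finitely verify an oracle's claim that `program` halts.
        # `program` is a zero-argument generator function that yields
        # once per simulated step and returns when it halts.
        steps = 0
        for _ in program():
            steps += 1
            if steps >= max_steps:
                return None   # inconclusive: no finite budget settles "never halts"
        return steps          # verified: it halted within the budget

    def halting_program():
        for _ in range(10):
            yield

    def looping_program():
        while True:
            yield

    print(check_halts_claim(halting_program, 1000))  # 10 -> claim verified
    print(check_halts_claim(looping_program, 1000))  # None -> forever inconclusive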
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote: But I don't get your point at all, because the whole idea of nondeterministic randomness has nothing to do with physical reality...

I don't get it. You don't think that quantum mechanics is part of our physical reality (if it is not all of it)?

true random numbers are uncomputable entities whose existence can never be demonstrated,

You can say either that they don't exist, or that they do exist but we don't have access to them. That's a rather philosophical matter, but scientifically QM says the latter. Even more: since bits from a non-deterministic random source are truly independent of each other, something that does not happen when they are produced by a Turing machine, any sequence (even a finite one) is of a different nature from one produced by a Turing machine. In practice, if your claim is that you will not be able to distinguish the difference, you actually would if you let the machine run for a longer period of time: once it has exhausted its physical resources it will either halt or start over (making the random string periodic), while QM says that resources don't matter; a quantum computer will always continue producing non-deterministic (i.e. never periodic) strings of any length, independently of any constraint of time or space!

and any finite series of observations can be modeled equally well as the first N bits of an uncomputable series or of a computable one...

ben g

On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote: ...
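Hector's finite-resources point has a crisp classical counterpart: any generator confined to finitely many internal states must eventually revisit one, after which its output is periodic. A toy sketch (the linear-congruential parameters and the 16-bit state space are arbitrary choices of mine):

    def step(state, a=1103515245, c=12345, m=2**16):
        # A generator deliberately confined to at most 2**16 states.
        return (a * state + c) % m

    seen = {}                    # state -> first time step at which it occurred
    state, t = 1, 0
    while state not in seen:     # must terminate within m iterations
        seen[state] = t
        state = step(state)
        t += 1

    print(f"tail length {seen[state]}, period {t - seen[state]}")

A hypothetical non-deterministic quantum source is not a finite-state machine in this sense, which is exactly the asymmetry being claimed; whether any finite experiment could ever exhibit that asymmetry is Ben's counterpoint.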
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote: I don't get it. You don't think that quantum mechanics is part of our physical reality (if it is not all of it)?

Of course it isn't -- quantum mechanics is a mathematical and conceptual model that we use in order to predict certain finite sets of finite-precision observations, based on other such sets.

true random numbers are uncomputable entities whose existence can never be demonstrated... You can say either that they don't exist, or that they do exist but we don't have access to them. That's a rather philosophical matter, but scientifically QM says the latter.

Sure it does: but there is an equivalent mathematical theory that explains all observations identically to QM, yet doesn't posit any uncomputable entities. So, choosing to posit that these uncomputable entities exist in reality is just a matter of aesthetic or philosophical taste ... you can't really say they exist in reality, because they contribute nothing to the predictive power of QM ...

-- Ben G
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Of course it isn't -- quantum mechanics is a mathematical and conceptual model that we use in order to predict certain finite sets of finite-precision observations, based on other such sets.

Oh, I see! I think that's a matter of philosophical taste as well. I don't think everybody would agree with you, especially if you poll physicists like those who constructed the standard model of quantum computation! We cannot ask Feynman, but I actually asked Deutsch. Not only does he think QM is our most basic physical reality (he thinks math and computer science lie in quantum mechanics), but he even takes quite seriously his theory of parallel universes! And he is not alone.

Speaking for myself, I would agree with you, but I think we would need to relativize the concept of agreement. I don't think QM is just another model of merely mathematical value for making finite predictions. I think physical models say something about our physical reality. If you deny QM as part of our physical reality, then I guess you deny any other physical model, and I wonder what is then left to you. You would perhaps embrace total skepticism, perhaps even solipsism. Current trends have moved from there to more relativized positions, where models are considered just that, models, but still with some value as descriptions of our actual physical reality (just as Newtonian physics is not simply wrong after General Relativity, since it still describes a huge part of our physical reality). In the end, even if you claim a Platonic physical reality to which we have no access at all, not even through our best explanations in the form of models, the world is either quantum or not (as we have defined the theory), and as long as QM remains our best explanation of the phenomena it characterizes, one has to weigh it against the models describing other aspects of our best-known physical reality.

It is not clear to me how you would deny the physical reality of QM yet defend the theory of computability or algorithmic information theory as if they were more basic than QM. If we take QM and AIT as equally basic, even in a practical sense, there are incompatibilities in essence: QM cannot be said to be Turing computable, and AIT cannot posit the nonexistence of non-deterministic randomness, especially when QM says something else. I am more on the side of AIT, but I think the question is open, is interesting (both philosophically and scientifically), and is not trivial at all.

Sure it does: but there is an equivalent mathematical theory that explains all observations identically to QM, yet doesn't posit any uncomputable entities. So, choosing to posit that these uncomputable entities exist in reality is just a matter of aesthetic or philosophical taste ... you can't really say they exist in reality, because they contribute nothing to the predictive power of QM ...

There are people who think that quantum randomness is actually the source of the complexity we see in the universe [Bennett, Lloyd]. Even though I do not agree with them (since AIT does not require non-deterministic randomness), I think the matter is not that trivial, since serious researchers think it contributes in some fundamental (not only philosophical) way.

-- Hector Zenil http://www.mathrix.org
Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness
Hector Zenil wrote: ...

Still, one must remember that there is Quantum Theory, and then there are the interpretations of Quantum Theory. As I understand things, there are still several models of the universe which yield the same observables, and choosing between them is a matter of taste. They are all totally consistent with standard Quantum Theory... but... well, which do you prefer? Many-worlds? Action at a distance? No objective universe? (I'm not sure what that means.) The present is created by the future as well as the past? As I understand things, these cannot be chosen between on the basis of Quantum Theory. And somewhere in that mix is Wholeness and the Implicate Order. When math gets translated into language, interpretations add things.
RE: [agi] A paper that actually does solve the problem of consciousness
Eric, Without knowing the scientifically measurable effects on the brain of the substance your post mentioned --- I am hypothesizing that the subjective experience you described could be caused, for example, by a greatly increased activation of neurons, or by a great decrease in the operation of the control and tuning mechanisms of the brain, such as those in the basal-ganglia/thalamic/cortical feedback loop. This could result in the large part of the brain that receives and perceives sensation and emotion not being well modulated and gain-controlled, and not having the normal higher-level attention-focusing processes select which relatively small parts of it get high degrees of activation by the parts of your brain that normally control your mind --- the parts most normally associated with self-control, and thus the self --- a scheme selected by evolution so you as an organism can respond to those aspects of the environment that are most relevant to serving your own purposes, as has generally been necessary for the survival of our ancestors, from a Darwinian standpoint.

To use a sociological analogy, it may be a temporary revolution, in which the elites --- the portions of the pre-frontal lobe that normally control the focus of attention of the brain through their domination of the basal ganglia and the thalamus --- lose their ability to keep the mob, the majority of the brain's neurons, in its place. The result is that the senses and emotions run wild, and the part of the brain dedicated to representing the self --- instead of being able to control things --- is overwhelmed and greatly outnumbered by the large portion of the brain dedicated to emotion, sensation, and patterns within them --- so that consciousness is much more directly felt, without significant interference from the self. And being overwhelmed by this sensation, and by its awareness of the being and computation (i.e., a sense of life) of the reality around us --- uninterrupted by the control and voices of the self --- generates a strong sensation that such sensed being is all, and, thus, that we are one with it.

If anyone could give me a concise explanation, or a link to one, of the scientifically studied effects on the brain of the chemicals that give such experiences, I would be interested in reading it, to see to what extent it agrees with the above hypothesis. Ed Porter

-Original Message- From: Eric Burton [mailto:[EMAIL PROTECTED] Sent: Sunday, November 23, 2008 10:50 PM To: agi@v2.listbox.com Subject: Re: [agi] A paper that actually does solve the problem of consciousness

Ego death! This is not as pernicious as it sounds. The death/rebirth trial is a standby of the psilocybin excursion. One realizes one's self has vanished and is reincarnated into all the strangeness of life on earth as if being born. Very much an experience of the physical vessel being re-filled with new spirit stuff, some new soul overly given to wonder at it all. A sensation at the heart of most tryptamine raptures, I think... certainly more overlaid with alien imagery when induced by, say, psilocin than by, say, five-methoxy-DMT. But with almost all the tryptamine/indole hallucinogens this experience of user reboot is often there. As if the user, not the machine, is rebooting. Worthy, but outside list scope ._.

On 11/23/08, Ed Porter [EMAIL PROTECTED] wrote: Ben, I googled ego loss and found a lot of first-person accounts of various experiences. From an AGI/brain-science standpoint they were quite interesting, but I can see why you might not want such accounts to be on this list, other than perhaps if they were copied from other sites and accompanied by third-party deconstruction from a brain-science or AGI standpoint. In fact, some of the accounts were disturbing, and were actually written as cautionary tales. Some of these accounts described ego death. Ego death appears to me to be quite distinct from what I had thought of as ego loss, because it appears to be associated with a sense of fearing death (which presumably one would not do if one had lost one's ego), which in some instances occurred after, or intermittently with, periods of having sensed a loss of ego, and was associated with a fear that one was permanently losing the sense of self that would be necessary for normal human existence. Several people reported having disturbing repercussions from such trips for months or longer. But some of the people who reported ego loss said they felt it was a valuable experience. I forget exactly what various entheogens are supposed to do to the brain, from a measurable brain-science standpoint, but several of the subjective accounts by people claiming to have taken very strong dosages of entheogens described experiences that would be compatible with the failure of normal brain control mechanisms to maintain their normal control, or perhaps
Re: [agi] A paper that actually does solve the problem of consciousness
I remember reading that LSD caused a desegregation of brain faculties, so that patterns of activity produced by normal operation in one region can spill over into adjacent ones, where they're interpreted bizarrely. However, the brain does not go to soup or static, but rather explodes with novel noise or intense satori. So indeed, something else is happening. I think your idea that ego loss is induced by a swelling of abstract senses, squeezing out the structures that deal with your self in an identificatory way, rings true. It's a phenomenon one usually realizes has occurred, rather than going through acutely -- that is, it's in the midst of some other trial that one realizes the conventional self has evaporated, or become thin and transparent like tissue.

The signal-to-noise ratio on content-heavy tryptamines is very high. 5-MeO-DMT, which I mentioned, is actually light on content but does reliably induce a sense of transcendence and universal oneness. I don't know if 5-MeO-DMT satori is an ideal example of the bare ego-death experience. It is certainly also found in stranger substances. Eric B

On 11/24/08, Ed Porter [EMAIL PROTECTED] wrote: ...
Re: [agi] A paper that actually does solve the problem of consciousness
Eric: I think your idea that ego loss is induced by a swelling of abstract senses, squeezing out the structures that deal with your self in an identificatory way, rings true.

I haven't followed this thread closely, but there is an aspect to it, I would argue, which is AGI-relevant. It's not so much ego-loss as ego-abandonment - letting your self go - which is central to mental illness. We are all capable of doing that under pressure - being highly conscious is painful, especially under difficult circumstances. We also all continually diminish (and heighten) our consciousness - diminish rather than abandon our self - by some form of substance abuse, from hard drugs to mild stimulants like coffee and comfort food.

How is that AGI-relevant? Because a true AGI that is continually dealing with creative problems is, and has to be, continually afraid (along with other unpleasant emotions) - i.e., alert to the risks of things going wrong, which they always can - those problems may not be solved. And there is, and has to be, an issue of how much attention the self should pay to those fears (all part of the area of emotional (general) intelligence). In extreme situations, of course, there will be an issue of self-extinction - suicide. When *should* an AGI commit suicide?
Re: [agi] A paper that actually does solve the problem of consciousness
Hey, ego loss is attendant with even modest doses of LSD or psilocybin. At ~700 mics I found that effect to be very much background.

On 11/21/08, Ed Porter [EMAIL PROTECTED] wrote: Ben, Entheogens! What a great word/euphemism. Is it pronounced like Inns (where travelers sleep) + Theo (short for Theodore) + gins (a subset of liquors I normally avoid like the plague, except in the occasional summer gin and tonic with lime)? What is the respective emphasis given to each of these three parts in the proper pronunciation? It is a word that would be deeply appreciated by many at my local Unitarian Church. Ed Porter

-Original Message- From: Ben Goertzel [mailto:[EMAIL PROTECTED] Sent: Thursday, November 20, 2008 7:11 PM To: agi@v2.listbox.com Subject: Re: [agi] A paper that actually does solve the problem of consciousness

When I was in college and LSD was the rage, one of the main goals of the heavy-duty heads was ego loss, which was to achieve a sense of cosmic oneness with all of the universe. It was commonly stated that 1000 micrograms was the ticket to ego loss. I never went there. Nor have I ever achieved cosmic oneness through meditation, although I have achieved a temporary (say fifteen- or thirty-second) feeling of deep, peaceful bliss. Perhaps you have been braver (acid-wise) or luckier or more disciplined (meditation-wise), and have achieved a sense of oneness with the cosmic consciousness. If so, I tip my hat (and give a Colbert wag of the finger) to you.

Not a great topic for public mailing list discussion but ... uh ... yah ... But it's not really so much about the dosage ... entheogens are tools and it's all about what you do with them ;-) ben
RE: [agi] A paper that actually does solve the problem of consciousness
Eric, If, as your post below implies, you have experienced ego loss --- please tell me --- how, if at all, was it different from the sense of oneness with the surrounding world that I described in my post of Fri 11/21/2008 8:02 PM, which started this named thread? That is, how was it different from merely having, for an extended period of time, a oneness with the sensual experience of the computational richness of the external reality around one (or perhaps of just one's breathing and the feelings it engenders) --- a oneness uninterrupted by awareness of oneself as something separate from such sensations, or by the chattering of the chatbot most of us have inside our heads --- other than for the standard effects on sensations and emotions one would routinely associate with being entheogenned? Ed Porter

-Original Message- From: Eric Burton [mailto:[EMAIL PROTECTED] Sent: Sunday, November 23, 2008 11:40 AM To: agi@v2.listbox.com Subject: Re: [agi] A paper that actually does solve the problem of consciousness

Hey, ego loss is attendant with even modest doses of LSD or psilocybin. At ~700 mics I found that effect to be very much background.

On 11/21/08, Ed Porter [EMAIL PROTECTED] wrote: ...
Re: [agi] A paper that actually does solve the problem of consciousness
I don't feel motivated to kill this thread in my role as list moderator, and I agree that what's on or off topic is fairly fuzzy ... but I just have a sense that discussion of the various varieties of drug-induced (or otherwise induced) states of exalted consciousness is a bit off-topic for an AGI list ... anyway I don't feel it quite right to share my own experiences in this regard in this forum ;-) Ben G On Sun, Nov 23, 2008 at 5:21 PM, Ed Porter [EMAIL PROTECTED] wrote: Ben, It's your list, so you get to decide what is off topic. Are you implying all discussion of subjectively describable aspects of human conscious experience is off topic? At least in my own experience, thinking about introspective subjective experiences has played a major role in my thinking about AGI. Thus, I tend to have a bias toward thinking discussions of such thinking are relevant to AGI. If p-consciousness, such as discussed in Richard's paper, is relevant to AGI, then why isn't a-consciousness? Or, perhaps, your implication about what is off topic was more narrow? That is what I assumed, and that is why, in the post you were responding to below, I was asking if there were any describable non-entheogenic aspects of the ego-loss experience other than what I had already described. Ed Porter -Original Message- From: Ben Goertzel [mailto:[EMAIL PROTECTED] Sent: Sunday, November 23, 2008 4:04 PM To: agi@v2.listbox.com Subject: Re: [agi] A paper that actually does solve the problem of consciousness Goodness.. I feel like a) it is mighty hard to draw distinctions about these kinds of experiences in ordinary, informal language... b) this is kinda off topic for the list ;-) ben On Sun, Nov 23, 2008 at 3:28 PM, Ed Porter [EMAIL PROTECTED] wrote: [...]
RE: [agi] A paper that actually does solve the problem of consciousness
Wannabe, If you read my post of Fri 11/21/2008 8:02 PM in this thread, you will see that I said the sense of oneness with the external world many of us feel may just be sensory experience and perception of the external world, uninterrupted by thoughts of oneself or of our brain's chatbot. This would tend to agree with what you say in your post below. Ed Porter -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Saturday, November 22, 2008 2:57 PM To: agi@v2.listbox.com Subject: RE: [agi] A paper that actually does solve the problem of consciousness You guys and your experiments. Well, the whole experience of oneness could also just be the disruption of the orientation association cortex. Jill Bolte Taylor, a neuroscientist, describes this in her book, _My Stroke of Insight_. She had a stroke that affected much of her left hemisphere, including this area that creates awareness of personal boundaries. So she had the whole feeling of oneness with the universe. And now that she has recovered, she is able to shift her consciousness more to her right brain and get back to it. She has a TED talk about it: http://www.ted.com/index.php/talks/jill_bolte_taylor_s_powerful_stroke_of_insight.html andi Quoting Ed Porter [EMAIL PROTECTED]: [...]
RE: [agi] A paper that actually does solve the problem of consciousness
Ben, Entheogens! What a great word/euphemism. Is it pronounced like Inns (where travelers sleep) + Theo (short for Theodore) + gins (a subset of liquors I normally avoid like the plague, except in the occasional summer gin and tonic with lime)? What is the respective emphasis given to each of these three parts in the proper pronunciation? It is a word that would be deeply appreciated by many at my local Unitarian Church. Ed Porter -Original Message- From: Ben Goertzel [mailto:[EMAIL PROTECTED] Sent: Thursday, November 20, 2008 7:11 PM To: agi@v2.listbox.com Subject: Re: [agi] A paper that actually does solve the problem of consciousness When I was in college and LSD was the rage, one of the main goals of the heavy-duty heads was ego loss, which was to achieve a sense of cosmic oneness with all of the universe. It was commonly stated that 1000 micrograms was the ticket to ego loss. I never went there. Nor have I ever achieved cosmic oneness through meditation, although I have achieved a temporary (say fifteen or thirty seconds) feeling of deep peaceful bliss. Perhaps you have been braver (acid-wise), or luckier or more disciplined (meditation-wise), and have achieved a sense of oneness with the cosmic consciousness. If so, I tip my hat (and Colbert wag of the finger) to you. Not a great topic for public mailing list discussion but ... uh ... yah ... But it's not really so much about the dosage ... entheogens are tools and it's all about what you do with them ;-) ben
Re: [agi] A paper that actually does solve the problem of consciousness
Ed Porter wrote: Richard, In response to your below-copied email, I have the following response to the below-quoted portions: ### My prior post That aspects of consciousness seem real does not provide much of an "explanation for consciousness." It says something, but not much. It adds little to Descartes' "I think therefore I am." I don't think it provides much of an answer to any of the multiple questions Wikipedia associates with Chalmers' hard problem of consciousness. ### Richard said I would respond as follows. When I make statements about consciousness deserving to be called real, I am only saying this as a summary of a long argument that has gone before. So it would not really be fair to declare that this statement of mine "says something, but not much" without taking account of the reasons that have been building up toward that statement earlier in the paper. ## My response ## Perhaps --- but this prior work which you claim explains so much is not in the paper being discussed. Without it, it is not clear how much your paper itself contributes. And Ben, who is much more knowledgeable than I on these things, seemed similarly unimpressed. I would say that it does. I believe that the situation is that you do not yet understand it. Ben has had similar trouble, but seems to be comprehending more of the issue as I respond to his questions. (I owe him one response right now; I am working on it.) ### Richard said I am arguing that when we probe the meaning of "real" we find that the best criterion of realness is the way that the system builds a population of concept-atoms that are (a) mutually consistent with one another, ## My response ## I don't know what "mutually consistent" means in this context, and from my memory of reading your paper multiple times, I don't think it explains it, other than perhaps implying that the framework of atoms represents experiential generalizations and associations, which would presumably tend to represent the regularities of experienced reality. I'll grant you that one: I did not explain this idea of mutual consistency in detail. However, the reason I did not is that I really had to assume some background, and I was hoping that the reader would already be aware of the general idea that cognitive systems build their knowledge in the form of concepts that are (largely) consistent with one another, and that it is this global consistency that lends strength to the whole. In other words, all the bits of our knowledge work together. A piece of knowledge like "The Loch Ness Monster lives in Loch Ness" is NOT a piece of knowledge that fits well with all the rest of our knowledge, because we have little or no evidence that such a thing as the Loch Ness Monster has been photographed, observed by independent people, observed by several people at the same time, caught in a trap and taken to a museum, been found as a skeletal remain, bumped into a boat, etc. etc. etc. There are no links from the rest of our knowledge to the LNM fact, so we actually do not credit the LNM as being real. By contrast, facts about Coelacanths are very well connected to the rest of our knowledge, and we believe that they do exist. ### Richard said and (b) strongly supported by sensory evidence (there are other criteria, but those are the main ones). If you think hard enough about these criteria, you notice that the qualia-atoms (those concept-atoms that cause the analysis mechanism to bottom out) score very high indeed.
This is in dramatic contrast to other concept-atoms, like hallucinations, which we consider 'artifacts' precisely because they score so low. The difference between these two is so dramatic that I think we need to allow the qualia-atoms to be called real by all our usual criteria, BUT with the added feature that they cannot be understood in any more basic terms. ## My response ## You seem to be defining "real" here to mean believed to exist in what is perceived as objective reality. I personally believe a sense of subjective reality is much more central to the concept of consciousness. Personal computers of today, which most people don't think have anything approaching a human-like consciousness, could in many tasks estimate whether some signal was "real" in the sense of representing something in objective reality, without being conscious. But a powerful hallucination, combined with a human-level sense of being conscious of it, does not appear to be something any current computer can achieve. So if you are looking for the hard problems of consciousness, focus more on the human subjective sense of awareness, not on whether there is evidence something is real in what we perceive as objective reality. Alas, you have
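Richard's two realness criteria above --- (a) mutual consistency with the rest of one's knowledge and (b) support from sensory evidence --- can be made concrete with a toy score that reproduces the Loch Ness Monster vs. Coelacanth contrast. A minimal Python sketch; the concepts, links, and numeric weights below are invented purely for illustration and appear nowhere in the thread or the paper:

```python
# Toy scoring of Richard's criteria: (a) mutual consistency, measured here as
# the number of links from a concept to the rest of our knowledge, and
# (b) sensory support, a weight in [0, 1]. All values are invented.
knowledge_links = {
    "coelacanth": ["photographed", "caught by fishermen", "held in museums",
                   "described in journals", "independent sightings"],
    "loch ness monster": ["anecdotal reports"],
}
sensory_support = {"coelacanth": 0.9, "loch ness monster": 0.1}

def realness(concept):
    # Well-connected, well-evidenced concepts score high and get called "real".
    return len(knowledge_links[concept]) + 10 * sensory_support[concept]

for concept in knowledge_links:
    print(concept, realness(concept))
# coelacanth scores 14.0; loch ness monster scores 2.0 --- the dramatic gap
# Richard describes between well-grounded concept-atoms and artifacts.
```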
Re: [agi] A paper that actually does solve the problem of consciousness
Hmmm... I don't agree w/ you that the hard problem of consciousness is unimportant or non-critical in a philosophical sense. Far from it. However, from the point of view of this list, I really don't think it needs to be solved (whatever that might mean) in order to build AGI. Of course, I think that because I think the hard problem of consciousness is actually easy: I'm a panpsychist ... I think everything is conscious, and different kinds of structures just focus and amplify this universal consciousness in different ways... Interestingly, this panpsychist perspective is seen as obviously right by most folks deeply involved with meditation or yoga whom I've talked to, and seen as obviously wrong by most scientists I talk to... -- Ben G On Thu, Nov 20, 2008 at 5:26 PM, Ed Porter [EMAIL PROTECTED] wrote: Richard, Thank you for your reply. I started to write a point-by-point response to your reply, copied below, but after 45 minutes I said stop. As interesting as it is, from a philosophical and argumentative-writing standpoint, to play whack-a-mole with your constantly shifting and often contradictory arguments --- right now, I have much more pressing things to do. And I think I have already stated many of my positions on the subject of this thread sufficiently clearly that intelligent people who have a little imagination and really want to can understand them. Since few others besides you have responded to my posts, I don't think there is any community demand that I spend further time on such replies. What little I can add to what I have already said is that, basically, I think the hard problem/easy problem dichotomy is largely, although not totally, pointless. I do not think the hard problem is central to understanding consciousness, because so much of consciousness is excluded from being part of the hard problem. It is excluded either because it can be described verbally by introspection by the mind itself, or because it affects external behavior, and thus, at least according to Wikipedia's definition of p-consciousness, is part of the easy problem. It should be noted that not affecting external behavior excludes one hell of a lot of consciousness, because emotions, which clearly affect external behavior, are so closely associated with much of our sensing of experience. Thus, it seems a large part of what we humans consider to be our subjective sense of experience of consciousness is rejected by hard-problem purists as being part of the easy problem. Richard, you in particular seem to be much more of a hard-problem purist than those who wrote the Wikipedia definition of p-consciousness. This is because in your responses to me you have even excluded from the hard problem any lateral or higher-level associations that one of your bottom-level red-detector nodes might have. This, for example, would arguably exclude from the p-consciousness of the color red the associations between the lowest-level, local red-sensing nodes that are necessary so the activation of such nodes can be recognized as a common color red no matter where they occur in different parts of the visual field. Thus, according to such a definition, qualia for red would have to be different for each location of V1 in which red is sensed --- even when different portions of V1 get mapped into the same portions of the semi-stationary representation your brain builds out of stationary surroundings as your eyes saccade and pan across them. Thus, your concept of the qualia for the color red does not cover a unified color red, and necessarily includes thousands of separate red qualia, each associated with a different portion of V1. Aspects of consciousness that (a) cannot be verbally described by introspection; (b) have no effect on behavior; and (c) cannot involve any associations with the activation of other nodes (an exclusion you, Richard, seem to have added to Wikipedia's description of p-consciousness) --- this defines the hard problem so narrowly as to make it of relatively little, or no, importance. It certainly is not the central question of consciousness, because a sense of experiencing something has no meaning unless it has grounding, and that requires associations in large numbers, which, according to your definition, could not be part of the hard problem. Plus, Richard, you have not even come close to addressing my statement that just because certain aspects of consciousness cannot be verbally described by the introspection of the brain, or revealed by effects on the external behavior of the body itself, does not mean they cannot be subject to further analysis through scientific research --- such as by brain science, brain scanning, brain simulations, and advances in the understanding of AGIs. I have already spent way, way too much time on this response, so I will leave it at that. If you want to think you have won the argument
Re: [agi] A paper that actually does solve the problem of consciousness
Ben: I'm a panpsychist ... You think that all things are sentient/conscious? (I argue that consciousness depends on having a nervous system and being able to feel - and if we could understand the mechanics of that, we would probably have solved the hard problem and be able to give something similar to a machine (which might have to be organic) ). So I'm interested in any alternative/panpsychist views. If you do think that inorganic things like stones, say, are conscious, then surely it would follow, that we should ultimately be able to explain their consciousness, and make even inanimate metallic computers conscious? Care to expand a little on your views?
Re: [agi] A paper that actually does solve the problem of consciousness
well, what does feel mean to you ... what is feeling that a slug can do but a rock or an atom cannot ... are you sure this is an absolute distinction rather than a matter of degree? On Thu, Nov 20, 2008 at 6:15 PM, Mike Tintner [EMAIL PROTECTED] wrote: [...] -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects. -- Robert Heinlein
RE: [agi] A paper that actually does solve the problem of consciousness
Ben, If you place the limitations on what is part of the hard problem that Richard has, most of what you consider part of the hard problem would probably cease to be part of it. In one argument he eliminated things relating to lateral or upward associative connections from being considered part of the hard problem of consciousness. That would eliminate the majority of sources of grounding from any notion of consciousness. Like you, I tend to think that all of reality is conscious, but I think there are vastly different degrees and types of consciousness, and I think there are many meaningful types of consciousness that humans have that most of reality does not have. When I was in college and LSD was the rage, one of the main goals of the heavy-duty heads was ego loss, which was to achieve a sense of cosmic oneness with all of the universe. It was commonly stated that 1000 micrograms was the ticket to ego loss. I never went there. Nor have I ever achieved cosmic oneness through meditation, although I have achieved a temporary (say fifteen or thirty seconds) feeling of deep peaceful bliss. Perhaps you have been braver (acid-wise), or luckier or more disciplined (meditation-wise), and have achieved a sense of oneness with the cosmic consciousness. If so, I tip my hat (and Colbert wag of the finger) to you. Ed Porter -Original Message- From: Ben Goertzel [mailto:[EMAIL PROTECTED] Sent: Thursday, November 20, 2008 5:46 PM To: agi@v2.listbox.com Subject: Re: [agi] A paper that actually does solve the problem of consciousness [...]
Re: [agi] A paper that actually does solve the problem of consciousness
Ben, I suspect you're being evasive. You and I know what feel means. When I feel the wind, I feel cold. When I feel tea poured on my hand, I/it feel/s scalding hot. And we can trace the line of feeling to a considerable extent - no? - through the nervous system and brain. Not only do I feel it internally, but there are normally external signs of my feeling. You see me shivering/wincing etc. And we - science - can interfere with those feelings and anaesthetise or heighten them. Now when the rock is exposed to the same wind or hot tea, if it does feel anything, it stoically and heroically refuses to display any signs whatsoever. It appears to be magnificently indifferent. And if it really is suffering, we wouldn't know what to do to alleviate its suffering. So what do you (or others) mean by inanimate things feeling? I'm mainly seeking enlightenment, not an argument, here - and to see whether your or others' panpsychism has been at all thought through, and is more than an abstract conjunction of concepts. I assume there is some substance to the philosophy - I'd like to know what it is. Ben: well, what does feel mean to you ... what is feeling that a slug can do but a rock or an atom cannot ... are you sure this is an absolute distinction rather than a matter of degree? On Thu, Nov 20, 2008 at 6:15 PM, Mike Tintner [EMAIL PROTECTED] wrote: [...]
Re: [agi] A paper that actually does solve the problem of consciousness
On Fri, Nov 21, 2008 at 2:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote: well, what does feel mean to you ... what is feeling that a slug can do but a rock or an atom cannot ... are you sure this is an absolute distinction rather than a matter of degree? Does a rock compute Fibonacci numbers just to a lesser degree than this program? A concept, like any other. Also, some shades of gray are so thin you'd run out of matter in the Universe to track all the things that light. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/
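Nesov's "this program" presumably linked to actual code that the archive has not preserved; a minimal Python stand-in for the kind of program he means:

```python
def fib(n):
    # Iteratively compute the n-th Fibonacci number (F(0)=0, F(1)=1).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The predicate "computes Fibonacci numbers" is simply true of this procedure and simply false of a rock; Nesov's point is that "feels" may be a concept of the same kind, not a quantity everything possesses to some degree.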
Re: [agi] A paper that actually does solve the problem of consciousness
Ed Porter wrote: [...] Not a great topic for public mailing list discussion but ... uh ... yah ... But it's not really so much about the dosage ... entheogens are tools and it's all about what you do with them ;-) ben
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
Matt Mahoney wrote: Autobliss... Imagine that there is another human language which is the same as English, except that the pain/pleasure-related words have the opposite meaning. Then consider what that would mean for your Autobliss. My definition of pain is negative reinforcement in a system that learns. IMO, pain is more like data with the potential to cause disorder in hard-wired algorithms. I'm not saying this fully covers it, but it's IMO already outside the Autobliss scope. Trent Waddington wrote: Apparently, it was Einstein who said that if you can't explain it to your grandmother then you don't understand it. That was Richard Feynman. Regards, Jiri Jelinek PS: Sorry if I'm missing anything. Being busy, I don't read all posts.
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
On Wed, Nov 19, 2008 at 6:20 PM, Jiri Jelinek [EMAIL PROTECTED] wrote: Trent Waddington wrote: Apparently, it was Einstein who said that if you can't explain it to your grandmother then you don't understand it. That was Richard Feynman When? I don't really know who said it.. but everyone else on teh internets seems to attribute it to Einstein. I've seen at least one site attribute it to the bible (but of course they give no reference). As such, I think there's two nuggets of wisdom here: If you can't provide references, then your opinion is just as good as mine, and if you can provide references, that doesn't excuse you from explaining what you're talking about so that everyone can understand. Two points that many members of this list would do well to heed now and then. Trent
Re: [agi] A paper that actually does solve the problem of consciousness
I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: Ben Hi Richard, Ben I don't have any comments yet about what you have written, Ben because I'm not sure I fully understand what you're trying to Ben say... I hope your answers to these questions will help clarify Ben things. Ben It seems to me that your core argument goes something like this: Ben That there are many concepts for which an introspective analysis Ben can only return the concept itself. That this recursion blocks Ben any possible explanation. That consciousness is one of these Ben concepts because self is inherently recursive. Therefore, Ben consciousness is explicitly blocked from having any kind of Ben explanation. Haven't read the paper yet, but the situation with introspection is the following: introspection accesses a meaning level, at which you can summon and use concepts (subroutines) by name, but you are protected, essentially by information hiding, from looking at the code that implements them. Consider, for example, summoning Microsoft Word to perform some task. You know what you are doing, why you are doing it, and how you intend to use it, but you have no idea of the code within Microsoft Word. The same is true for internal concepts within your mind. Your mind is no more built to be able to look inside subroutines than my laptop is built to output its internal transistor values. Partial results within subroutines are not meaningful; your conscious processing is in terms of meaningful quantities. What is Thought? (MIT Press, 2004) discusses this in Chap. 14, which answers most questions about consciousness, IMO.
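The Microsoft Word analogy is about information hiding: a caller can summon a subroutine by name and use its result while the implementation stays opaque. A minimal Python sketch of that idea; the SpellChecker class and its word list are hypothetical, invented only to illustrate the point:

```python
class SpellChecker:
    """Callers see only this interface; the internals are hidden, just as
    introspection can summon a mental subroutine but not read its code."""

    def check(self, word: str) -> bool:
        return self._lookup(word)

    # The leading underscore marks an internal detail callers are not meant
    # to inspect; its partial results are meaningless outside the routine.
    def _lookup(self, word: str) -> bool:
        return word.lower() in {"red", "green", "blue"}

checker = SpellChecker()
print(checker.check("red"))    # True: a meaningful, usable result...
print(checker.check("qwxzy"))  # False: ...with no view of the work inside.
```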
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
--- On Wed, 11/19/08, Jiri Jelinek [EMAIL PROTECTED] wrote: My definition of pain is negative reinforcement in a system that learns. IMO, pain is more like a data with the potential to cause disorder in hard-wired algorithms. I'm not saying this fully covers it but it's IMO already out of the Autobliss scope. You might be thinking of continuous or uncontrollable pain. Like when a rat is shocked and can stop the shock by turning a paddle wheel, and a second rat receives identical shocks to the first but its paddle wheel has no effect. Only the second rat develops stomach ulcers. When autobliss is run with two negative arguments so that it is punished no matter what it does, the neural network weights take on random values and it never learns a function. It also dies, but only because I programmed it that way. -- Matt Mahoney, [EMAIL PROTECTED]
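autobliss is Mahoney's own program and is not reproduced here; the sketch below is only a toy reconstruction of the behavior the paragraph describes, under the assumption of a two-input learner adjusted by scalar reinforcement. With both reward arguments negative, every response is punished, each punishment pushes the weights toward the opposite response, and the weights just jitter around the decision boundary without ever settling on a function:

```python
import random

# Toy reconstruction (NOT Mahoney's actual autobliss code): a two-input
# thresholded learner trained by scalar reinforcement.
weights = [0.0, 0.0, 0.0]                # two input weights plus a bias
reward_for_1, reward_for_0 = -1.0, -1.0  # punished no matter what it does

def respond(x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + weights[2] > 0 else 0

for _ in range(10000):
    x1, x2 = random.randint(0, 1), random.randint(0, 1)
    out = respond(x1, x2)
    reward = reward_for_1 if out == 1 else reward_for_0
    sign = 1 if out == 1 else -1
    # Negative reward weakens whichever response was just given, so the
    # weights oscillate around the decision boundary instead of converging.
    for i, x in enumerate((x1, x2, 1)):
        weights[i] += 0.01 * reward * sign * x

print(weights)  # hovers near zero: no stable function has been learned
```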
Re: [agi] A paper that actually does solve the problem of consciousness
Ben Goertzel wrote: Richard, I re-read your paper and I'm afraid I really don't grok why you think it solves Chalmers' hard problem of consciousness... It really seems to me like what you're suggesting is a "cognitive correlate of consciousness", to morph the common phrase "neural correlate of consciousness" ... You seem to be stating that when X is an unanalyzable, pure atomic sensation from the perspective of cognitive system C, then C will perceive X as a raw quale ... unanalyzable and not explicable by ordinary methods of explication, yet still subjectively real... But I don't see how the hypothesis Conscious experience is **identified with** unanalyzable mind-atoms could be distinguished empirically from Conscious experience is **correlated with** unanalyzable mind-atoms I think finding cognitive correlates of consciousness is interesting, but I don't think it constitutes solving the hard problem in Chalmers' sense... I grok that you're saying consciousness feels inexplicable because it has to do with atoms that the system can't explain, due to their role as its primitive atoms ... and this is a good idea, but I don't see how it bridges the gap btw subjective experience and empirical data ... What it does is explain why, even if there *were* no hard problem, cognitive systems might feel like there is one, in regard to their unanalyzable atoms. Another worry I have is: I feel like I can be conscious of my son, even though he is not an unanalyzable atom. I feel like I can be conscious of the unique impression he makes ... in the same way that I'm conscious of redness ... and, yeah, I feel like I can't fully explain the conscious impression he makes on me, even though I can explain a lot of things about him... So I'm not convinced that atomic sensor input is the only source of raw, unanalyzable consciousness... My first response to this is that you still don't seem to have taken account of what was said in the second part of the paper - and, at the same time, I can find many places where you make statements that are undermined by that second part. To take the most significant example: when you say: But, I don't see how the hypothesis Conscious experience is **identified with** unanalyzable mind-atoms could be distinguished empirically from Conscious experience is **correlated with** unanalyzable mind-atoms ... there are several concepts buried in there, like [identified with], [distinguished empirically from] and [correlated with], that are theory-laden. In other words, when you use those terms you are implicitly applying some standards that have to do with semantics and ontology, and it is precisely those standards that I attacked in part 2 of the paper. However, there is also another thing I can say about this statement, based on the argument in part one of the paper. It looks like you are also falling victim to the argument in part 1 at the same time that you are questioning its validity: one of the consequences of that initial argument was that, *because* those concept-atoms are unanalyzable, you can never do any such thing as talk about their being only correlated with a particular cognitive event versus actually being identified with that cognitive event! So when you point out that the above distinction seems impossible to make, I say: Yes, of course --- the theory itself just *said* that! So far, all of the serious questions that people have placed at the door of this theory have proved susceptible to that argument. That was essentially what I did when talking to Chalmers.
He came up with an objection very like the one you gave above, so I said: Okay, the answer is that the theory itself predicts that you *must* find that question to be a stumbling block. AND, more importantly, you should be able to see that the strategy I am using here is a strategy that I can flexibly deploy to wipe out a whole class of objections, so the only way around that strategy (if you want to bring down this theory) is to come up with a counter-strategy that demonstrably has the structure to undermine my strategy, and I don't believe you can do that. His only response, IIRC, was: Huh! This looks like it might be new. Send me a copy. To make further progress in this discussion it is important, I think, to understand both the fact that I have that strategy, and also to appreciate that the second part of the paper went far beyond that. Lastly, about your question re. consciousness of extended objects that are not concept-atoms: I think there is some confusion here about what I was trying to say (my fault, perhaps). It is not just the fact of those concept-atoms being at the end of the line; it is actually about what happens to the analysis mechanism. So, what I did was point to the clearest cases where people feel that a subjective experience is in need of explanation - the qualia - and I showed that in
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
Trent, Feynman's page on Wikipedia has it as: If you can't explain something to a first-year student, then you haven't really understood it. But Feynman reportedly said it in a number of ways, including the grandmother variant. I learned about it when taking physics classes a while ago, so I don't have very useful source info, but I remember one of my professors saying that Feynman also says it in his books. But yes, I did a quick search and noticed that many attribute the grandmother variant to Einstein (which I didn't know - sorry). Some attribute it to Ernest Rutherford, some talk about Kurt Vonnegut, and yes, some about the Bible... Well, I guess it's not that important. But one of my related thoughts is that when teaching AGIs, we should start with very high-level basic concepts/explanations/world_model and not dive into great granularity before the high-level concepts are relatively well understood [/correctly used when generating solutions]. I oppose the idea of throwing tons of raw data (from very different granularity levels [and possibly different contexts]) at the AGI and expecting that it will somehow sort everything [or most of it] out correctly. Jiri On Wed, Nov 19, 2008 at 3:39 AM, Trent Waddington [EMAIL PROTECTED] wrote: [...]
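Jiri's teaching-order suggestion is essentially a curriculum: present coarse, high-level material first and descend to finer granularity only afterwards. A minimal Python sketch of that ordering; the lessons and granularity levels are invented for illustration:

```python
# Toy curriculum: sort lessons from coarse (level 0) to fine (level 2) and
# present them in that order, rather than mixing all granularities at once.
lessons = [
    ("beagles are small hounds bred for scent-tracking", 2),
    ("animals exist, move, and eat", 0),
    ("dogs bark and cats meow", 1),
    ("dogs and cats are animals", 1),
]

for text, level in sorted(lessons, key=lambda pair: pair[1]):
    print(f"level {level}: {text}")  # feed to the learner coarse-first
```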
Re: [agi] A paper that actually does solve the problem of consciousness
Lastly, about your question re. consciousness of extended objects that are not concept-atoms. I think there is some confusion here about what I was trying to say (my fault, perhaps). It is not just the fact of those concept-atoms being at the end of the line; it is actually about what happens to the analysis mechanism. So, what I did was point to the clearest cases where people feel that a subjective experience is in need of explanation - the qualia - and I showed that in that case the explanation is a failure of the analysis mechanism because it bottoms out. However, just because I picked that example for the sake of clarity, that does not mean that the *only* place where the analysis mechanism can get into trouble must be just when it bumps into those peripheral atoms. I tried to explain this in a previous reply to someone (perhaps it was you): it would be entirely possible that higher-level atoms could get built to represent [a sum of all the qualia-atoms that are part of one object], and if that happened we might find that this higher-level atom was partly analyzable (it is composed of lower-level qualia) and partly not (any analysis hits the brick wall after one successful unpacking step). OK, I think I get that... I think that's the easy part ;-) Indeed, the analysis mechanism can get into trouble just due to its limited capacity. Other aspects of the mind can pack together complex mental structures, which the analysis mechanism perceives as tokens with some evocative power, but which it lacks the capacity to decompose into parts. So these can appear to it as indecomposable too, in a related but slightly different sense from peripheral atoms... ben
Re: [agi] A paper that actually does solve the problem of consciousness
Richard, [...] Well, suppose I am studying your brain with a super-advanced brain-monitoring device ... Then, suppose that I, using the brain-monitoring device, identify the brain-response pattern that uniquely occurs when you look at something red ... I can then pose the question: Is your experience of red *identical* to this brain-response pattern ... or is it correlated with this brain-response pattern? I can pose this question even though the cognitive atoms corresponding to this brain-response pattern are unanalyzable from your perspective... Next, note that I can also turn the same brain-monitoring device on myself... So I don't see why the question is unaskable ... it seems askable, because these concept-atoms in question are experience-able even if not analyzable... that is, they still form mental content even though they aren't susceptible to explanation as you describe it... I agree that, subjectively or empirically, there is no way to distinguish Conscious experience is **identified with** unanalyzable mind-atoms from Conscious experience is **correlated with** unanalyzable mind-atoms and it seems to me that this indicates you have NOT solved the hard problem, but only restated it in a different (possibly useful) way -- Ben G
Re: [agi] A paper that actually does solve the problem of consciousness
Ben Goertzel wrote: [...] I agree that, subjectively or empirically, there is no way to distinguish Conscious experience is **identified with** unanalyzable mind-atoms from Conscious experience is **correlated with** unanalyzable mind-atoms and it seems to me that this indicates you have NOT solved the hard problem, but only restated it in a different (possibly useful) way There are several different approaches and comments that I could take with what you just wrote, but let me focus on just one: the last one. When you make a statement such as "... it seems to me that ... you have NOT solved the hard problem, but only restated it", you are implicitly bringing to the table a set of ideas about what it means to solve this problem, or explain consciousness. Fine so far: everyone uses the rules of explanation that they have acquired over a lifetime - and of course in science we all roughly agree on a set of ideas about what it means to explain things.
But what I am trying to point out in this paper is that, because of the nature of intelligent systems and how they must do their job, the very concept of *explanation* is undermined by the topic that in this case we are trying to explain. You cannot just go right ahead and apply a standard of explanation right out of the box (so to speak) because, unlike explaining atoms and explaining stars, in this case you are trying to explain something that interferes with the notion of explanation. So when you imply that the theory I propose is weak *because* it provides no way to distinguish: Conscious experience is **identified with** unanalyzable mind-atoms from Conscious experience is **correlated with** unanalyzable mind-atoms you are missing the main claim that the theory tries to make: that such distinctions are broken precisely *because* of what is going on with the explanandum. You have got to get this point to be able to understand the paper. I mean, it is okay to disagree with the point and say why (to talk about what it means to 'explain things'; to talk about the connection between the explanandum and the methods and basic terms of the thing that we call 'explaining things'). That would be fine. But at the moment it seems to me that you have been through several passes
Re: [agi] A paper that actually does solve the problem of consciousness
Richard, So are you saying that: According to the ordinary scientific standards of 'explanation', the subjective experience of consciousness cannot be explained ... and as a consequence, the relationship between subjective consciousness and physical data (as required to be elucidated by any solution to Chalmers' hard problem as normally conceived) also cannot be explained. If so, then: according to the ordinary scientific standards of explanation, you are not explaining consciousness, nor explaining the relation btw consciousness and the physical ... but are rather **explaining why, due to the particular nature of consciousness and its relationship to the ordinary scientific standards of explanation, this kind of explanation is not possible** ?? ben g On Wed, Nov 19, 2008 at 4:05 PM, Richard Loosemore [EMAIL PROTECTED]wrote: Ben Goertzel wrote: Richard, My first response to this is that you still don't seem to have taken account of what was said in the second part of the paper - and, at the same time, I can find many places where you make statements that are undermined by that second part. To take the most significant example: when you say: But, I don't see how the hypothesis Conscious experience is **identified with** unanalyzable mind-atoms could be distinguished empirically from Conscious experience is **correlated with** unanalyzable mind-atoms ... there are several concepts buried in there, like [identified with], [distinguished empirically from] and [correlated with] that are theory-laden. In other words, when you use those terms you are implictly applying some standards that have to do with semantics and ontology, and it is precisely those standards that I attacked in part 2 of the paper. However, there is also another thing I can say about this statement, based on the argument in part one of the paper. It looks like you are also falling victim to the argument in part 1, at the same time that you are questioning its validity: one of the consequences of that initial argument was that *because* those concept-atoms are unanalyzable, you can never do any such thing as talk about their being only correlated with a particular cognitive event versus actually being identified with that cognitive event! So when you point out that the above distinction seems impossible to make, I say: Yes, of course: the theory itself just *said* that!. So far, all of the serious questions that people have placed at the door of this theory have proved susceptible to that argument. Well, suppose I am studying your brain with a super-advanced brain-monitoring device ... Then, suppose that I, using the brain-monitoring device, identify the brain response pattern that uniquely occurs when you look at something red ... I can then pose the question: Is your experience of red *identical* to this brain-response pattern ... or is it correlated with this brain-response pattern? I can pose this question even though the cognitive atoms corresponding to this brain-response pattern are unanalyzable from your perspective... Next, note that I can also turn the same brain-monitoring device on myself... So I don't see why the question is unaskable ... it seems askable, because these concept-atoms in question are experience-able even if not analyzable... that is, they still form mental content even though they aren't susceptible to explanation as you describe it... 
I agree that, subjectively or empirically, there is no way to distinguish Conscious experience is **identified with** unanalyzable mind-atoms from Conscious experience is **correlated with** unanalyzable mind-atoms and it seems to me that this indicates you have NOT solved the hard problem, but only restated it in a different (possibly useful) way. There are several different approaches and comments that I could take with what you just wrote, but let me focus on just one; the last one. When you make a statement such as ... it seems to me that ... you have NOT solved the hard problem, but only restated it, you are implicitly bringing to the table a set of ideas about what it means to solve this problem, or explain consciousness. Fine so far: everyone uses the rules of explanation that they have acquired over a lifetime - and of course in science we all roughly agree on a set of ideas about what it means to explain things. But what I am trying to point out in this paper is that because of the nature of intelligent systems and how they must do their job, the very concept of *explanation* is undermined by the topic that in this case we are trying to explain. You cannot just go right ahead and apply a standard of explanation right out of the box (so to speak) because unlike explaining atoms and explaining stars, in this case you are trying to explain something that
Re: [agi] A paper that actually does solve the problem of consciousness
Ed, I'd be curious for your reaction to http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html which explores the limits of scientific and linguistic explanation, in a different but possibly related way to Richard's argument. Science and language are powerful tools for explanation but there is no reason to assume they are all-powerful. We should push them as far as we can, but no further... I agree with Richard that according to standard scientific notions of explanation, consciousness and its relation to the physical world are inexplicable. My intuition and reasoning are probably not exactly the same as his, but there seems some similarity btw our views... -- Ben G On Wed, Nov 19, 2008 at 5:27 PM, Ed Porter [EMAIL PROTECTED] wrote: Richard, (the second half of this post, the one starting with the all-capitalized heading, is the most important) I agree with your extreme cognitive semantics discussion. I agree with your statement that one criterion for realness is the directness and immediateness of something's phenomenology. I agree with your statement that, based on this criterion for realness, many conscious phenomena, such as qualia, which have traditionally fallen under the hard problem of consciousness seem to be real. But I have problems with some of the conclusions you draw from these things, particularly in your Implications section at the top of the second column on Page 5 of your paper. There you state …the correct explanation for consciousness is that all of its various phenomenological facets deserve to be called as real as any other concept we have, because there are no meaningful objective standards that we could apply to judge them otherwise. That aspects of consciousness seem real does not provide much of an explanation for consciousness. It says something, but not much. It adds little to Descartes' I think therefore I am. I don't think it provides much of an answer to any of the multiple questions Wikipedia associates with Chalmers' hard problem of consciousness. You further state that some aspects of consciousness have a unique status of being beyond the reach of scientific inquiry and give a purported reason why they are beyond such a reach. Similarly you say: …although we can never say exactly what the phenomena of consciousness are, in the way that we give scientific explanations for other things, we can nevertheless say exactly why we cannot say anything: so in the end, we can explain it. First, I would point out as I have in my prior papers that, given the advances that are expected to be made in AGI, brain scanning and brain science in the next fifty years, it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality. You admit there are easy problems of consciousness that can be explained, just as there are easy parts of physical reality that can be explained. But it is not clear that the percent of consciousness that will remain a mystery in fifty years is any larger than the percent of basic physical reality that will remain a mystery in that time frame. But even if we accept as true your statement that certain phenomena of consciousness are beyond analysis, that does little to explain consciousness. In fact, it does not appear to answer any of the hard problems of consciousness.
For example, just because (a) we are conscious of the distinction used in our own mind's internal representation between sensation of the colors red and blue, (b) we allegedly cannot analyze that difference further, and (c) that distinction seems subjectively real to us --- that does not shed much light on whether or not a p-zombie would be capable of acting just like a human without having consciousness of red and blue color qualia. It is not even clear to me that your paper shows consciousness is not an artifact, as your abstract implies. Just because something is real does not mean it is not an artifact, in many senses of the word, such as an unintended, secondary, or unessential aspect of something. THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON THE PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE SUCH BOTTOMING OUT -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO CONSCIOUSNESS. It is my belief that if you want to understand consciousness in the context of the types of things discussed in your paper, you should focus on the part of the molecular framework, which you imply is largely in the foreground, that prevents the system from returning with no answer, even when trying to analyze a node such as a lowest level input node for the color red in a given portion of the visual field. This is the part of your molecular framework that …because of
Re: [agi] A paper that actually does solve the problem of consciousness
Ben Goertzel wrote: Richard, So are you saying that: According to the ordinary scientific standards of 'explanation', the subjective experience of consciousness cannot be explained ... and as a consequence, the relationship between subjective consciousness and physical data (as required to be elucidated by any solution to Chalmers' hard problem as normally conceived) also cannot be explained. If so, then: according to the ordinary scientific standards of explanation, you are not explaining consciousness, nor explaining the relation btw consciousness and the physical ... but are rather **explaining why, due to the particular nature of consciousness and its relationship to the ordinary scientific standards of explanation, this kind of explanation is not possible** ?? No! If you write the above, then you are summarizing the question that I pose at the half-way point of the paper, just before the second part gets underway. The ordinary scientific standards of explanation are undermined by questions about consciousness. They break. You cannot use them. They become internally inconsistent. You cannot say I hereby apply the standard mechanism of 'explanation' to Problem X, but then admit that Problem X IS the very mechanism that is responsible for determining the 'explanation' method you are using, AND the one thing you know about that mechanism is that you can see a gaping hole in the mechanism! You have to find a way to mend that broken standard of explanation. I do that in part 2. So far we have not discussed the whole paper, only part 1. Richard Loosemore
Re: [agi] A paper that actually does solve the problem of consciousness
Ok, well I read part 2 three times and I seem not to be getting the importance or the crux of it. I hate to ask this, but could you possibly summarize it in some different way, in the hopes of getting through to me?? I agree that the standard scientific approach to explanation breaks when presented with consciousness. I do not (yet) understand your proposed alternative approach to explanation. If anyone on this list *does* understand it, feel free to chip in with your own attempted summary... thx ben On Wed, Nov 19, 2008 at 5:47 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Ben Goertzel wrote: Richard, So are you saying that: According to the ordinary scientific standards of 'explanation', the subjective experience of consciousness cannot be explained ... and as a consequence, the relationship between subjective consciousness and physical data (as required to be elucidated by any solution to Chalmers' hard problem as normally conceived) also cannot be explained. If so, then: according to the ordinary scientific standards of explanation, you are not explaining consciousness, nor explaining the relation btw consciousness and the physical ... but are rather **explaining why, due to the particular nature of consciousness and its relationship to the ordinary scientific standards of explanation, this kind of explanation is not possible** ?? No! If you write the above, then you are summarizing the question that I pose at the half-way point of the paper, just before the second part gets underway. The ordinary scientific standards of explanation are undermined by questions about consciousness. They break. You cannot use them. They become internally inconsistent. You cannot say I hereby apply the standard mechanism of 'explanation' to Problem X, but then admit that Problem X IS the very mechanism that is responsible for determining the 'explanation' method you are using, AND the one thing you know about that mechanism is that you can see a gaping hole in the mechanism! You have to find a way to mend that broken standard of explanation. I do that in part 2. So far we have not discussed the whole paper, only part 1. Richard Loosemore -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects. -- Robert Heinlein
Re: [agi] A paper that actually does solve the problem of consciousness
Ed Porter wrote: Richard, /(the second half of this post, that starting with the all capitalized heading, is the most important)/ I agree with your extreme cognitive semantics discussion. I agree with your statement that one criterion for “realness” is the directness and immediateness of something’s phenomenology. I agree with your statement that, based on this criterion for “realness,” many conscious phenomena, such as qualia, which have traditionally fallen under the hard problem of consciousness seem to be “real.” But I have problems with some of the conclusions you draw from these things, particularly in your “Implications” section at the top of the second column on Page 5 of your paper. There you state “…the correct explanation for consciousness is that all of its various phenomenological facets deserve to be called as “real” as any other concept we have, because there are no meaningful /objective /standards that we could apply to judge them otherwise.” That aspects of consciousness seem real does not provide much of an “explanation for consciousness.” It says something, but not much. It adds little to Descartes’ “I think therefore I am.” I don’t think it provides much of an answer to any of the multiple questions Wikipedia associates with Chalmers’ hard problem of consciousness. I would respond as follows. When I make statements about consciousness deserving to be called real, I am only saying this as a summary of a long argument that has gone before. So it would not really be fair to declare that this statement of mine says something, but not much without taking account of the reasons that have been building up toward that statement earlier in the paper. I am arguing that when we probe the meaning of real we find that the best criterion of realness is the way that the system builds a population of concept-atoms that are (a) mutually consistent with one another, and (b) strongly supported by sensory evidence (there are other criteria, but those are the main ones). If you think hard enough about these criteria, you notice that the qualia-atoms (those concept-atoms that cause the analysis mechanism to bottom out) score very high indeed. This is in dramatic contrast to other concept-atoms like hallucinations, which we consider 'artifacts' precisely because they score so low. The difference between these two is so dramatic that I think we need to allow the qualia-atoms to be called real by all our usual criteria, BUT with the added feature that they cannot be understood in any more basic terms. Now, all of that (and more) lies behind the simple statement that they should be called real. It wouldn't make much sense to judge that statement by itself. Only judge the argument behind it. You further state that some aspects of consciousness have a unique status of being beyond the reach of scientific inquiry and give a purported reason why they are beyond such a reach. Similarly you say: “…although we can never say exactly what the phenomena of consciousness are, in the way that we give scientific explanations for other things, we can nevertheless say exactly why we cannot say anything: so in the end, we can explain it.” First, I would point out as I have in my prior papers that, given the advances that are expected to be made in AGI, brain scanning and brain science in the next fifty years, it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality.
You admit there are easy problems of consciousness that can be explained, just as there are easy parts of physical reality that can be explained. But it is not clear that the percent of consciousness that will remain a mystery in fifty years is any larger than the percent of basic physical reality that will remain a mystery in that time frame. The paper gives a clear argument for *why* it cannot be explained. So to contradict that argument (to say it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality) you have to say why the argument does not work. It would make no sense for a person to simply assert the opposite of the argument's conclusion, without justification. The argument goes into plenty of specific details, so there are many kinds of attack that you could make. But even if we accept as true your statement that certain phenomena of consciousness are beyond analysis, that does little to explain consciousness. In fact, it does not appear to answer any of the hard problems of consciousness. For example, just because (a) we are conscious of the distinction used in our own mind’s internal representation between sensation of the colors red and blue, (b) we allegedly cannot analyze that difference further, and (c) that distinction seems subjectively real to us --- that does not shed much light on whether or not a p-zombie would be
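Richard's two realness criteria above (mutual consistency across the atom population, plus strong sensory support) lend themselves to a toy illustration. The Python sketch below is purely illustrative: the class, the example scores, the 0.5 threshold, and the product rule are all invented here and appear nowhere in the paper; the only point is that qualia-atoms and hallucination-atoms come out on opposite sides of the same test, with the qualia-atoms carrying the extra flag that analysis bottoms out on them.

```python
# Illustrative sketch only; no element of this is taken from the paper.
from dataclasses import dataclass

@dataclass
class ConceptAtom:
    name: str
    consistency: float       # (a) mutual consistency with the rest of the atoms, 0..1
    sensory_support: float   # (b) strength of sensory-evidence support, 0..1
    analyzable: bool         # whether the analysis mechanism can decompose it

def realness(atom: ConceptAtom) -> float:
    # Both criteria must be high; a simple product captures the conjunction.
    return atom.consistency * atom.sensory_support

atoms = [
    ConceptAtom("red-quale", 0.95, 0.95, False),
    ConceptAtom("hallucinated-voice", 0.20, 0.10, True),
]

for a in atoms:
    verdict = "real" if realness(a) > 0.5 else "artifact"
    note = " (real, but bottoms out under analysis)" if verdict == "real" and not a.analyzable else ""
    print(f"{a.name}: score {realness(a):.2f} -> {verdict}{note}")
```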
RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction
From: Trent Waddington [mailto:[EMAIL PROTECTED] On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye with a needle (the standard way to draw blood) even though it squirms just like a human would. It is surprisingly easy to modify one's ethics to feel this way, as proven by the Milgram experiments and Nazi war crime trials. I'm sure you're not meaning to suggest that scientists commonly rationalize in this way, nor that they are all Nazi war criminals for experimenting on animals. I feel the need to remind people that animal rights is a fringe movement that does not represent the views of the majority. We experiment on animals because the benefits, to humans, are considered worthwhile. I like animals. And I like the idea of coming up with cures to diseases and testing them on animals first. In college my biologist roommate protested the torture of fruit flies. My son has started playing video games where you shoot, zap and chemically immolate the opponent, so I need to explain to him that those bad guys are not conscious...yet. I don't know if there are guidelines. Humans, being the rulers of the planet, appear as godlike beings to other conscious inhabitants. That brings responsibility. So when we start coming up with AI stuff in the lab that attains certain levels of consciousness we have to know what consciousness is in order to govern our behavior. And naturally if some superintelligent space alien or rogue interstellar AI encounters us and decides that we are a culinary delicacy and wants to grow us en masse economically, we hope that some respect is given, eh? Reminds me of hearing that some farms are experimenting with growing chickens w/o heads. Animal rights may be more than just a fringe movement. Kind of like Mike - http://en.wikipedia.org/wiki/Mike_the_Headless_Chicken John
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
I mean that people are free to decide if others feel pain. Wow! You are one sick puppy, dude. Personally, you have just hit my Do not bother debating with list. You can decide anything you like -- but that doesn't make it true. - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 4:44 PM Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction --- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: First, it is not clear people are free to decide what makes pain real, at least subjectively real. I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye with a needle (the standard way to draw blood) even though it squirms just like a human would. It is surprisingly easy to modify one's ethics to feel this way, as proven by the Milgram experiments and Nazi war crime trials. If we have anything close to the advances in brain scanning and brain science that Kurzweil predicts [1], we should come to understand the correlates of consciousness quite well No. I used examples like autobliss ( http://www.mattmahoney.net/autobliss.txt ) and the roundworm c. elegans as examples of simple systems whose functions are completely understood, yet the question of whether such systems experience pain remains a philosophical question that cannot be answered by experiment. -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] A paper that actually does solve the problem of consciousness
Richard Loosemore wrote: Harry Chesley wrote: Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf One other point: Although this is a possible explanation for our subjective experience of qualia like red or soft, I don't see it explaining pain or happy quite so easily. You can hypothesize a sort of mechanism-level explanation of those by relegating them to the older or lower parts of the brain (i.e., they're atomic at the conscious level, but have more effects at the physiological level (like releasing chemicals into the system)), but that doesn't satisfactorily cover the subjective side for me. I do have a quick answer to that one. Remember that the core of the model is the *scope* of the analysis mechanism. If there is a sharp boundary (as well there might be), then this defines the point where the qualia kick in. Pain receptors are fairly easy: they are primitive signal lines. Emotions are, I believe, caused by clusters of lower brain structures, so the interface between lower brain and foreground is the place where the foreground sees a limit to the analysis mechanisms. More generally, the significance of the foreground is that it sets a boundary on how far the analysis mechanisms can reach. I am not sure why that would seem less satisfactory as an explanation of the subjectivity. It is a raw feel, and that is the key idea, no? My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that doesn't explain to me why it feels qualitatively different. If it was just something like increased activity (franticness) in response to searing hot, then fine, that could just be something like adrenaline being pumped into the system, but there is a subjective feeling that goes beyond that.
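Richard's "bottoming out" behaviour, quoted above, can be made concrete with a minimal sketch. Everything in it is an assumption for illustration (the STRUCTURE table, the analyze function, the example atoms); the paper specifies no such data structure. The essential move is that a primitive atom at the foreground boundary returns nothing analyzable, rather than an error:

```python
# Illustrative sketch only: names and data are invented, not the paper's.
STRUCTURE = {
    "chair": ["seat", "legs", "back"],   # composite concept-atom: analysis succeeds
    "seat": ["surface", "supports"],
    # "red" has no entry at all: it sits at the edge of the analysis
    # mechanism's scope (the foreground boundary)
}

def analyze(atom):
    """Return sub-atoms, or None when analysis bottoms out."""
    return STRUCTURE.get(atom)  # None is an answer of "nothing", not an error

for atom in ("chair", "red"):
    parts = analyze(atom)
    if parts is None:
        print(f"{atom}: no decomposition available -> presents as a raw feel")
    else:
        print(f"{atom}: decomposes into {parts}")
```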
Re: [agi] A paper that actually does solve the problem of consciousness
My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that doesn't explain to me why it feels qualitatively different. If it was just something like increased activity (franticness) in response to searing hot, then fine, that could just be something like adrenaline being pumped into the system, but there is a subjective feeling that goes beyond that. Maybe I missed it but why do you assume that because qualia are atomic that they have no differentiable details? Evolution is, quite correctly, going to give pain qualia higher priority and less ability to be shut down than red qualia. In a good representation system, that means that searing hot is going to be *very* whatever and very tough to ignore. - Original Message - From: Harry Chesley [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, November 18, 2008 1:57 PM Subject: **SPAM** Re: [agi] A paper that actually does solve the problem of consciousness Richard Loosemore wrote: Harry Chesley wrote: Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf One other point: Although this is a possible explanation for our subjective experience of qualia like red or soft, I don't see it explaining pain or happy quite so easily. You can hypothesize a sort of mechanism-level explanation of those by relegating them to the older or lower parts of the brain (i.e., they're atomic at the conscious level, but have more effects at the physiological level (like releasing chemicals into the system)), but that doesn't satisfactorily cover the subjective side for me. I do have a quick answer to that one. Remember that the core of the model is the *scope* of the analysis mechanism. If there is a sharp boundary (as well there might be), then this defines the point where the qualia kick in. Pain receptors are fairly easy: they are primitive signal lines. Emotions are, I believe, caused by clusters of lower brain structures, so the interface between lower brain and foreground is the place where the foreground sees a limit to the analysis mechanisms. More generally, the significance of the foreground is that it sets a boundary on how far the analysis mechanisms can reach. I am not sure why that would seem less satisfactory as an explanation of the subjectivity. It is a raw feel, and that is the key idea, no? My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that doesn't explain to me why it feels qualitatively different. If it was just something like increased activity (franticness) in response to searing hot, then fine, that could just be something like adrenaline being pumped into the system, but there is a subjective feeling that goes beyond that. 
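Mark's representation-system point can also be sketched in a few lines, with the caveat that the Quale fields and all the numbers below are invented for illustration rather than drawn from anyone's actual design. Atomicity means having no internal structure visible to analysis; it does not prevent atoms from differing in externally attached properties such as priority and how easily attention can suppress them:

```python
# Illustrative sketch of priority-bearing atomic qualia; values are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quale:
    name: str
    priority: float         # how strongly it claims attention
    suppressibility: float  # how easily it can be ignored (0 = impossible)

red = Quale("red", priority=0.2, suppressibility=0.9)
searing_hot = Quale("searing hot", priority=0.99, suppressibility=0.05)

def breaks_through(q, distraction):
    # A quale wins attention when its priority beats the distraction level
    # discounted by how suppressible the quale is.
    return q.priority > distraction * q.suppressibility

for q in (red, searing_hot):
    print(f"{q.name} breaks through heavy distraction: {breaks_through(q, 0.8)}")
```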
Re: [agi] A paper that actually does solve the problem of consciousness
Mark Waser wrote: My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that doesn't explain to me why it feels qualitatively different. If it was just something like increased activity (franticness) in response to searing hot, then fine, that could just be something like adrenaline being pumped into the system, but there is a subjective feeling that goes beyond that. Maybe I missed it but why do you assume that because qualia are atomic that they have no differentiable details? Evolution is, quite correctly, going to give pain qualia higher priority and less ability to be shut down than red qualia. In a good representation system, that means that searing hot is going to be *very* whatever and very tough to ignore. I thought that was the meaning of atomic as used in the paper. Maybe I got it wrong.
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: I mean that people are free to decide if others feel pain. Wow! You are one sick puppy, dude. Personally, you have just hit my Do not bother debating with list. You can decide anything you like -- but that doesn't make it true. Aren't you the one who decided that autobliss feels pain? Or did you decide that it doesn't? -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] A paper that actually does solve the problem of consciousness
Harry Chesley wrote: Richard Loosemore wrote: Harry Chesley wrote: Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf One other point: Although this is a possible explanation for our subjective experience of qualia like red or soft, I don't see it explaining pain or happy quite so easily. You can hypothesize a sort of mechanism-level explanation of those by relegating them to the older or lower parts of the brain (i.e., they're atomic at the conscious level, but have more effects at the physiological level (like releasing chemicals into the system)), but that doesn't satisfactorily cover the subjective side for me. I do have a quick answer to that one. Remember that the core of the model is the *scope* of the analysis mechanism. If there is a sharp boundary (as well there might be), then this defines the point where the qualia kick in. Pain receptors are fairly easy: they are primitive signal lines. Emotions are, I believe, caused by clusters of lower brain structures, so the interface between lower brain and foreground is the place where the foreground sees a limit to the analysis mechanisms. More generally, the significance of the foreground is that it sets a boundary on how far the analysis mechanisms can reach. I am not sure why that would seem less satisfactory as an explanation of the subjectivity. It is a raw feel, and that is the key idea, no? My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that doesn't explain to me why it feels qualitatively different. If it was just something like increased activity (franticness) in response to searing hot, then fine, that could just be something like adrenaline being pumped into the system, but there is a subjective feeling that goes beyond that. There is more than one question wrapped up inside this question, I think. First: all qualia feel different, of course. You seem to be pointing to a sense in which pain is more different than most? But is that really a valid idea? Does pain have differentiable details? Well, there are different types of pain but that is to be expected, like different colors. But that is a relatively trivial point. Within one single pain there can be several *effects* of that pain, including some strange ones that do not have counterparts in the vision-color case. For example, suppose that a searing hot pain caused a simultaneous triggering of the motivational system, forcing you to suddenly want to do something (like pulling your body part away from the pain). The feeling of wanting (wanting to pull away) is a quale of its own, in a sense, so it would not be impossible for one quale (searing hot) to always be associated with another (wanting to pull away). If those always occurred together, it might seem that there was structure to the pain experience, where in fact there is a pair of things happening. It is probably more than a pair of things, but perhaps you get my drift. Remember that having associations to a pain is not part of what we consider to be the essence of the subjective experience; the bit that is most mysterious and needs to be explained.
Another thing we have to keep in mind here is that the exact details of how each subjective experience feels are certainly going to seem different, and some can seem like each other and not like others: colors are like other colors, but not like pains. That is to be expected: we can say that colors happen in a certain place in our sensorium (vision) while pains are associated with the body (usually), but these differences are not inconsistent with the account I have given. If concept-atoms encoding [red] always attach to all the other concept-atoms involving visual experiences, that would make them very different than pains like [searing hot], but all of this could be true at the same time that [red] would do what it does to the analysis mechanism (when we try to think the thought What is the essence of redness?). So the problem with the analysis mechanism would happen with both pains and colors, even though the two different atom types played games with different sets of other concept-atoms. Richard Loosemore
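A small sketch may help with Richard's association point; the labels and companion sets below are invented for illustration only. Two atoms can both defeat the analysis mechanism while still feeling differently "structured", because each reliably co-activates a different set of companions:

```python
# Illustrative sketch only: association sets are invented, not the paper's.
ASSOCIATIONS = {
    "red": {"visual-field", "other-color-atoms"},
    "searing-hot": {"body-location", "wanting-to-pull-away"},
}

def apparent_structure(atom):
    # The felt "structure" is just the companion set; the atom itself
    # still defeats the analysis mechanism in both cases.
    return sorted(ASSOCIATIONS.get(atom, ()))

for atom in ASSOCIATIONS:
    print(f"{atom}: reliably co-occurs with {apparent_structure(atom)}")
```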
Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss has no grounding, no internal feedback, and no volition. By what definitions does it feel pain? Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns. There is no other requirement. You stated that machines can feel pain, and you stated that we don't get to decide which ones. So can you precisely define grounding, internal feedback and volition (as properties of Turing machines) and prove that these criteria are valid? And just to avoid confusion, my question has nothing to do with ethics. -- Matt Mahoney, [EMAIL PROTECTED]
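Matt's definition is concrete enough to implement. The sketch below is an invented illustration in the spirit of his autobliss example, not that program itself: a table-driven learner that is "punished" (negative reward) for wrong answers to AND. By the stated definition it feels pain; whether a reader accepts that conclusion is exactly what is in dispute in this thread.

```python
# Toy system meeting the definition "negative reinforcement in a system
# that learns", and nothing more. All details are invented for illustration.
import random

# weights[(a, b)] = probability of answering 1 for input pair (a, b)
weights = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}

def act(a, b):
    # Stochastic policy: output 1 with probability given by the weight.
    return 1 if random.random() < weights[(a, b)] else 0

def reinforce(a, b, out, reward):
    # Negative reward ("pain") pushes the policy away from the action taken;
    # positive reward pulls it toward that action.
    step = 0.05 * reward * (1 if out == 1 else -1)
    weights[(a, b)] = min(1.0, max(0.0, weights[(a, b)] + step))

# Teach it AND by punishing wrong answers and rewarding right ones.
for _ in range(5000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = act(a, b)
    reinforce(a, b, out, 1.0 if out == (a & b) else -1.0)

print({k: round(v, 2) for k, v in weights.items()})  # near 1.0 only at (1, 1)
```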
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss has no grounding, no internal feedback, and no volition. By what definitions does it feel pain? Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns. There is no other requirement. You stated that machines can feel pain, and you stated that we don't get to decide which ones. So can you precisely define grounding, internal feedback and volition (as properties of Turing machines) Clearly, this can be done, and has largely been done already ... though cutting and pasting or summarizing the relevant literature in emails would not be a productive use of time and prove that these criteria are valid? That is a different issue, as it depends on the criteria of validity, of course... I think one can argue that these properties are necessary for a finite-resources AI system to display intense systemic patterns correlated with its goal-achieving behavior in the context of diverse goals and situations. So, one can argue that these properties are necessary for **the sort of consciousness associated with general intelligence** ... but that's a bit weaker than saying they are necessary for consciousness (and I don't think they are) ben
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Clearly, this can be done, and has largely been done already ... though cutting and pasting or summarizing the relevant literature in emails would not be a productive use of time Apparently, it was Einstein who said that if you can't explain it to your grandmother then you don't understand it. Of course, he never had to argue on the Internet. Trent
Re: [agi] A paper that actually does solve the problem of consciousness
Richard, I re-read your paper and I'm afraid I really don't grok why you think it solves Chalmers' hard problem of consciousness... It really seems to me like what you're suggesting is a cognitive correlate of consciousness, to morph the common phrase neural correlate of consciousness ... You seem to be stating that when X is an unanalyzable, pure atomic sensation from the perspective of cognitive system C, then C will perceive X as a raw quale ... unanalyzable and not explicable by ordinary methods of explication, yet, still subjectively real... But, I don't see how the hypothesis Conscious experience is **identified with** unanalyzable mind-atoms could be distinguished empirically from Conscious experience is **correlated with** unanalyzable mind-atoms I think finding cognitive correlates of consciousness is interesting, but I don't think it constitutes solving the hard problem in Chalmers' sense... I grok that you're saying consciousness feels inexplicable because it has to do with atoms that the system can't explain, due to their role as its primitive atoms ... and this is a good idea, but, I don't see how it bridges the gap btw subjective experience and empirical data ... What it does is explain why, even if there *were* no hard problem, cognitive systems might feel like there is one, in regard to their unanalyzable atoms Another worry I have is: I feel like I can be conscious of my son, even though he is not an unanalyzable atom. I feel like I can be conscious of the unique impression he makes ... in the same way that I'm conscious of redness ... and, yeah, I feel like I can't fully explain the conscious impression he makes on me, even though I can explain a lot of things about him... So I'm not convinced that atomic sensor input is the only source of raw, unanalyzable consciousness... -- Ben G On Tue, Nov 18, 2008 at 5:14 PM, Richard Loosemore [EMAIL PROTECTED]wrote: Harry Chesley wrote: Richard Loosemore wrote: Harry Chesley wrote: Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf One other point: Although this is a possible explanation for our subjective experience of qualia like red or soft, I don't see it explaining pain or happy quite so easily. You can hypothesize a sort of mechanism-level explanation of those by relegating them to the older or lower parts of the brain (i.e., they're atomic at the conscious level, but have more effects at the physiological level (like releasing chemicals into the system)), but that doesn't satisfactorily cover the subjective side for me. I do have a quick answer to that one. Remember that the core of the model is the *scope* of the analysis mechanism. If there is a sharp boundary (as well there might be), then this defines the point where the qualia kick in. Pain receptors are fairly easy: they are primitive signal lines. Emotions are, I believe, caused by clusters of lower brain structures, so the interface between lower brain and foreground is the place where the foreground sees a limit to the analysis mechanisms. More generally, the significance of the foreground is that it sets a boundary on how far the analysis mechanisms can reach. I am not sure why that would seem less satisfactory as an explanation of the subjectivity. It is a raw feel, and that is the key idea, no? 
My problem is if qualia are atomic, with no differentiable details, why do some feel different than others -- shouldn't they all be separate but equal? Red is relatively neutral, while searing hot is not. Part of that is certainly lower brain function, below the level of consciousness, but that doesn't explain to me why it feels qualitatively different. If it was just something like increased activity (franticness) in response to searing hot, then fine, that could just be something like adrenaline being pumped into the system, but there is a subjective feeling that goes beyond that. There is more than one question wrapped up inside this question, I think. First: all qualia feel different, of course. You seem to be pointing to a sense in which pain is more different than most? But is that really a valid idea? Does pain have differentiable details? Well, there are different types of pain but that is to be expected, like different colors. But that is a relatively trivial point. Within one single pain there can be several *effects* of that pain, including some strange ones that do not have counterparts in the vision-color case. For example, suppose that a searing hot pain caused a simultaneous triggering of the motivational system, forcing you to suddenly want to do something (like pulling your body part away from the pain). The feeling of wanting (wanting to pull away) is a quale of its own,
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns. There is no other requirement. I made up no rules. I merely asked a question. You are the one who makes a definition like this and then says that it is up to people to decide whether other humans feel pain or not. That is hypocritical to an extreme. I also believe that your definition is a total crock that was developed for no purpose other than to support your BS. You stated that machines can feel pain, and you stated that we don't get to decide which ones. So can you precisely define grounding, internal feedback and volition (as properties of Turing machines) and prove that these criteria are valid? I stated that *SOME* future machines will be able to feel pain. I can define grounding, internal feedback and volition but feel no need to do so as properties of a Turing machine and decline to attempt to prove anything to you since you're so full of it that your mother couldn't prove to you that you were born. - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, November 18, 2008 6:26 PM Subject: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction) --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss has no grounding, no internal feedback, and no volition. By what definitions does it feel pain? Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns. There is no other requirement. You stated that machines can feel pain, and you stated that we don't get to decide which ones. So can you precisely define grounding, internal feedback and volition (as properties of Turing machines) and prove that these criteria are valid? And just to avoid confusion, my question has nothing to do with ethics. -- Matt Mahoney, [EMAIL PROTECTED]
Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)
I am just trying to point out the contradictions in Mark's sweeping generalizations about the treatment of intelligent machines Huh? That's what you're trying to do? Normally people do that by pointing to two different statements and arguing that they contradict each other. Not by creating new, really silly definitions and then trying to posit a universe where blue equals red so everybody is confused. But to be fair, such criticism is unwarranted. So exactly why are you persisting? Ethical beliefs are emotional, not rational, Ethical beliefs are subconscious and deliberately obscured from the conscious mind so that defections can be explained away without triggering other primate's lie-detecting senses. However, contrary to your antiquated beliefs, they are *purely* a survival trait with a very solid grounding. Ethical beliefs are also algorithmically complex Absolutely not. Ethical beliefs are actually pretty darn simple as far as the subconscious is concerned. It's only when the conscious rational mind gets involved that ethics are twisted beyond recognition (just like all your arguments). so the result of this argument could only result in increasingly complex rules to fit his model Again, absolutely not. You have no clue as to what my argument is yet you fantasize that you can predict its results. BAH! For the record, I do have ethical beliefs like most other people Yet you persist in arguing otherwise. *Most* people would call that dishonest, deceitful, and time-wasting. The question is not how should we interact with machines, but how will we? No, it isn't. Study the results on ethical behavior when people are convinced that they don't have free will. = = = = = BAH! I should have quit answering you long ago. No more. - Original Message - From: Matt Mahoney To: agi@v2.listbox.com Sent: Tuesday, November 18, 2008 7:58 PM Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction) Just to clarify, I'm not really interested in whether machines feel pain. I am just trying to point out the contradictions in Mark's sweeping generalizations about the treatment of intelligent machines. But to be fair, such criticism is unwarranted. Mark is arguing about ethics. Everyone has ethical beliefs. Ethical beliefs are emotional, not rational, although we often forget this. Ethical beliefs are also algorithmically complex, so the result of this argument could only result in increasingly complex rules to fit his model. It would be unfair to bore the rest of this list with such a discussion. For the record, I do have ethical beliefs like most other people, but they are irrelevant to the design of AGI. The question is not how should we interact with machines, but how will we? For example, when we develop the technology to simulate human minds in general, or to simulate specific humans who have died, common ethical models among humans will probably result in the granting of legal and property rights to these simulations. Since these simulations could reproduce, evolve, and acquire computing resources much faster than humans, the likely result will be human extinction, or viewed another way, our evolution into a non-DNA based life form. I won't offer an opinion on whether this is desirable or not, because my opinion would be based on my ethical beliefs.
-- Matt Mahoney, [EMAIL PROTECTED] --- On Tue, 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: From: Ben Goertzel [EMAIL PROTECTED] Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction) To: agi@v2.listbox.com Date: Tuesday, November 18, 2008, 6:29 PM On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss has no grounding, no internal feedback, and no volition. By what definitions does it feel pain? Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns. There is no other requirement. You stated that machines can feel pain, and you stated that we don't get to decide which ones. So can you precisely define grounding, internal feedback and volition (as properties of Turing machines) Clearly, this can be done, and has largely been done already ... though cutting and pasting or summarizing the relevant literature in emails would not be a productive use of time and prove that these criteria are valid? That is a different issue, as it depends on the criteria of validity, of course... I think one can argue that these properties are necessary for a
Re: [agi] A paper that actually does solve the problem of consciousness
Colin: right or wrong...I have a working physical model for consciousness. Just so. Serious scientific study of consciousness entails *models* not verbal definitions. The latter are quite hopeless. Richard opined that there is a precise definition of the hard problem of consciousness. There is no precise definition of any term AFAIK in philosophy, or language... consciousness, mind, problem-solving, senses, intelligence, etc. Every term is massively contested in philosophy - and often by the individual philosopher himself. See studies of how many ways Kuhn used the term paradigm.
Re: [agi] A paper that actually does solve the problem of consciousness
--- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote: I wrote: I think the reason that the hard question is interesting at all is that it would presumably be OK to torture a zombie because it doesn't actually experience pain, even though it would react exactly like a human being tortured. That's an ethical question. Ethics is a belief system that exists in our minds about what we should or should not do. There is no objective experiment you can do that will tell you whether any act, such as inflicting pain on a human, animal, or machine, is ethical or not. The only thing you can measure is belief, for example, by taking a poll. What is the point to ethics? The reason why you can't do objective experiments is because *YOU* don't have a grounded concept of ethics. The second that you ground your concepts in effects that can be seen in the real world, there are numerous possible experiments. How do you propose grounding ethics? I have a complex model that says some things are right and others are wrong. So does everyone else. These models don't agree. How do you propose testing whether a model is correct or not? If everyone agreed that torturing people was wrong, then torture wouldn't exist. The same is true of consciousness. The hard problem of consciousness is hard because the question is ungrounded. Define all of the arguments in terms of things that appear and matter in the real world and the question goes away. It's only because you invent ungrounded unprovable distinctions that the so-called hard problem appears. How do you prove that Richard's definition of consciousness is correct and Colin's is wrong, or vice versa? All you can say about either definition is that some entities are conscious and others are not, according to whichever definition you accept. But so what? Torturing a p-zombie is unethical because whether it feels pain or not is 100% irrelevant in the real world. If it 100% acts as if it feels pain, then for all purposes that matter it does feel pain. Why invent this mystical situation where it doesn't feel pain yet acts as if it does? Because people nevertheless make this arbitrary distinction in order to make ethical decisions. Torturing a p-zombie is only wrong according to some ethical models but not others. The same is true about doing animal experiments, or running autobliss with two negative arguments. If you ask people why they think so, a common response is that the things that it is not ethical to torture are conscious. Richard's paper attempts to solve the hard problem by grounding some of the silliness. It's the best possible effort short of just ignoring the silliness and going on to something else that is actually relevant to the real world. I agree. This whole irrelevant discussion of consciousness is getting tedious. -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] A paper that actually does solve the problem of consciousness
How do you propose grounding ethics? Ethics is building and maintaining healthy relationships for the betterment of all. Evolution has equipped us all with a good solid moral sense that frequently we don't/can't even override with our short-sighted selfish desires (that, more frequently than not, eventually end up screwing us over when we follow them). It's pretty easy to ground ethics as long as you realize that there are some cases that are just too close to call with the information that you possess at the time you need to make a decision. But then again, that's precisely what intelligence is -- making effective decisions under uncertainty. I have a complex model that says some things are right and others are wrong. That's nice -- but you've already pointed out that your model has numerous shortcomings such that you won't even stand behind it. Why do you keep bringing it up? It's like saying I have an economic theory when you clearly don't have the expertise to form a competent one. So does everyone else. These models don't agree. And lots of people have theories of creationism. Do you want to use that to argue that evolution is incorrect? How do you propose testing whether a model is correct or not? By determining whether it is useful and predictive -- just like what we always do when we're practicing science (as opposed to spouting BS). If everyone agreed that torturing people was wrong, then torture wouldn't exist. Wrong. People agree that things are wrong and then they go and do them anyways because they believe that it is beneficial for them. Why do you spout obviously untrue BS? How do you prove that Richard's definition of consciousness is correct and Colin's is wrong, or vice versa? All you can say about either definition is that some entities are conscious and others are not, according to whichever definition you accept. But so what? Wow! You really do practice useless sophistry. For definitions, correct simply means useful and predictive. I'll go with whichever definition most accurately reflects the world. Are you trying to propose that there is an absolute truth out there as far as definitions go? Because people nevertheless make this arbitrary distinction in order to make ethical decisions. So when lemmings go into the river you believe that they are correct and you should follow them? - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 9:35 AM Subject: Re: [agi] A paper that actually does solve the problem of consciousness --- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote: I wrote: I think the reason that the hard question is interesting at all is that it would presumably be OK to torture a zombie because it doesn't actually experience pain, even though it would react exactly like a human being tortured. That's an ethical question. Ethics is a belief system that exists in our minds about what we should or should not do. There is no objective experiment you can do that will tell you whether any act, such as inflicting pain on a human, animal, or machine, is ethical or not. The only thing you can measure is belief, for example, by taking a poll. What is the point to ethics? The reason why you can't do objective experiments is because *YOU* don't have a grounded concept of ethics. The second that you ground your concepts in effects that can be seen in the real world, there are numerous possible experiments. How do you propose grounding ethics? 
I have a complex model that says some things are right and others are wrong. So does everyone else. These models don't agree. How do you propose testing whether a model is correct or not? If everyone agreed that torturing people was wrong, then torture wouldn't exist.

The same is true of consciousness.

The hard problem of consciousness is hard because the question is ungrounded. Define all of the arguments in terms of things that appear and matter in the real world and the question goes away. It's only because you invent ungrounded unprovable distinctions that the so-called hard problem appears.

How do you prove that Richard's definition of consciousness is correct and Colin's is wrong, or vice versa? All you can say about either definition is that some entities are conscious and others are not, according to whichever definition you accept. But so what?

Torturing a p-zombie is unethical because whether it feels pain or not is 100% irrelevant in the real world. If it 100% acts as if it feels pain, then for all purposes that matter it does feel pain. Why invent this mystical situation where it doesn't feel pain yet acts as if it does?

Because people nevertheless make this arbitrary distinction in order to make ethical decisions. Torturing a p-zombie is only wrong according to some ethical models but not others. The same is true about doing animal experiments, or running
Re: [agi] A paper that actually does solve the problem of consciousness
John G. Rose wrote: From: Richard Loosemore [mailto:[EMAIL PROTECTED]]

Three things. First, David Chalmers is considered one of the world's foremost researchers in the consciousness field (he is certainly now the most celebrated). He has read the argument presented in my paper, and he has discussed it with me. He understood all of it, and he does not share any of your concerns, nor anything remotely like your concerns. He had one single reservation, on a technical point, but when I explained my answer, he thought it interesting and novel, and possibly quite valid. Second, the remainder of your comments below are not coherent enough to be answerable, and it is not my job to walk you through the basics of this field. Third, about your digression: gravity does not escape from black holes, because gravity is just the curvature of spacetime. The other things that cannot escape from black holes are not forces. I will not be replying to any further messages from you because you are wasting my time.

I read this paper several times and still have trouble holding the model that you describe in my head, as it fades quickly and then there is just a memory of it (recursive ADD?). I'm not up on the latest consciousness research but still somewhat understand what is going on there. Your paper is a nice and terse description, but getting others to understand the highlighted entity that you are trying to describe may be easier done with more diagrams. When I kind of got it for a second, it did appear quantitative, like something mathematically describable. I find it hard to believe, though, that others have not put it this way; I mean, doesn't Hofstadter talk about this in his books, in an unacademic fashion?

Hofstadter does talk about loopiness and recursion in ways that are similar, but the central idea is not the same. FWIW I did have a brief discussion with him about this at the same conference where I talked to Chalmers, and he agreed that his latest ideas about consciousness and the one I was suggesting did not seem to overlap. Richard Loosemore
Re: [agi] A paper that actually does solve the problem of consciousness
Ben Goertzel wrote: Sorry to be negative, but no, my proposal is not in any way a modernization of Peirce's metaphysical analysis of awareness.

Could you elaborate the difference? It seems very similar to me. You're saying that consciousness has to do with the bottoming-out of mental hierarchies in raw percepts that are unanalyzable by the mind ... and Peirce's Firsts are precisely raw percepts that are unanalyzable by the mind...

It is partly the stance (I arrive at my position from a cognitivist point of view, with specific mechanisms that must be causing the problem), whereas Peirce appears to suggest the Firsts idea as a purely metaphysical proposal. So, what I am saying is that the resemblance between his position and mine is so superficial that it makes no sense to describe the latter as a modernization of the former. A good analogy would be Galilean Relativity and Einstein's Relativity. Although there is a superficial resemblance, nobody would really say that Einstein was just a modernization of Galileo.

*** The standard meaning of Hard Problem issues was described very well by Chalmers, and I am addressing the hard problem of consciousness, not the other problems. ***

Hmmm... I don't really understand why you think your argument is a solution to the hard problem. It seems like you explicitly acknowledge in your paper that it's *not*, actually. It's more like a philosophical argument as to why the hard problem is unsolvable, IMO.

No, that is only part one of the paper, and as you pointed out before, the first part of the proposal ends with a question, not a statement that this was a failure to explain the problem. That question was important. The important part is the analysis of explanation and meaning. This can also be taken to be about your use of the word unsolvable in the above sentence.

What I am claiming (and I will make this explicit in a revision of the paper) is that these notions of explanation, meaning, solution to the problem, etc., are pushed to their breaking point by the problem of consciousness. So it is not that there is a problem with understanding consciousness itself, so much as there is a problem with what it means to *explain* things. Other things are easy to explain, but when we ask for an explanation of something like consciousness, the actual notion of explanation breaks down in a drastic way. This is very closely related to the idea of an objective observer in physics: in the quantum realm, that notion breaks down.

What I gave in my paper was (a) a detailed description of how the confusion about consciousness arises [the peculiar behavior of the analysis mechanism], but then (b) I went on to point out that this peculiar behavior infects much more than just our ability to explain consciousness, because it casts doubt on the fundamental meaning of explanation and semantics and ontology. The conclusion that I then tried to draw was that it would be wrong to say that consciousness was just an artifact or an (ordinarily) inexplicable thing, because this would be to tacitly assume that the sense of explain that we are using in these statements is the same one we have always used. Anyone who continued to use explain and mean (etc.) in their old context would be stuck in what I have called Level 0, and at that level the old meanings of those terms are just not able to address the issue of consciousness.
Go back to the quantum mechanics analogy again: it is not right to cling to old ideas of position and momentum, etc., and say that we simply do not know the position of an electron. The real truth - the new truth about how we should understand position and momentum - is that the position of the electron is fundamentally not even determined (without observation). This analogy is not just an analogy, as I think you might begin to guess: there is a deep relationship between these two domains, and I am still working on a way to link them. Richard Loosemore.
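For reference, the indeterminacy being appealed to here has a standard formal statement, the Heisenberg uncertainty relation (a textbook fact, added for context), which in LaTeX reads:

\sigma_x \, \sigma_p \ge \frac{\hbar}{2}

where \sigma_x and \sigma_p are the standard deviations of position and momentum, and \hbar is the reduced Planck constant. No quantum state assigns sharp values to both quantities at once, which is the formal counterpart of the point that the old classical notions cannot simply be retained.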
Zombies, Autism and Consciousness [WAS: Re: [agi] A paper that actually does solve the problem of consciousness]
Trent Waddington wrote: Richard, After reading your paper and contemplating the implications, I believe you have done a good job at describing the intuitive notion of consciousness that many lay-people use the word to refer to. I don't think your explanation is fleshed out enough for those lay-people, but it's certainly sufficient for most of the people on this list. I would recommend that anyone who hasn't read the paper, and has an interest in this whole consciousness business, give it a read. I especially liked the bit where you describe how the model of self can't be defined in terms of anything else... as it is inherently recursive. I wonder whether the dynamic updating of the model of self may well be exactly the subjective experience of consciousness that people describe. If so, the notion of a p-zombie is not impossible, as you suggest in your conclusions, but simply an AGI without a self-model.

This is something that does intrigue me (the different kinds of self-model that could be in there), but I come to slightly different conclusions. I think someone (Putnam, IIRC) pointed out that you could still have consciousness without the equivalent of any references to self and others, because such a creature would still be experiencing qualia. But, that aside, do you not think that a creature with absolutely no self model at all would have some troubles? It would not be able to represent itself in the context of the world, so it would be purely reactive. But wait: come to think of it, could it actually control any limbs if it did not have some kind of model of itself? (See the sketch after this message.)

Now, suppose you grant me that all AGIs would have at least some model of self (if only to control a single robot arm): then, if the rest of the cognitive mechanism allows it to think in a powerful and recursive way about the contents of its own thought processes (which I have suggested is one of the main preconditions for being conscious, or even being AG-Intelligent), would it not be difficult to stop it from developing a more general model of itself than just the simple self model needed to control the robot arm? We might find that any kind of self model would be a slippery slope toward a bigger self model.

Finally, consider the case of humans with severe Autism. One suggestion is that they have a very poorly developed, or suppressed, self model. I would be *extremely* reluctant to think that these humans are p-zombies, just because of that. I know that is a gut feeling, but even so.

Finally, the introduction says: Given the strength of feeling on these matters - for example, the widespread belief that AGIs would be dangerous because, as conscious beings, they would inevitably rebel against their lack of freedom - it is incumbent upon the AGI community to resolve these questions as soon as possible. I was really looking forward to seeing you address this widespread belief, but unfortunately you declined. Seems a bit of a tease. Trent

Oh, I apologize. :-( I started out with the intention of squeezing into the paper a description of the consciousness proposal PLUS my parallel proposal about AGI motivation and emotion. It became obvious toward the end that I would not be able to say anything about the latter (I barely had enough room for a terse description of the former). But then I explained instead that this was part of a larger research program to cover issues of motivation, emotion and friendliness.
I guess that wording did not really make up for the initial tease, so I'll try to rephrase that in the edited version. And I will also try to get the motivation and friendliness paper written asap, to complement this one. Richard Loosemore
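On the robot-arm point above: even minimal limb control plausibly requires some model of self, if only a model of the limb's own geometry. A toy sketch in Python (illustrative only, not from the paper; the link length, names, and gain are invented for the example):

import math

LINK_LENGTH = 1.0   # the agent's model of its own arm geometry

def hand_position(joint_angle):
    """Forward kinematics: where the self-model predicts the hand is."""
    return (LINK_LENGTH * math.cos(joint_angle),
            LINK_LENGTH * math.sin(joint_angle))

def step_toward(joint_angle, target_angle, gain=0.5):
    """Simple proportional controller that relies on the self-model."""
    return joint_angle + gain * (target_angle - joint_angle)

angle = 0.0
for _ in range(10):
    angle = step_toward(angle, math.pi / 2)

print(hand_position(angle))   # close to (0.0, 1.0): the hand points up

Even this trivial controller embodies a minimal self-model: the constant LINK_LENGTH and the kinematic function are facts about the agent's own body, not about the outside world.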
Re: [agi] A paper that actually does solve the problem of consciousness
Benjamin Johnston wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at:

Hi Richard, I don't have any comments yet about what you have written, because I'm not sure I fully understand what you're trying to say... I hope your answers to these questions will help clarify things.

It seems to me that your core argument goes something like this: that there are many concepts for which an introspective analysis can only return the concept itself; that this recursion blocks any possible explanation; that consciousness is one of these concepts because self is inherently recursive; therefore, consciousness is explicitly blocked from having any kind of explanation. Is this correct? If not, how have I misinterpreted you?

This is pretty much accurate, but only up to the end of the first phase of the paper, where I asked the question: Is explaining why we cannot explain something the same as explaining it? The next phase is crucial, because (as I explained a little more in my parallel reply to Ben) the conclusion of part 1 is really that the whole notion of 'explanation' is stretched to breaking point by the concept of consciousness. So in the end what I do is argue that the whole concept of explanation (and meaning, etc.) has to be replaced in order to deal with consciousness. Eventually I come to a rather strange-looking conclusion, which is that we are obliged to say that consciousness is a real thing like any other in the universe, but the exact content of it (the subjective core) is truly inexplicable.

I have a thought experiment that might help me understand your ideas: If we have a robot designed according to your molecular model, and we then ask the robot what exactly is the nature of red, or what it is like to experience the subjective essence of red, the robot may analyze this concept, ultimately bottoming out on an incoming signal line. But what if this robot is intelligent and can study other robots? It might then examine other robots and see that when their analysis bottoms out on an incoming signal line, what actually happens is that the incoming signal line is activated by electromagnetic energy of a certain frequency, that the object recognition routines identify patterns in signal lines, that when an object is identified it gets annotated with texture and color information from its sensations, and that a particular software module injects all that information into the foreground memory. It might conclude that the experience of experiencing red in the other robot is to have sensors inject atoms into foreground memory, and it could then explain how the current context of that robot's foreground memory interacts with the changing sensations (that have been injected into foreground memory) to make that experience 'meaningful' to the robot. What if this robot then turns its inspection abilities onto itself? Can it therefore further analyze red? How does your theory interpret that situation? -Ben

Ahh, but that *is* the way that my theory analyzes the situation, no? :-) What I mean is, I would use a human (me) in place of the first robot. Bear in mind that we must first separate out the hard problem (the pure subjective experience of red) from any easy problems (mere radiation sensitivity, etc.).
From the point of view of that first robot, what will she get from studying the second robot (other robots in general), if the question she really wants to answer is: What is the explanation for *my* subjective experience of redness? She could talk all about the foreground and the way the analysis mechanism works in other robots (and humans), but the question is, what would that avail her if she wanted to answer the hard problem of where her subjective conscious experience comes from? After reading the first part of my paper, she would say (I hope!): Ah, now I see how all my questions about the subjective experience of things are actually caused by my analysis mechanism doing something weird. But then (again, I hope) she would say: Hmm, does it meta-explain my subjective experiences if I know why I cannot explain these experiences? And thence to part two of the paper. Richard Loosemore
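To make the bottoming-out step concrete, here is a minimal sketch in Python (an illustration of the general idea only, not code from the paper; the concept names and graph structure are invented for the example). A composite concept unpacks into constituents; a raw percept has none, so the analysis mechanism can only hand back the concept itself:

# Toy analysis mechanism. Composite concepts decompose into parts;
# raw percepts like "red" map straight to an incoming signal line,
# so analysis of them "bottoms out" and returns the concept itself.

CONCEPT_GRAPH = {
    "apple": ["red", "round", "fruit"],
    "round": ["shape", "curvature"],
    # "red" is deliberately absent: it has no further constituents.
}

def analyze(concept):
    """Return a concept's constituents, or the concept itself
    when the analysis bottoms out on a raw percept."""
    constituents = CONCEPT_GRAPH.get(concept)
    if constituents is None:
        return [concept]        # bottoming out: red is ... red
    return constituents

print(analyze("apple"))   # ['red', 'round', 'fruit']
print(analyze("red"))     # ['red'] -- analysis returns its own input

The second call is the situation both robots (and we) are in when asked about the nature of red: the mechanism that answers every other "what is X?" question has nothing to return but its input.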
Re: [agi] A paper that actually does solve the problem of consciousness
Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that, right or wrong... I have a working physical model for consciousness. Predictions 1-3 are something that my hardware can do easily. In fact that kind of experimentation is in my downstream implementation plan. These predictions have nothing whatsoever to do with your theory or mine or anyone's. I'm not sure about prediction 4. It's not something I have thought about, so I'll leave it aside for now. In my case, in the second stage of testing of my chips, one of the things I want to do is literally 'Mind Meld', forming a bridge of 4 sets of compared, independently generated qualia. Ultimately the chips may be implantable, which means a human could experience what they generate in the first person... but I digress.

Your statement This theory of consciousness can be used to make some falsifiable predictions could be replaced by ANY theory of consciousness can be used to make falsifiable predictions 1..4 as follows... which basically says they are not predictions that falsify anything at all. In which case the predictions cannot be claimed to support your theory. The problem is that the evidence of predictions 1-4 acts merely as a correlate. It does not test any particular critical dependency (causality origins). The predictions are merely correlates of any theory of consciousness. They do not test the causal necessities. In any empirical science paper the evidence could not be held in support of the claim, and it would be discounted as evidence of your mechanism. I could cite 10 different computationalist AGI knowledge metaphors in the sections preceding the 'predictions' and the result would be the same. So if I were a reviewer, I'd be unable to accept the claim that your 'predictions' actually said anything about the theory preceding them. This would seem to be the problematic issue of the paper. You might want to take a deeper look at this issue and try to isolate something unique to your particular solution - which has a real critical dependency in it. Then you'll have an evidence base of your own that people can use independently. In this way your proposal could be seen to be scientific in the dry empirical sense.

By way of example... a computer program is not scientific evidence of anything. The computer materials, as configured by the program, actually causally necessitate the behaviour. The program is a correlate. A correlate has the formal evidentiary status of 'hearsay'. This is the sense in which I invoke the term 'correlate' above. BTW I have fallen foul of this problem myself... I had to look elsewhere for real critical dependency, like I suggested above. You never know, you might find one in there someplace! I found one after a lot of investigation. You might, too. Regards, Colin Hales

Okay, let me phrase it like this: I specifically say (or rather I should have done... this is another thing I need to make more explicit!) that the predictions are about making alterations at EXACTLY the boundary of the analysis mechanisms. So, when we test the predictions, we must first understand the mechanics of human (or AGI) cognition well enough to be able to locate the exact scope of the analysis mechanisms. Then, we make the tests by changing things around just outside the reach of those mechanisms. Then we ask subjects (human or AGI) what happened to their subjective experiences.
If the subjects are ourselves - which I strongly suggest must be the case - then we can ask ourselves what happened to our subjective experiences. My prediction is that if the swaps are made at that boundary, then things will be as I state. But if changes are made within the scope of the analysis mechanisms, then we will not see those changes in the qualia. So the theory could be falsified if changes in the qualia are NOT consistent with the theory, when changes are made at different points in the system. The theory is all about the analysis mechanisms being the culprit, so in that sense it is extremely falsifiable. Now, correct me if I am wrong, but is there anywhere else in the literature where you have seen anyone make a prediction that the qualia will be changed by the alteration of a specific mechanism, but not by other, fairly similar alterations? Richard Loosemore
Re: [agi] A paper that actually does solve the problem of consciousness
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: How do you propose testing whether a model is correct or not? By determining whether it is useful and predictive -- just like what we always do when we're practicing science (as opposed to spouting BS).

An ethical model tells you what is good or bad. It does not make useful predictions. -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] A paper that actually does solve the problem of consciousness
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote: What I am claiming (and I will make this explicit in a revision of the paper) is that these notions of explanation, meaning, solution to the problem, etc., are pushed to their breaking point by the problem of consciousness. So it is not that there is a problem with understanding consciousness itself, so much as there is a problem with what it means to *explain* things.

Yes, that is because we are asking the wrong questions. For example:

Not: should we do experiments on animals? Instead: will we do experiments on animals?

Not: can computers think? Instead: can computers behave in a way that is indistinguishable from human?

-- Matt Mahoney, [EMAIL PROTECTED]
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: For example, in fifty years, I think it is quite possible we will be able to say with some confidence if certain machine intelligences we design are conscious or not, and whether their pain is as real as the pain of another type of animal, such as chimpanzee, dog, bird, reptile, fly, or amoeba.

No it won't, because people are free to decide what makes pain real. The question is not resolved for simple systems which are completely understood, for example, the 302-neuron nervous system of C. elegans. If it can be trained by reinforcement learning, is that real pain? What about autobliss? It learns to avoid negative reinforcement and it says ouch. Do you really think that if we build AGI in the likeness of a human mind, and stick it with a pin and it says ouch, that we will finally have an answer to the question of whether machines have a consciousness?

And there is no reason to believe the question will be easier in the future. 100 years ago there was little controversy over animal rights, euthanasia, abortion, or capital punishment. Do you think that the addition of intelligent robots will make the boundary between human and non-human any sharper? -- Matt Mahoney, [EMAIL PROTECTED]
Re: [agi] A paper that actually does solve the problem of consciousness
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote: Okay, let me phrase it like this: I specifically say (or rather I should have done... this is another thing I need to make more explicit!) that the predictions are about making alterations at EXACTLY the boundary of the analysis mechanisms. [...]

Your predictions are not testable. How do you know if another person has experienced a change in qualia, or is simply saying that they do? If you do the experiment on yourself, how do you know if you really experience a change in qualia, or only believe that you do? There is a difference, you know. Belief is only a rearrangement of your neurons. I have no doubt that if you did the experiments you describe, the brains would be rearranged consistently with your predictions. But what does that say about consciousness? -- Matt Mahoney, [EMAIL PROTECTED]
Dan Dennett [WAS Re: [agi] A paper that actually does solve the problem of consciousness]
Ben Goertzel wrote: Ed, BTW on this topic my view seems closer to Richard's than yours, though not anywhere near identical to his either. Maybe I'll write a blog post on consciousness to clarify, it's too much for an email... I am very familiar with Dennett's position on consciousness, as I'm sure Richard is, but I consider it a really absurd and silly argument. I'll clarify in a blog post sometime soon, but I don't have time for it now. Anyway, arguing that experience basically doesn't exist, which is what Dennett does, certainly doesn't solve the hard problem as posed by Chalmers ... it just claims that the hard problem doesn't exist... ben

Agreed. I like Dennett's analytical style in many ways, but I was disappointed when I realized where he was going with the multiple drafts account. He falls into a classic trap. Chalmers says: Whooaa! There is a big, 3-part problem here: (1) we can barely even define what we mean by consciousness, (2) that fact of its indefinability seems almost intrinsic to the definition of it!, and yet (3) most of us are convinced that there is something significant that needs to be explained here. So Chalmers is *pointing* at the dramatic conjunction of these three things (inexplicability, inexplicability that seems intrinsic to the definition, and the need for an explanation), and he is saying that these three combined make a very, very hard problem. But then what Dennett does is walk right up and say: Whooaa! There is a big problem here: (1) you can barely even define what you mean by consciousness, so you folks are just confused.

Chalmers is trying to get Dennett to go upstairs and look at the problem from a higher perspective, but Dennett digs in his heels and insists on looking at the problem *only* from the ground-floor level. He can only see the fact that there is a problem with defining it; he cannot see that this problem is itself interesting. What I have tried to do is take it one step further and say that if we understand the nature of the confusion we can actually resolve it (albeit in a weird kind of way). Richard Loosemore
Re: [agi] A paper that actually does solve the problem of consciousness
I have no doubt that if you did the experiments you describe, the brains would be rearranged consistently with your predictions. But what does that say about consciousness?

What are you asking about consciousness?

- Original Message - From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 1:11 PM Subject: Re: [agi] A paper that actually does solve the problem of consciousness [...]
Re: [agi] A paper that actually does solve the problem of consciousness
On 11/14/2008 9:27 AM, Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

Good paper. A related question: How do you explain the fact that we sometimes are aware of qualia and sometimes not? You can perform the same actions paying attention or on auto pilot. In one case, qualia manifest, while in the other they do not. Why is that?
Re: [agi] A paper that actually does solve the problem of consciousness
Matt Mahoney wrote: --- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote: [...] Your predictions are not testable. How do you know if another person has experienced a change in qualia, or is simply saying that they do? If you do the experiment on yourself, how do you know if you really experience a change in qualia, or only believe that you do? There is a difference, you know. Belief is only a rearrangement of your neurons. I have no doubt that if you did the experiments you describe, the brains would be rearranged consistently with your predictions. But what does that say about consciousness?

Yikes, whatever happened to the incorrigibility of belief?! You seem to have a bone or two to pick with Descartes: please don't ask me! Richard Loosemore
Re: [agi] A paper that actually does solve the problem of consciousness
Harry Chesley wrote: On 11/14/2008 9:27 AM, Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

Good paper. A related question: How do you explain the fact that we sometimes are aware of qualia and sometimes not? You can perform the same actions paying attention or on auto pilot. In one case, qualia manifest, while in the other they do not. Why is that?

I actually *really* like this question: I was trying to compose an answer to it while lying in bed this morning. This is what I started referring to (in a longer version of the paper) as a Consciousness Holiday. In fact, if we start unpacking the idea of what we mean by conscious experience, we start to realize that it only really exists when we look at it. It is not even logically possible to think about consciousness - any form of it, including *memories* of the consciousness that I had a few minutes ago, when I was driving along the road and talking to my companion without bothering to look at several large towns that we drove through - without applying the analysis mechanism to the consciousness episode.

So when I don't remember anything about those towns, from a few minutes ago on my road trip, is it because (a) the attentional mechanism did not bother to lay down any episodic memory traces, so I cannot bring back the memories and analyze them, or (b) I was actually not experiencing any qualia during that time when I was on autopilot? I believe that the answer is (a), and that IF I had stopped at any point during the observation period and thought about the experience I just had, I would be able to appreciate the last few seconds of subjective experience.

The real reply to your question goes much much deeper, and it is fascinating, because we need to get a handle on creatures that probably do not do any reflective, language-based philosophical thinking (like guinea pigs and crocodiles). I want to say more, but will have to set it down in a longer form. Does this seem to make sense so far, though? Richard Loosemore
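A toy way to picture the difference between (a) and (b) in code (an illustrative sketch only, nothing from the paper; the Agent class and all names are invented): perception runs in both conditions, but an episodic trace is written only when attention is engaged, so later recall comes back empty even though the processing happened.

# Toy model of option (a): percepts are processed with or without
# attention, but episodic memory traces are laid down only when
# attention is engaged. Later recall then finds nothing to analyze,
# even though the processing itself took place.

class Agent:
    def __init__(self):
        self.episodic_memory = []

    def perceive(self, percept, attending):
        processed = "processed(" + percept + ")"   # runs on autopilot too
        if attending:
            self.episodic_memory.append(processed)
        return processed

    def recall(self):
        return list(self.episodic_memory)

agent = Agent()
agent.perceive("town_1", attending=False)   # autopilot: no trace stored
agent.perceive("town_2", attending=True)    # attended: trace stored
print(agent.recall())                       # only 'processed(town_2)'

On option (a), the perceive call ran for town_1 as well; what is missing afterwards is only the stored trace that the analysis mechanism would need in order to revisit the episode.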
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: No it won't, because people are free to decide what makes pain real. What? You've got to be kidding . . . . What makes pain real is how the sufferer reacts to it -- not some abstract wishful thinking that we use to justify our decisions of how we wish to behave.

Autobliss responds to pain by changing its behavior to make it less likely. Please explain how this is different from human suffering. And don't tell me it's because one is human and the other is a simple program, because...

Do you think that the addition of intelligent robots will make the boundary between human and non-human any sharper? No, I think that it will make it much fuzzier . . . . but since the boundary is just a strawman for lazy thinkers, removing it will actually make our ethics much sharper.

So either pain is real to both, or to neither, or there is some other criterion which you haven't specified, in which case I would like to know what that is. -- Matt Mahoney, [EMAIL PROTECTED]
RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction
Matt, First, it is not clear people are free to decide what makes pain real, at least subjectively real. If I zap you with a horrible electric shock of the type Saddam Hussein might have used when he was the chief interrogator/torturer of Iraq's Baathist party, it is not clear exactly how much freedom you would have to decide how subjectively real the resulting pain would seem to you --- that is, unless you had a level of mental control far beyond that of most humans.

You indicate we currently don't know the degree of consciousness or pain that would be suffered by a certain organism with 302 neurons. I agree. Our understanding of the physical correlates of consciousness is still relatively limited, but it is rapidly increasing. I think it is probable that consciousness comes in various degrees, and it is possible that all of physical reality has a form of consciousness, just one that lacks many of the attributes of a human consciousness. A 302-neuron nervous system may have a type of consciousness, but it is my belief it would be one so much less rich and complex than that supported by the 100,000,000,000 neurons of a human brain that it is not only different in degree but also extremely different in kind.

I understand I am making a statement based on belief when I predict we will make great strides in understanding the physical correlates of consciousness in the coming fifty years. But there are already a number of studies shedding light on that subject. If we have anything close to the advances in brain scanning and brain science that Kurzweil predicts [1], we should come to understand the correlates of consciousness quite well --- so well, in fact, that we should have pretty good, although not necessarily complete, explanations for the various facets of Chalmers' hard problem of consciousness. That is, we will come to understand that consciousness is created largely or entirely by computations in physical reality, and we will develop a fairly broad understanding of what type of physical computations yield what types of subjective conscious experience. With this knowledge we would be better able to understand the physical correlates of conscious pain, and, thus, better estimate the probability that various humans, animals, or machines will suffer something like pain under what circumstances.

The hard problem of consciousness is based on the assumption --- or at least the question whether --- consciousness has aspects that are separate from the physical world. As we increasingly learn more about the physical correlates of consciousness, I think the scope of the hard problem will increasingly diminish. Yes, there are things about consciousness that we cannot clearly define in terms of physical computations at this point in time, but it is not clear that will always be the case. Just as life is created to various degrees of complexity out of bio-chemical computations, I think human consciousness will be shown to be created to various degrees of complexity out of neurological computations. It is conceivable that the properties of other levels of reality will be required to explain some physical correlates of consciousness, such as quantum entanglement or quantum weirdness. I think future study will probably tell us if this is necessary. But ultimately there will always be limits to our knowledge. We have no ultimate way of knowing with total certainty that our perceptions of reality are anything other than an illusion.
I agree with Richard's paper when it points out the often repeated statement that our subjective experiences are the most real things we have. But just because they are subjective to us now does not necessarily mean that they are largely beyond the scope of human and AGI assisted science. Ed Porter

[1] Kurzweil has claimed we will be able to so accurately scan and model an individual human mind that we will be able to create a virtually exact duplicate of it, including its consciousness, its memories, its passions, etc. I personally think that is unlikely within 50 years. But I think that the combination of brain science and AGI will allow us to understand the mysteries of the hard problem of consciousness much better in fifty years than we do today.

-Original Message- From: Matt Mahoney [mailto:[EMAIL PROTECTED]] Sent: Monday, November 17, 2008 12:44 PM To: agi@v2.listbox.com Subject: Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction [...]
Re: [agi] A paper that actually does solve the problem of consciousness
An excellent question from Harry . . . .

So when I don't remember anything about those towns, from a few minutes ago on my road trip, is it because (a) the attentional mechanism did not bother to lay down any episodic memory traces, so I cannot bring back the memories and analyze them, or (b) I was actually not experiencing any qualia during that time when I was on autopilot? I believe that the answer is (a), and that IF I had stopped at any point during the observation period and thought about the experience I just had, I would be able to appreciate the last few seconds of subjective experience.

So . . . . what if the *you* that you/we speak of is simply the attentional mechanism? What if qualia are simply the way that other brain processes appear to you/the attentional mechanism? Why would you be experiencing qualia when you were on autopilot? It's quite clear from experiments that humans don't see things in their visual field when they are concentrating on other things in their visual field (for example, when you are told to concentrate on counting something that someone is doing in the foreground while a man in an ape suit walks by in the background). Do you really have qualia from stuff that you don't sense (even though your sensory apparatus picked it up, it was clearly discarded at some level below the conscious/attentional level)?

- Original Message - From: Richard Loosemore [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, November 17, 2008 1:46 PM Subject: Re: [agi] A paper that actually does solve the problem of consciousness [...]
Re: [agi] A paper that actually does solve the problem of consciousness
Thanks Richard ... I will re-read the paper with this clarification in mind. On the face of it, I tend to agree that the concept of explanation is fuzzy and messy and probably is not, in its standard form, useful for dealing with consciousness. However, I'm still uncertain as to whether your deconstruction and reconstruction of the notion of explanation counts as (a) a solution of Chalmers' hard problem, or (b) an explanation of why Chalmers' hard problem is ill-posed. I'll reflect on this more as I re-read the paper... ben

On Mon, Nov 17, 2008 at 8:38 AM, Richard Loosemore [EMAIL PROTECTED] wrote: [...]

-- Ben Goertzel,
RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: First, it is not clear people are free to decide what makes pain real, at least subjectively real.

I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye with a needle (the standard way to draw blood) even though it squirms just like a human would. It is surprisingly easy to modify one's ethics to feel this way, as proven by the Milgram experiments and Nazi war crime trials.

If we have anything close to the advances in brain scanning and brain science that Kurzweil predicts, we should come to understand the correlates of consciousness quite well

No. I used examples like autobliss ( http://www.mattmahoney.net/autobliss.txt ) and the roundworm C. elegans as examples of simple systems whose functions are completely understood, yet the question of whether such systems experience pain remains a philosophical question that cannot be answered by experiment. -- Matt Mahoney, [EMAIL PROTECTED]
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye with a needle (the standard way to draw blood) even though it squirms just like a human would. It is surprisingly easy to modify one's ethics to feel this way, as proven by the Milgram experiments and Nazi war crime trials.

I'm sure you're not meaning to suggest that scientists commonly rationalize in this way, nor that they are all Nazi war criminals for experimenting on animals. I feel the need to remind people that animal rights is a fringe movement that does not represent the views of the majority. We experiment on animals because the benefits, to humans, are considered worthwhile. Trent
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
There are procedures in place for experimenting on humans. And the biologies of people and animals are orthogonal! Much of this will be simulated soon. On 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote: [...]
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: ""Autobliss responds to pain by changing its behavior to make it less likely. Please explain how this is different from human suffering. And don't tell me it's because one is human and the other is a simple program, because..." Why don't you resend the link to this new autobliss that responds to pain by changing its behavior to make it less likely, and clearly explain why what you refer to as pain for autobliss isn't just some ungrounded label that has absolutely nothing to do with pain in any real sense of the word. As far as I have seen, your autobliss argument is akin to claiming that a rock feels pain and runs away to avoid pain when I kick it." "So either pain is real to both, or to neither, or there is some other criterion which you haven't specified, in which case I would like to know what that is." Absolutely. Pain is real for both. autobliss: http://www.mattmahoney.net/autobliss.txt By pain I mean any signal that has the effect of negative reinforcement, such that a learning system will modify its behavior to reduce the expected accumulated sum of that signal according to its model. In the AIXI model, pain is the negative of the reward signal. Kicking a rock or cutting down a tree does not inflict pain because rocks and trees don't learn. -- Matt Mahoney, [EMAIL PROTECTED]
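Mahoney's definition is operational enough to sketch in code. The following is a minimal illustration of the definition only, not the actual autobliss program (whose source is at the URL above); every name in it is invented for the example. It shows a tiny learner for which "pain" is a negative reinforcement signal, and which modifies its behavior so as to reduce the expected accumulated sum of that signal.

import math
import random

class TinyLearner:
    """Learns a 2-input boolean function from reward/punishment signals."""
    def __init__(self):
        # One adjustable weight per input pair; a higher weight makes
        # output 1 more likely for that pair.
        self.weights = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}

    def act(self, a, b):
        # Stochastic policy: sigmoid of the weight gives P(output = 1).
        p = 1.0 / (1.0 + math.exp(-self.weights[(a, b)]))
        return 1 if random.random() < p else 0

    def reinforce(self, a, b, action, signal):
        # A negative signal ("pain") pushes the policy away from the
        # action just taken; a positive one pushes toward it. Over many
        # trials this reduces the expected accumulated pain signal.
        direction = 1.0 if action == 1 else -1.0
        w = self.weights[(a, b)] + 0.5 * signal * direction
        # Clamp so the sigmoid stays numerically well behaved.
        self.weights[(a, b)] = max(-8.0, min(8.0, w))

# Train on XOR: punish wrong answers (-1), reward correct ones (+1).
learner = TinyLearner()
for _ in range(5000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = learner.act(a, b)
    learner.reinforce(a, b, out, 1.0 if out == (a ^ b) else -1.0)

# The learned behavior should now match a XOR b almost every time.
print([(a, b, learner.act(a, b)) for a in (0, 1) for b in (0, 1)])

Whether nudging four weights in a lookup table amounts to anything like felt pain is, of course, exactly what the rest of the thread disputes.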
RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction
Matt, With regard to your first point, I largely agree. I would, however, qualify it with the observation that many of us find it hard not to sympathize with people or animals, such as a dog, when we directly sense outward signs that they are in terrible pain, unless we harbor enough hatred toward them to override our natural tendency toward sympathy. Some attribute this to mirror neurons and to the fact that we evolved as tribal, social animals. With regard to the second point, your statement does not refute my point, although my point is admittedly based on a belief that is far from certain. Our understanding of the physical (such as neural) correlates of consciousness is currently so limited that it does not yet let us say much about the consciousness, or lack thereof, of the systems you describe, even if those systems are completely understood in every respect other than the physical correlates of consciousness, knowledge we currently lack but will have within fifty years. But from what little we do understand about the neural correlates of consciousness, it does not seem that either system you describe would have anything approaching a human consciousness, and thus a human experience of pain, since they lack the type of computation normally associated with human reports of conscious experience. Ed Porter -Original Message- From: Matt Mahoney [mailto:[EMAIL PROTECTED] Sent: Monday, November 17, 2008 4:45 PM To: agi@v2.listbox.com Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction [...]
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote: "I'm sure you're not meaning to suggest that scientists commonly rationalize in this way, nor that they are all Nazi war criminals for experimenting on animals. [...] We experiment on animals because the benefits, to humans, are considered worthwhile." I am not taking a position on whether inflicting pain on animals (or people, or machines) is right or wrong. That is an ethical question. Ethics is a system of beliefs that varies from one person to another. There is no such thing as a correct model, although everyone believes there is. All we can say is that some models work better than others, as measured by individual or group survival. -- Matt Mahoney, [EMAIL PROTECTED]
Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction
--- On Mon, 11/17/08, Eric Burton [EMAIL PROTECTED] wrote: "There are procedures in place for experimenting on humans. And the biologies of people and animals are orthogonal! Much of this will be simulated soon." When we start simulating people, there will be ethical debates about that. And there are no procedures in place. -- Matt Mahoney, [EMAIL PROTECTED]
RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction
Before you can start searching for consciousness, you need to describe precisely what you are looking for. -- Matt Mahoney, [EMAIL PROTECTED] --- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: From: Ed Porter [EMAIL PROTECTED] Subject: RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction To: agi@v2.listbox.com Date: Monday, November 17, 2008, 5:15 PM [...]
Re: [agi] A paper that actually does solve the problem of consciousness
Richard Loosemore wrote: Harry Chesley wrote: "A related question: how do you explain the fact that we are sometimes aware of qualia and sometimes not? You can perform the same actions paying attention or on autopilot. In one case qualia manifest, while in the other they do not. Why is that?" "I actually *really* like this question: I was trying to compose an answer to it while lying in bed this morning. ... So when I don't remember anything about those towns from a few minutes ago on my road trip, is it because (a) the attentional mechanism did not bother to lay down any episodic memory traces, so I cannot bring back the memories and analyze them, or (b) I was actually not experiencing any qualia during that time when I was on autopilot? I believe the answer is (a), and that if I had stopped at any point during the observation period and thought about the experience I just had, I would have been able to appreciate the last few seconds of subjective experience. ... Does this seem to make sense so far, though?" It sounds reasonable. I would suspect (a) also, and that the reason is that these are circumstances where remembering is a waste of resources, either because the task being done on autopilot is so well understood that it won't need to be analyzed later, and/or because another task in the works at the same time has more need for the memory resources. Note that your supposition about remembering the last few seconds if interrupted during an autopilot task is fairly easily verifiable by experiment.