Re: Edge: Myth of A.I.
On 25 Nov 2014, at 16:52, John Clark wrote: On Tue, Nov 25, 2014 zibb...@gmail.com wrote: I believe that too, but then I think that intelligent behavior is the test for consciousness; it's not a perfect test but it's the only test we have. Is that more accurate than saying we do not have a test for consciousness? No, a test need not be perfect to be useful; in the real world almost none of our information is perfect, but we manage to make decisions nevertheless. I couldn't function if I believed I was the only conscious thing in the universe, and I couldn't function if I believed that everything was conscious, so the intelligent behavior test is very useful. It is useful, and provides some degree of plausibility. Of course we can be conscious without intelligent behavior, as with some paralysis. Intelligent behavior is sufficient to get some positive plausibility, but not necessary. Bruno John K Clark -- You received this message because you are subscribed to the Google Groups Everything List group. To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com. To post to this group, send email to everything-list@googlegroups.com. Visit this group at http://groups.google.com/group/everything-list. For more options, visit https://groups.google.com/d/optout. http://iridia.ulb.ac.be/~marchal/
Re: Two apparently different forms of entropy
On 25 Nov 2014, at 17:54, Richard Ruquist wrote: On Tue, Nov 25, 2014 at 7:17 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 24 Nov 2014, at 16:58, Richard Ruquist wrote: On Mon, Nov 24, 2014 at 9:47 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 24 Nov 2014, at 11:35, Richard Ruquist wrote: On Mon, Nov 24, 2014 at 4:05 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 23 Nov 2014, at 18:11, Richard Ruquist wrote: Bruno: I doubt a photon needs to double its energy to go through two slits Richard: You should be ashamed That's hardly an argument. Agreed Einstein already understood that if the collapse was a physical phenomenon, and if special relativity was correct, then locality would make a wave possibly collapse on two different eigenvectors, like sometimes literally finding the photon going through both holes. In that case, the energy would be doubled, and the Schrödinger diffusion of the wave could be used to ... create energy. A quantum perpetual motion machine could be constructed, and, pace George Levy, but following John Clark's quote of Eddington, we can stop here ... Yes. I like Einstein's single pinhole thought experiment the best. The incident photon spreads in spherical waves beyond the hole, by ray optics. So if waves could carry energy, the energy density would drop by 1/r^2, where r is the distance from the hole. If we wrap the experiment with a spherical detector sheet, the energy density incident on the sheet would be constant across the spherical sheet, and the amount incident on any detector would be a fraction of the photon energy. So there is not enough energy incident on any detector to make a photon of the original energy. That's classical thinking and it is wrong. With MWI thinking, every detector will detect a photon at the same energy and frequency as the original photon, but in a different world. So the total energy in the multiverse will locally have increased by the number of detectors times the photon energy.
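The pinhole bookkeeping above can be made concrete with a little arithmetic. A minimal sketch; the detector count and unit photon energy are illustrative numbers, not from the post:

```python
# Bookkeeping for Einstein's pinhole thought experiment as discussed above.
# All numbers are illustrative.

N_DETECTORS = 100   # detectors tiling the spherical sheet around the pinhole
E_PHOTON = 1.0      # photon energy, arbitrary units

# If the spreading wave itself carried the energy, it would be divided
# evenly over the sphere, so each detector could receive at most E/N --
# never enough to reconstitute a full photon at any one detector.
classical_energy_per_detector = E_PHOTON / N_DETECTORS

# Under the naive multiverse reading criticized in the thread, every
# detector fires with a full-energy photon in *some* branch, so summing
# energy across branches appears to multiply it by N.
naive_multiverse_total = N_DETECTORS * E_PHOTON

# Per-branch accounting restores conservation: within each single branch
# exactly one detector fires, carrying the full original photon energy.
energy_per_branch = E_PHOTON

print(classical_energy_per_detector)  # 0.01
print(naive_multiverse_total)         # 100.0
print(energy_per_branch)              # 1.0
```

The point of the sketch is only that "sum over branches" and "sum within a branch" give different totals; the thread's resolution is that conservation is to be checked within branches.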
The only way to conserve energy is to detect only one photon of the same energy and frequency as the original photon. ... or the conservation of energy is something which has to be accounted in branches, not in the multiverse. Fine, as long as the input energy in each branch is normalized by the quantum probabilities No, the conservation of energy is global, and should be statistically verified in the normal (non-Harry-Potter-like) branches. But the collapse is not physical; it belongs to the mind of the people, fungible and then differentiated, in the infinite tensor product, which, with computationalism, should be a mirror of the fact that we are indeterminate on infinitely many sigma_1 sentences, where the ortholattice structure is determined by the logic of self-reference. My opinion is that collapse is what makes objects physical. That is my opinion too. But the collapse is a psychological phenomenon, making the physical directly into something psychological. Fine, as long as the process uses the correct initial conditions for each branch ? Everything else is just math (and deterministic.) So everything that could possibly happen can be computed ahead of time in a block four-dimensional multiverse that I call the Math Space With collapse, the physical space becomes lines in the Math Space. That is not an argument. It is just how I see reality. OK. For a computationalist (who thinks), the collapse is not real, but the wave is not real either. It is itself the product of a Moiré effect on all computations. I agree. With computationalism nothing is real except the math. All is illusion - maya. So comp must have the support of Hinduism and Buddhism. And Christianity before the 5th century, and Judaism and Islam before the 11th century. The obsession with matter came later. I find this weird, because there is no evidence for it. I'll take your word for it. So it's not in history books? I think it is well known, at least by the scholars.
I agree that I am a bit oversimplifying, by lack of time. The fact is that until Maimonides, there were as many Platonists as Aristotelians among the religious people. Religions, in a wide sense, are Platonist at the start. What we see is not the real or the whole thing. Thus comes the idea of God, as the reason *behind* what we see, and the idea of science: let us find what really is. But Aristotelianism, which is very natural from the first person view (the brain is programmed to take seriously what we see), has made humans forget that science (including theology) comes from a skeptical attitude toward the idea that we are directly related to what we can measure and observe. I prefer to think that both quantum waves and particles are real, but that waves are math objects and particles are physical objects. Again, that is not an argument. My
Re: Is Dark Energy Gobbling Dark Matter, and Slowing Universe's Expansion?
On Tuesday, November 25, 2014 6:50:00 PM UTC, Liz R wrote: And I said that it seemed to me that if dark matter was being destroyed galaxies should be expanding, and asked if there was any observational evidence to support this. Liz, you said it right at the start... but the point is only valid one time. What you reason above restates the same point in a different form. Based on the current worldview, the idea of dark energy gobbling dark matter, causing expansion to slow down, is nonsensical. It can't be adjusted, can't be made into sense. Not without getting into significant levels of fussy details. Which cannot be done without large discoveries first that shed dramatic light on what dark matter and energy actually are. Not sensibly, anyway (i.e. whatever fussy detailed explanation they create, there will be exponentially many other different and disagreeable explanations that are logically identical in terms of the size and robustness of the necessary guesses, for each next level of detail necessary to go down in order to fussy up the job). So it's a really huge issue. If it isn't correct, that'll show in the developments and no more will be said. But if this finding stubbornly sticks around, and then starts showing up in other independent ways, that then becomes the line too far: too far to patch the cosmological model up with more dark stuff.
Re: Edge: Myth of A.I.
On 26 Nov 2014, at 00:56, meekerdb wrote: On 11/25/2014 2:00 PM, LizR wrote: On 26 November 2014 at 04:38, John Clark johnkcl...@gmail.com wrote: On Tue, Nov 25, 2014 meekerdb meeke...@verizon.net wrote: I don't think John's post implied that conscious was another word for intelligence. I think his position is that a being could be conscious without being intelligent (which would be consistent with aware of one's self and surroundings), but not vice versa. Exactly. Ah, well that is a matter of opinion. It would mean that all the tests so far devised for intelligence that have been passed by computers, including some versions of the Turing test, may not in fact detect intelligence after all, if those machines aren't actually conscious, which they may well not be. Intelligent behavior is observable, so it doesn't make sense to say maybe it isn't really intelligence because there's a missing but unobservable property, consciousness. Since consciousness is unobservable, the more sensible assumption would be that the machines are conscious. However, I don't agree with John that intelligence is necessarily accompanied by human-like consciousness. His argument is based on evolution, i.e. that if intelligence could exist without consciousness then it would have evolved that way. But evolution can be driven by historical accident. So I think his argument only shows that intelligence as it developed in humans is necessarily accompanied by human-like consciousness that includes an inner narrative. Julian Jaynes has a theory about how this happened. But I think there can be different kinds of consciousness; so I think that there could be intelligence which is not associated with a human-like inner narrative, for example. John recognizes that the human brain has multiple modules which may compete in deciding actions. Watson, which has a certain intelligence, probably doesn't have this kind of modular competition and so would have a different kind of consciousness.
The use of some chemical perturbation in the brain can illustrate, in fact can lead to the making of, an experience which illustrates how consciousness can be different from the usual mundane type of consciousness, with inner narratives. Too bad this is not well seen in our culture. There are non-toxic means though, and not known for leading to any problem, except some metaphysical shock for people with strong religious prejudices. This seems to me to be redefining intelligence (and perhaps consciousness). Personally, I think machines can behave in an intelligent manner without being conscious - or at least in a manner that most people not used to computers would consider intelligent (e.g. performing huge mathematical calculations very fast would be considered intelligent by most people before the advent of computers, as would winning the world chess championship). I agree, except to qualify that as without being conscious the way people are, with an inner narrative. I think any intelligent being must have a world-model which includes itself. I agree. Note that it is already the case for RA. But only PA can be aware of having it, and reason non-trivially about it, and distinguish the first and third person aspects of the self. But this is getting very semantic-quibbly. If you guys want to redefine intelligence as being something that only conscious beings have, then fine; as long as you make it clear that's what you're doing I have no objection. We'll find another word for what machines (and unconscious parts of the brain) can do that merely looks intelligent. I don't think being conscious is a simple unitary attribute. I think there are different kinds of being conscious Yes, and I have rock solid, I would even go so far as to call it perfect, evidence that the above is true; but unfortunately that evidence is available only to me. You may have a corresponding sort of evidence, I strongly suspect that you do, but I don't know it for a fact.
This is also a matter of opinion. Some would say that one is either conscious or not, although what one is conscious of can vary a lot. Yes, that's Bruno's idea. But he supports it by taking a very weak definition of consciousness, so that it is essentially just awareness of self as distinct from environment. I take an even weaker definition, like just awareness, even in the case where the self is not distinguished from the environment. I can imagine babies and simple animals are like that. Distinguishing the self from the environment needs Löbianity, and note that this can lead to some artificial separation from the environment, which might be more part of us than we think, and this should please you, Brent, as you insist often on the importance of the environment for consciousness. Bruno Brent
Re: Edge: Myth of A.I.
On 25 Nov 2014, at 06:56, LizR wrote: On 25 November 2014 at 16:54, meekerdb meeke...@verizon.net wrote: On 11/24/2014 5:36 PM, LizR wrote: On 25 November 2014 at 13:41, John Clark johnkcl...@gmail.com wrote: On Mon, Nov 24, 2014 LizR lizj...@gmail.com wrote: I don't think we need to worry about intelligent machines. A smartphone is fairly intelligent, for example, at doing what it does. Conscious machines, which (according to Bruno, at least) are possible, are another matter. From a practical operational standpoint it doesn't matter if a machine (or one of my fellow human beings) is conscious or not; all that matters is if it can outsmart me or not. And by the way, if you think that smartphone is more than just a name for a certain type of phone and is really smart, then why don't you think it's conscious too? It's almost as if you believe that consciousness is harder to achieve than intelligence. We've made intelligent machines, but I don't know of any conscious ones (except those nature has produced, I mean) But do you know we have not made any conscious ones? No, of course I don't, how could I? I said I wasn't aware of any. The main difference being that conscious beings have their own objectives. But even if an intelligent being is not conscious (something I am quite sure is not possible) it would have tendencies to act in one way rather than another, determined by the thoughts (call them information streams if you like euphemisms) flowing through its brain; and the more intelligent the being is the harder it would be for you to understand them. And those thoughts may very often have absolutely positively nothing to do with your best interests. Looks like you are using an unusual definition of consciousness, so I will pass on this discussion. What do you consider the usual definition of consciousness? Is it having an inner narrative (per Julian Jaynes)? Perceiving and reacting to surroundings? Understanding Löb's theorem?
I believe it's to do with awareness of one's self and surroundings, or something like that, but I'm not an expert and maybe you have a better definition? What I do know is that it isn't just another word for intelligence, which is what I was objecting to (as the quote above shows). I agree that consciousness is not intelligence. I insist also on distinguishing intelligence from competence. You need consciousness to develop intelligence, and you need intelligence to develop competence. Then competence has a negative feedback on intelligence and on conscience, which is close to consciousness. An entity can be competent, without intelligence nor consciousness. An entity can be conscious, without intelligence nor competence. An entity can be intelligent, without competence (but it still needs consciousness). I would say, Bruno http://iridia.ulb.ac.be/~marchal/
Re: Two apparently different forms of entropy
Turns out that I do not understand it either. The pinhole thought experiment should decrease the coherent photons by a factor of 2 regardless of whether the incoherent photons are in separate branches or not. So the result is the same for MWI and wave collapse. Richard
Re: Edge: Myth of A.I.
On Mon, Nov 24, 2014 at 5:56 PM, John Clark johnkcl...@gmail.com wrote: On Mon, Nov 24, 2014 Telmo Menezes te...@telmomenezes.com wrote: All the AI we have so far gives us a little from a lot. The real goal of AI is to get a lot from a little. A human translator can't get good at translating language X to Y unless he hears a lot of both languages X and Y, and the same is true of computers. Right, but here what is meant is the effort in building the translator, and what you get from it. This dictum comes from the school of thought that claims that human-level AI will be grown/evolved instead of directly programmed. I suspect you might give this idea some credence, given that you say that we might create human-level AI before understanding how it works. Google's translator is not grown/evolved, as far as I can tell, so it might be a dead end in terms of the effort that will actually lead us to the next quantitative jump in AI. With what I consider real AI, an artificial translator could also be taught how to drive a car. Computers can do both and subroutines exist, so what's the problem? The problem is that human-level AI might require a level of complexity in terms of subroutine calls, shared data and so on that transcends the ability of any human programmer. Modern software development subscribes to the divide-and-conquer school of engineering. This makes a lot of sense when it comes to building large banking systems or even search engines, but it might be a dead end when it comes to building human-level AI, because there is no guarantee that all classes of problems can be modularised down to chunks small enough for a puny human programmer to be able to reason about. In a sense, I suspect we are stuck in a local maximum of software development common sense, and a lot of heresy will have to be attempted before anything of consequence is achieved. The extreme compartmentalisation of capabilities is the smoking gun that the intelligence part of AI is not increasing.
A computer that beat the 2 best human players of Jeopardy on planet Earth blew that argument into (sorry but I just have to say it) bits. And human beings move from being mediocre translators to being very good translators by observing how great translators do it. And they can also do this for a number of different skills with the same software. I see no evidence that humans use the same mental software to translate languages, solve differential equations, walk and chew gum at the same time, and write about philosophy on the internet; I think humans use different subroutines for different tasks just as computers do. I would just repeat what I wrote above, but still. What computers can do is a superset of what humans can program computers to do. What humans can program computers to do is largely determined by tools. I am questioning whether our current set of tools is adequate for the problem of creating human-level AI, and whether the most encouraging achievements are not dead ends. Translation certainly won't be the last profession where machines become better at their job than any human; and I predict that the next time it happens somebody will try to find an excuse for it just like you did and say Yes, a machine is a better poet or surgeon or joke writer or physicist than I am, but it doesn't really count because (insert lame excuse here). I am sure of that too, but I reserve my decision on which side of the argument I'm on until I see these surgeons, joke writers or physicists that you talk about. That just means you are a reasonable man. The people who exasperate me are those who say that even though X does very intelligent things that doesn't mean that X is intelligent.
My point is that I don't believe in magic, so I think that all the brilliant things humans have done over the last few thousand years happened because of the way the atoms in the 3 pounds of grey goo inside their bone box were organized, and so there is no reason that other things, like computers, couldn't be as intelligent or more so if they were organized in the right way. Ok, we have no disagreement over this. My problem is more with people who are trying to mess with the goal posts, usually for marketing purposes. I don't mind the bragging, but it reinforces the idea that the goal can be achieved through iterative improvement over current systems, something that I am skeptical of. Telmo. John K Clark
Re: Edge: Myth of A.I.
I agree that it's applicable to any imaginable goal; this is the usual prisoner's dilemma. What I'm not sure of is precisely how we could all agree on some collective goal. The pre-democracy solution was to enforce allegiance to some set of religious beliefs. You accept these goals if you don't want to spend eternity in torment, case closed. Democracy at face value asks the majority what they want. If you're in the minority, tough shit. At least there is a rational explanation for your suffering. Democracy in reality does no such thing, but that's another story. I can't take you seriously until you tell me how to agree on the goal. On Tue, Nov 25, 2014 at 10:37 PM, Alberto G. Corona agocor...@gmail.com wrote: I didn't think about it very much. But it is applicable to any imaginable goal, since certain self-interests will go against any collective interest that we can imagine. And the best way to advance self-interest is indeed to use ideology to hide deleterious self-interests behind any true or false, good or bad collective interest, already existent or promoted as such. 2014-11-24 13:01 GMT+01:00 Telmo Menezes te...@telmomenezes.com: Hi Alberto, You talk of advancement of society, so this implies some collective goal. What is the goal, in your view? Cheers Telmo. On Mon, Nov 24, 2014 at 9:14 AM, Alberto G. Corona agocor...@gmail.com wrote: I laugh at the anthropological optimists who are confident that humans will be like gods, for the same reason that I laugh loudly at cybernetic optimists. Most of the effort of intelligent people is devoted to lying to themselves and others in order to gain power and enslave people, or at least to seduce others with their sophisticated lies. The more intelligence, the more chance for creation and destruction. On average, intelligence alone contributes zero to the advancement of society and thus contributes nothing to the advancement of anything.
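The prisoner's dilemma invoked above can be written down explicitly. A minimal sketch using the textbook payoff ordering T=5 > R=3 > P=1 > S=0 (the numbers are the standard ones, not from this thread):

```python
# Standard two-player prisoner's dilemma. Payoffs are (row player, column
# player); C = cooperate, D = defect. Textbook values T=5 > R=3 > P=1 > S=0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection dominates: it is the best response whatever the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet mutual defection (1, 1) leaves both players worse off than mutual
# cooperation (3, 3) -- which is why "collective interest" needs more than
# individually rational choice.
print(PAYOFFS[("D", "D")], PAYOFFS[("C", "C")])  # (1, 1) (3, 3)
```

This is exactly the structure Alberto points at: any collective goal creates a gap between what is individually rational and what is collectively best.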
It is often the case that dumb people are wiser than intelligent people from Harvard or Yale, saturated by ideology (self-profitable ideology, I could say). Having intelligent machines, either autonomous or not, doesn't change the fact that they could be used for good or for evil, contributing nothing, not even to the progress of machines. 2014-11-24 7:45 GMT+01:00 John Clark johnkcl...@gmail.com: A.I. is no closer than it was 20 or 30 or 40 years ago. Of one thing I am certain: someday computers will become more intelligent than any human who ever lived, using any measure of intelligence you care to name. And I am even more certain that we are 20 years closer to that day than we were 20 years ago. But what is new and big is Big Data. But Big Data does not involve theories of A.I. nor efforts. It's about taking very large sets of paired data and converging by some basic rule to a single thing. This is how translation services work. Well... Big Data computers are artificial and good translation requires intelligence, so why in the world isn't that AI? Big Data does not involve theories of A.I. I think it very unlikely that the secret to intelligence is some grand equation you could put on a T-shirt; it's probably 1001 little hacks and kludges that all add up to something big. It's very large sets of translations of sentences, and sentence components, simply rehashed for best fit Simply? Is convoluted better than simple? Are you saying that if we can explain how it works then it can't be intelligent? It actually works fairly adequately for most translation needs. Which would be great, except this: The Big Data system is not independent at any point. Every day there needs to be a huge scrape of the translations performed by human translators. And human beings move from being mediocre translators to being very good translators by observing how great translators do it. Human translation professions are in a state of freefall.
There used to be a career structure with rising income and security and status. Now there isn't. Translation certainly won't be the last profession where machines become better at their job than any human; and I predict that the next time it happens somebody will try to find an excuse for it just like you did and say Yes, a machine is a better poet or surgeon or joke writer or physicist than I am, but it doesn't really count because (insert lame excuse here). John K Clark -- Alberto.
Re: My latest crossword
I shouldn't have clicked this. Please tell me you will post the solutions so I can have some peace. On Tue, Nov 25, 2014 at 7:36 PM, LizR lizj...@gmail.com wrote: http://mayaofauckland.wordpress.com/2014/11/25/do-quantum-mechanics-overcharge-not-after-renormalisation/ In case anyone out there is into cryptic crosswords. This has a bit of a science theme :-)
Re: real A.I.
Nice :) One of the funny things about our sense of self-importance is that we imagine super-intelligent entities trying to destroy us, but we rarely consider the possibility that they would just have no desire to interact with us. On Mon, Nov 24, 2014 at 8:00 PM, meekerdb meeke...@verizon.net wrote: http://xkcd.com/1450/
Re: Two apparently different forms of entropy
Entropy and time seem related, or at least each seems to be an aspect of the other. Is it sensible to think, then, that if there are two or more types of entropy, there are at least two dimensions of time? -Original Message- From: Richard Ruquist yann...@gmail.com To: everything-list everything-list@googlegroups.com Sent: Wed, Nov 26, 2014 7:29 am Subject: Re: Two apparently different forms of entropy Turns out that I do not understand it either. The pinhole thought experiment should decrease the coherent photons by a factor of 2 regardless of whether the incoherent photons are in separate branches or not. So the result is the same for MWI and wave collapse. Richard On Wed, Nov 26, 2014 at 3:46 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 25 Nov 2014, at 17:54, Richard Ruquist wrote: On Tue, Nov 25, 2014 at 7:17 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 24 Nov 2014, at 16:58, Richard Ruquist wrote: On Mon, Nov 24, 2014 at 9:47 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 24 Nov 2014, at 11:35, Richard Ruquist wrote: On Mon, Nov 24, 2014 at 4:05 AM, Bruno Marchal marc...@ulb.ac.be wrote: On 23 Nov 2014, at 18:11, Richard Ruquist wrote: Bruno: I doubt a photon needs to double its energy to go through two slits Richard: You should be ashamed That's hardly an argument. Agreed Einstein already understood that if the collapse was a physical phenomenon, and if special relativity was correct, then locality would make a wave possibly collapse on two different eigenvectors, like sometimes literally finding the photon going through both holes. In that case, the energy would be doubled, and the Schrödinger diffusion of the wave could be used to ... create energy. A quantum perpetual-motion machine could be constructed, and, pace George Levy, but following John Clark's quote of Eddington, we can stop here ... Yes. I like Einstein's single pinhole thought experiment the best. The incident photon spreads in spherical waves beyond the hole from ray optics.
So if waves could carry energy, the energy density would drop as 1/r^2, where r is the distance from the hole. If we wrap the experiment with a spherical detector sheet, the energy density incident on the sheet would be constant across the spherical sheet and the amount incident on any detector would be a fraction of the photon energy. So there is not enough energy incident on any detector to make a photon of the original energy. That's classical thinking and it is wrong. With MWI thinking, every detector will detect a photon at the same energy and frequency as the original photon, but in a different world. So the total energy in the multiverse will locally have increased by the number of detectors times the photon energy. The only way to conserve energy is to detect only one photon of the same energy and frequency as the original photon. ... or the conservation of energy is something which has to be accounted for in branches, not in the multiverse. Fine as long as the input energy in each branch is normalized by the quantum probabilities No, the conservation of energy is global, and should be statistically verified in the normal (non-Harry-Potter-like) branches. But the collapse is not physical; it belongs to the mind of the people, fungible and then differentiated, in the infinite tensor product, which, with computationalism, should be a mirror of the fact that we are indeterminate on infinitely many sigma_1 sentences, where the ortholattice structure is determined by the logic of self-reference. My opinion is that collapse is what makes objects physical. That is my opinion too. But the collapse is a psychological phenomenon, directly making the physical into something psychological. Fine as long as the process uses the correct initial conditions for each branch ? Everything else is just math (and deterministic.)
So everything that could possibly happen can be computed ahead of time in a block four-dimensional multiverse that I call the Math Space. With collapse, the physical space becomes lines in the Math Space. That is not an argument. It is just how I see reality. OK. For a computationalist (who thinks), the collapse is not real, but the wave is not real either. It is itself the product of a Moiré effect on all computations. I agree. With computationalism nothing is real except the math. All is illusion - maya. So comp must have the support of Hinduism and Buddhism. And Christianity before the 5th century, and Judaism and Islam before the 11th century. The obsession with matter came later. I find this weird, because there is no evidence for it. I'll take your word for it. So it's not in history books? I think it is well known, at least by the scholars. I agree that I am a bit oversimplifying, for lack of time. The fact is that until
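The classical bookkeeping in the pinhole argument can be sketched numerically. This is a toy calculation, not anyone's actual experiment; the photon frequency and detector count are illustrative assumptions:

```python
import math

# Toy numbers (illustrative assumptions): a visible-light photon
# spreading from a pinhole onto a spherical detector sheet.
h = 6.626e-34           # Planck constant, J*s
f = 6.0e14              # photon frequency, Hz (green light)
E_photon = h * f        # energy of the incident photon, J

r = 1.0                 # radius of the spherical sheet, m
n_detectors = 1000      # detectors tiling the sphere (assumed)

# Classical wave picture: the energy spreads uniformly over the sphere
# (density falling as 1/r^2), so each detector intercepts only its
# share of the sphere's area.
sphere_area = 4 * math.pi * r**2
detector_area = sphere_area / n_detectors
E_per_detector = E_photon * detector_area / sphere_area

# No single detector receives enough energy to account for a
# full-energy photon; that is the point of the thought experiment.
print(E_per_detector < E_photon)
```

Quantum mechanically, exactly one detector (per branch) registers the full E_photon, which is why the classical fraction is never observed.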
Re: Edge: Myth of A.I.
On Tue, Nov 25, 2014 at 5:00 PM, LizR lizj...@gmail.com wrote: I don't think John's post implied that conscious was another word for intelligence. I think his position is that a being could be conscious without being intelligent (which would be consistent with being aware of one's self and surroundings), but not vice versa. Exactly. Ah, well that is a matter of opinion. It would mean that all the tests so far devised for intelligence that have been passed by computers, including some versions of the Turing test, may not in fact detect intelligence after all, How on Earth do you figure that? if those machines aren't actually conscious, If computers aren't conscious that's their problem not ours, and if you're not conscious that's your problem not mine. which they may well not be. As I've said over and over and over and over again, if something can be intelligent but not conscious then Darwin was wrong. Personally, I think machines can behave in an intelligent manner without being conscious I do not think Darwin was wrong. But this is getting very semantic-quibbly. If you guys want to redefine intelligence as being something that only conscious beings have, then fine No that is not fine. I DEFINE intelligence just as everybody else does, the ability to find novel solutions to new problems, the greater the variety of problems the greater the intelligence. I DEDUCE that if intelligent beings can be non-conscious then Darwin was wrong. My OPINION is that Darwin was not wrong. As for consciousness I refuse to give a definition but I will do something better, give an example: me. We'll find another word for what machines (and unconscious parts of the brain) can do that merely looks intelligent. Why go to all the hassle of inventing a new word when nobody, absolutely positively nobody, can tell the difference between being intelligent and merely behaving intelligently? Some would say that one is either conscious or not Who would say such a silly thing? 
John K Clark
Re: Edge: Myth of A.I.
On Tue, Nov 25, 2014 at 6:56 PM, meekerdb meeke...@verizon.net wrote: I don't agree with John that intelligence is necessarily accompanied by human-like consciousness. His argument is based on evolution, i.e. that if intelligence could exist without consciousness then it would have evolved that way. But evolution can be driven by historical accident. If Evolution just stumbled onto consciousness because an astronomically unlikely mutation occurred, and not because it was the byproduct of intelligence, then it would be of neutral survival value and the human race would have lost that property long ago by genetic drift. That's the reason creatures that have lived in dark caves for thousands of generations have no eyes; elsewhere a mutation that rendered a creature blind would be a disaster, but in a pitch-dark cave it wouldn't hinder its genes getting into the next generation at all. In fact lack of eyes would be an advantage: all the resources needed to make a complex organ like the eye could be directed into something more useful, like having more offspring. There are only 2 options, consciousness improves the survival of an organism or it does not; let's examine both possibilities. If consciousness improves survival it can only do so by affecting the behavior of the animal, and then we must conclude that the Turing Test works for consciousness as well as intelligence. If on the other hand consciousness does not affect behavior then it MUST be a byproduct of something else that does (like intelligence), or Evolution would never have produced it and never have kept it even if it had; and yet I know for a fact Evolution HAS produced consciousness at least once (me) and probably many billions of times. John K Clark
Re: Quantum Mechanics Violation of the Second Law
Hi everyone Not much of a response... answering the two questions below: Answer to Question 1: If air is forcefully convected in a column having an isothermal temperature distribution, the column shifts toward an adiabatic gradient. Paradoxically, mixing does not equalize temperature, as is well known in meteorology (air rising over a mountain gets colder). Answer to Question 2: After the fans are turned off and the air currents die down, the column slowly shifts back to its original isothermal state by diffusive heat flow. Loschmidt conjectured that the adiabatic state is inherently stable and that the column would remain in an adiabatic state. He was wrong with respect to gases such as air, which have a mostly Maxwell-Boltzmann distribution. Gravity needs to be accounted for by multiplying the distribution by an exponential Boltzmann factor. This factor is eliminated by renormalization and the original distribution is recovered, indicating no change in temperature. Non-Maxwellian gases such as electrical carriers in a thermoelectric material follow the Fermi-Dirac distribution. This distribution does not allow the electric field to be accounted for by means of a simple multiplicative Boltzmann factor that can be eliminated by renormalization. Therefore, the carriers can acquire a temperature gradient when subjected to an electric field, as shown by the Caltech experiments and numerous simulations by myself and others. In my opinion, the Second Law is built into classical physics, but can be circumvented by stepping outside of it. George Levy On 11/24/2014 12:24 PM, George wrote: The gas does not flow unidirectionally in the column as in a pipe. There is no net flow. Convection involves a cyclic, mostly vertical, movement of gas in the column. Here is a thought experiment you may consider. A column of gas in a gravitational field is initially assigned an isothermal temperature distribution. 
Fans are placed at the bottom and configured to blow air vertically, setting up a forced convection. Question 1: Will the column remain isothermal? Question 2: What happens if the fans are turned off? What will the column's final state be? These are tricky questions but answering them may illuminate the Loschmidt paradox. George Levy On 11/23/2014 5:38 PM, John Clark wrote: On Sun, Nov 23, 2014 at 6:28 PM, George gl...@quantics.net wrote: There is no convection current even though gas near the floor is hotter than gas near the ceiling. The reason is that gas rising in an adiabatic column expands and cools at exactly the same rate as the adiabatic temperature lapse, and therefore the gas is in equilibrium. But what if the column of gas can't expand because it's in a sealed insulated pipe? Loschmidt ignored the fact that the energy of the molecules is correlated with their vertical direction of movement. For example, those molecules which are at the top of their trajectories (zero vertical kinetic energy) must always experience their next collision at a lower elevation. But there will always be some molecules at the very top of the column; does that mean there will always be a downward current starting from the very top and a corresponding upward replacement current? Obviously, due to the second law, we know you couldn't set up a turbine and get work out of one of those currents, but exactly where is the flaw in the idea? Perhaps the error is that the 2 currents would be so small and intermingled that the turbine would just move back and forth in a random way and so you couldn't get any work out of it, and connecting the turbine to a ratchet wouldn't help because the ratchet is at the same temperature as the gas, so it will undergo Brownian motion, and the bouncing ratchet teeth will slip at random intervals and allow the ratchet to slip backwards, so the end result is no net work. 
John K Clark
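George's renormalization point about Maxwell-Boltzmann gases can be illustrated with a small numerical sketch (the molecular mass, temperature, and heights below are illustrative, not taken from any experiment): in an isothermal column the gravitational Boltzmann factor exp(-mgh/kT) multiplies the entire velocity distribution at each height, so it cancels on normalization and the kinetic temperature comes out the same at every height.

```python
import math

k = 1.38e-23   # Boltzmann constant, J/K
m = 4.8e-26    # molecular mass, kg (roughly N2; illustrative)
g = 9.81       # gravitational acceleration, m/s^2
T = 300.0      # column temperature, K

def mean_sq_speed(h, vmax=3000.0, n=20000):
    """Mean of v^2 over the 1-D velocity distribution at height h.
    The weight exp(-(mv^2/2 + mgh)/kT) includes the gravitational
    Boltzmann factor; normalizing (dividing by den) removes it."""
    dv = 2 * vmax / n
    num = den = 0.0
    for i in range(n):
        v = -vmax + (i + 0.5) * dv
        w = math.exp(-(0.5 * m * v * v + m * g * h) / (k * T))
        num += v * v * w
        den += w
    return num / den

# exp(-mgh/kT) is constant in v, so it cancels in num/den: the mean
# squared speed (hence the temperature) is identical at 0 m and 10 km.
print(math.isclose(mean_sq_speed(0.0), mean_sq_speed(10000.0), rel_tol=1e-9))
```

For contrast, the forced-convection (adiabatic) state George describes has the dry lapse rate g/c_p, about 9.8 K per km for air, which is why air rising over a mountain gets colder.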
Re: Edge: Myth of A.I.
On Wed, Nov 26, 2014 at 8:06 AM, Telmo Menezes te...@telmomenezes.com wrote: I am questioning that our current set of tools is adequate for the problem of creating human-level AI, I don't think there is any doubt about it, a human-level AI does not exist today because neither we nor computers currently have all the tools that are needed to make one. I doubt if I could say that in 30 years and perhaps not in 20, I would guess it will remain true for the next 10 but I've been wrong before. John K Clark
Re: Edge: Myth of A.I.
On Wed, Nov 26, 2014, Bruno Marchal marc...@ulb.ac.be wrote: I agree that consciousness is not intelligence. I agree also. An entity can be competent, without intelligence [...] An entity can be intelligent, without competence I don't understand the distinction, but I do know that competence means having the skill and knowledge to get the job done, so what's the point of intelligence? As far as survival is concerned (and getting genes into the next generation is the only thing Evolution is concerned with) intelligence, whatever you mean by the word, would be as useless as consciousness. So now you've doubled the number of mysteries you need to explain: not only do you need to explain why Evolution invented consciousness, you can't even explain why it invented intelligence. I insist also to distinguish intelligence from competence. Then please do so. I'm all ears. John K Clark
Re: My latest crossword
Yes, I will post the solution ... but not quite yet. In the meantime I have revised a few clues I wasn't happy with, so maybe that will help. I can also supply hints on request :-) Also if you aren't familiar with cryptic crosswords, this may help... http://www.elnitsky.com/cryptic On 27 November 2014 at 02:37, Telmo Menezes te...@telmomenezes.com wrote: I shouldn't have clicked this. Please tell me you will post the solutions so I can have some peace. On Tue, Nov 25, 2014 at 7:36 PM, LizR lizj...@gmail.com wrote: http://mayaofauckland.wordpress.com/2014/11/25/do-quantum-mechanics-overcharge-not-after-renormalisation/ In case anyone out there is into cryptic crosswords. This has a bit of a science theme :-)
Re: real A.I.
Have you read The Genocides by Thomas M Disch? Super-intelligent entities trying to destroy us, but only in the same way we try to eradicate aphids from an orchard. On 27 November 2014 at 02:42, Telmo Menezes te...@telmomenezes.com wrote: Nice :) One of the funny things about our sense of self-importance is that we imagine super-intelligent entities trying to destroy us, but we rarely consider the possibility that they would just have no desire to interact with us. On Mon, Nov 24, 2014 at 8:00 PM, meekerdb meeke...@verizon.net wrote: http://xkcd.com/1450/
Re: Edge: Myth of A.I.
On 11/26/2014 9:53 AM, John Clark wrote: No that is not fine. I DEFINE intelligence just as everybody else does, the ability to find novel solutions to new problems, the greater the variety of problems the greater the intelligence. I DEDUCE that if intelligent beings can be non-conscious then Darwin was wrong. My OPINION is that Darwin was not wrong. I don't think that deduction is unqualifiedly valid. First, evolution permits what Gould called spandrels. I don't think human consciousness is a spandrel, but it's possible. Second, there may be different ways of being intelligent (as game theorists will play NIM differently from most people), and human consciousness may have necessarily accompanied human intelligence because of the precursors (hominid intelligence) that evolution had to start with. For example, I think human consciousness and intelligence are both closely linked to language. Language is an evolutionarily useful adaptation of social animals. But I see no reason that non-social animals cannot be intelligent (e.g. octopi are solitary but are the most intelligent non-vertebrates). This implies that there can be intelligent beings without language and therefore without anything like human consciousness, although they would have consciousness in Bruno's sense of being aware. Brent
Re: Edge: Myth of A.I.
On 11/26/2014 11:23 AM, John Clark wrote: On Wed, Nov 26, 2014, Bruno Marchal marc...@ulb.ac.be wrote: I agree that consciousness is not intelligence. I agree also. An entity can be competent, without intelligence [...] An entity can be intelligent, without competence I don't understand the distinction, but I do know that competence means having the skill and knowledge to get the job done, so what's the point of intelligence? As far as survival is concerned (and getting genes into the next generation is the only thing Evolution is concerned with) intelligence, whatever you mean by the word, would be as useless as consciousness. So now you've doubled the number of mysteries you need to explain: not only do you need to explain why Evolution invented consciousness, you can't even explain why it invented intelligence. I insist also to distinguish intelligence from competence. My understanding is that intelligence refers to learning ability and behavioral adaptability to novel circumstances. Competence is being able to act effectively in a given circumstance, but not necessarily adaptable to new circumstances. If a pipe breaks a plumber will be competent to fix it, but if his computer fails he will not be competent to fix it, and he may not be intelligent enough to learn how to fix it. So intelligence is a sort of meta-competence, i.e. competence at becoming competent in particular fields. Brent Then please do so. I'm all ears. John K Clark
Re: Edge: Myth of A.I.
On 11/26/2014 10:28 AM, John Clark wrote: On Tue, Nov 25, 2014 at 6:56 PM, meekerdb meeke...@verizon.net wrote: I don't agree with John that intelligence is necessarily accompanied by human-like consciousness. His argument is based on evolution, i.e. that if intelligence could exist without consciousness then it would have evolved that way. But evolution can be driven by historical accident. If Evolution just stumbled onto consciousness because an astronomically unlikely mutation occurred and not because it was the byproduct of intelligence then it would be of neutral survival value and the human race would have lost that property long ago by genetic drift. No, because consciousness might be a necessary byproduct of human-like intelligence, but not of all possible ways of achieving intelligence. Evolution is constrained in what adaptations it can develop. So having two legs is a necessary byproduct of human intelligence, because starting with four limbs that's the only way to free up two for manipulation of objects. But that doesn't mean having two legs is a necessary byproduct of intelligence in general (cf. octopi). Brent That's the reason creatures that have lived in dark caves for thousands of generations have no eyes; elsewhere a mutation that rendered a creature blind would be a disaster but in a pitch-dark cave it wouldn't hinder its genes getting into the next generation at all. In fact lack of eyes would be an advantage: all the resources needed to make a complex organ like the eye could be directed into something more useful, like having more offspring. There are only 2 options, consciousness improves the survival of an organism or it does not; let's examine both possibilities. If consciousness improves survival it can only do so by affecting the behavior of the animal and then we must conclude that the Turing Test works for consciousness as well as intelligence. If on the other hand consciousness does not affect behavior then it MUST be a byproduct of something else that does (like intelligence) or Evolution would never have produced it and never have kept it even if it had, and yet I know for a fact Evolution HAS produced consciousness at least once (me) and probably many billions of times. John K Clark
Re: Edge: Myth of A.I.
On Wed, Nov 26, 2014 meekerdb meeke...@verizon.net wrote: I don't think human consciousness is a spandrel If consciousness does not affect intelligent behavior and if Darwin's Theory is correct then there is no alternative, consciousness is a spandrel. And if consciousness does affect intelligent behavior then the Turing Test works for both consciousness and intelligence. So either way, if a fan of Darwin and a fan of logic runs across a computer that passes the Turing Test he MUST conclude that the machine is at least as conscious as his fellow human beings are. there may be different ways of being intelligent Almost certainly. Given that intelligence is the most complex thing in the known universe it would be very surprising indeed if it could be described by just one number; you need 2 for even something as simple as the wind. I think human consciousness and intelligence are both closely linked to language. I think so too. I am quite certain of it. Language is an evolutionarily useful adaptation of social animals. And even if those social animals were put in a non-social situation, marooned all alone on a desert island for example, they could not think properly and efficiently without language. And even a lone brain the size of Jupiter could not think properly unless it had a language to communicate abstract ideas between distant parts of its vast brain. But I see no reason that non-social animals cannot be intelligent (e.g. octopi are solitary but are the most intelligent non-vertebrates). All animals have some degree of intelligence and the octopus has more than most, but they are nowhere near smart enough to make radio telescopes, and let's face it, that's what people usually mean when they talk about intelligent beings. 
I think the thing that separates humans from other animals is that about 100,000 years ago we developed a system that can encode even very abstract ideas into a few simple sounds; this not only enabled collective learning but also enormously magnified the power of individual thought. John K Clark
Re: Edge: Myth of A.I.
On Wed, Nov 26, 2014 at 6:01 PM, meekerdb meeke...@verizon.net wrote: If Evolution just stumbled onto consciousness because an astronomically unlikely mutation occurred and not because it was the byproduct of intelligence then it would be of neutral survival value and the human race would have lost that property long ago by genetic drift. No, because consciousness might be a necessary byproduct of human-like intelligence, but not of all possible ways of achieving intelligence. Evolution is constrained in what adaptations it can develop. Then just like Evolution we will find that it is easier to make a conscious intelligent computer than to make a non-conscious intelligent computer; the first human-level AI will be conscious, and if we wish to make a computer that is equally smart but not conscious it will take a great deal more R&D. John K Clark
Re: Edge: Myth of A.I.
On Wed, Nov 26, 2014 meekerdb meeke...@verizon.net wrote: I insist also to distinguish intelligence from competence. My understanding is that intelligence refers to learning ability and behavior adaptability to novel circumstances. Competence is being able to act effectively in a given circumstance, but not necessarily adaptable to new circumstances. But Bruno says something can be intelligent without being competent, and that doesn't make one particle of sense to me. It can solve difficult and novel problems but it can't solve easy problems that it sees all the time?? John K Clark
Re: Edge: Myth of A.I.
On 11/26/2014 4:41 PM, John Clark wrote: On Wed, Nov 26, 2014 meekerdb meeke...@verizon.net wrote: I don't think human consciousness is a spandrel If consciousness does not affect intelligent behavior and if Darwin's Theory is correct then there is no alternative, That assumes human beings and human evolution - and I agree with that application. But it does not show that intelligence could not evolve without human-like consciousness, which I take to be an inner narrative. consciousness is a spandrel. And if consciousness does affect intelligent behavior then the Turing Test works for both consciousness and intelligence. So either way if a fan of Darwin and a fan of logic runs across a computer that passes the Turing Test he MUST conclude that the machine is at least as conscious as his fellow human beings are. there may be different ways of being intelligent Almost certainly. Given that intelligence is the most complex thing in the known universe it would be very surprising indeed if it could be described by just one number; you need two for even something as simple as the wind. I think human consciousness and intelligence are both closely linked to language. I think so too. I am quite certain of it. Language is an evolutionarily useful adaptation of social animals. And even if those social animals were put in a non-social situation, marooned all alone on a desert island for example, they could not think properly and efficiently without language. And even a lone brain the size of Jupiter could not think properly unless it had a language to communicate abstract ideas between distant parts of its vast brain. Language is auditory. Abstract ideas can be represented in images or (per Bruno) numerical relations. You imply that any representation is language, but I think that's wrong. An intelligence might think in three-dimensional patterns and not something one-dimensional like language. 
And neither is it necessary that there be an internal language for subroutines to work. There are encryption systems that provide for computations to be performed on data and results returned without ever decrypting the data; so the part of the system doing the calculation never receives any communication that has meaning to it. But I see no reason that non-social animals cannot be intelligent (e.g. octopi are solitary but are the most intelligent invertebrates). All animals have some degree of intelligence and the octopus has more than most, but they are nowhere near smart enough to make radio telescopes, and let's face it, that's what people usually mean when they talk about intelligent beings. But that doesn't prove that octopi could not be both solitary and intelligent and not have an inner narrative. I think the thing that separates humans from other animals is that about 100,000 years ago we developed a system that can encode even very abstract ideas into a few simple sounds; this not only enabled collective learning but also enormously magnified the power of individual thought. So do you agree that having an inner narrative is the definition of consciousness, something much more restrictive than Bruno's awareness? Brent
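The "computations performed on encrypted data" mentioned above is homomorphic encryption. As a minimal sketch of the idea (my own illustration, not the specific systems the post refers to), textbook ElGamal is multiplicatively homomorphic: a worker holding only the public key and two ciphertexts can produce a ciphertext of the product without ever seeing the plaintexts.

```python
# Toy multiplicatively homomorphic cipher: textbook ElGamal.
# Illustrative only -- real systems use larger groups and padding.
import random

p = 2**127 - 1          # a Mersenne prime, fine for a toy demo
g = 3                   # group element used as the base

x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def encrypt(m):
    """Encrypt m under the public key: (g^r, m * h^r) mod p."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c1, c2):
    """Recover m: divide out the shared secret c1^x = h^r."""
    s = pow(c1, x, p)
    return (c2 * pow(s, -1, p)) % p

# The "blind" worker multiplies ciphertexts componentwise; it never
# learns 6 or 7, yet the result decrypts to their product.
a, b = encrypt(6), encrypt(7)
prod = ((a[0] * b[0]) % p, (a[1] * b[1]) % p)
print(decrypt(*prod))   # 42
```

This works because (g^r1 * g^r2, m1 h^r1 * m2 h^r2) is itself a valid encryption of m1*m2, so the calculating party handles only meaningless-to-it group elements, just as the email describes.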
Re: Is Dark Energy Gobbling Dark Matter, and Slowing Universe's Expansion?
On 26 November 2014 at 22:05, zibb...@gmail.com wrote: On Tuesday, November 25, 2014 6:50:00 PM UTC, Liz R wrote: And I said that it seemed to me that if dark matter was being destroyed galaxies should be expanding, and asked if there was any observational evidence to support this. Liz, you said it right at the start...but the point is only valid one time. What you reason above restates the same point in a different form. I repeated it because the other poster ignored what I'd said the first time AND made snarky comments showing he'd missed the point I was making, hence I felt it was worthwhile repeating it. Anyway, the point still holds. Dark matter is responsible for much of the structure of the universe, and if it's being turned into energy and radiated away then its gravitational attraction goes with it. Hence galaxies, held together by dark matter (as Fritz Zwicky discovered in 1933 by studying their rotation curves) should be expanding IF dark matter is being annihilated, because the visible structure is rotating at the same speed around a centre containing a decreasing amount of mass. So, if I've understood this theory correctly, galaxies should be getting bigger. Can someone either explain how I've missed the point of the theory OR tell me if there is evidence of galaxies growing larger due to this effect? If not then I can happily forget this theory because it predicts some startling observational evidence that doesn't exist.
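Liz's argument can be put in one line of Newtonian mechanics. For a circular orbit v^2 = GM/r, and slow (adiabatic) mass loss conserves the star's specific angular momentum L = v*r, so the equilibrium radius is r = L^2/(GM) and scales as 1/M: halve the enclosed mass and the orbit doubles. A back-of-envelope sketch (the figures are round illustrative numbers I've chosen, not measurements):

```python
# Why losing enclosed mass makes a stellar orbit expand.
# Circular orbit: v^2 = G M / r; adiabatic mass loss conserves L = v * r,
# so the new equilibrium radius is r = L^2 / (G M), i.e. r scales as 1/M.
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2

def orbit_radius(L, M):
    """Circular-orbit radius for specific angular momentum L, enclosed mass M."""
    return L**2 / (G * M)

M_sun = 1.989e30
M_enclosed = 1e11 * M_sun        # rough Milky-Way-scale enclosed mass
r0 = 2.5e20                      # starting radius, ~8 kpc in metres
v0 = (G * M_enclosed / r0)**0.5  # circular speed at r0
L = v0 * r0                      # conserved specific angular momentum

# Annihilate 10% of the dark matter inside the orbit:
r1 = orbit_radius(L, 0.9 * M_enclosed)
print(r1 / r0)                   # 1.111...: the orbit grows by ~11%
```

So the prediction in the email follows directly: a steady decrease in the dark-matter mass inside an orbit implies a steady growth in that orbit's radius, which is in principle observable.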
Re: Two apparently different forms of entropy
On 27 November 2014 at 01:29, Richard Ruquist yann...@gmail.com wrote: Turns out that I do not understand it either. The pinhole thought experiment should decrease the coherent photons by a factor of 2 regardless of whether the incoherent photons are in separate branches or not. So the result is the same for MWI and wave collapse. I thought that was the point of them being interpretations?
Re: Two apparently different forms of entropy
On 27 November 2014 at 04:51, spudboy100 via Everything List everything-list@googlegroups.com wrote: Entropy and Time seem related, or at least one seems to be an aspect of the other. Is it sensible to think, then, that if there are two or more types of entropy there are at least two dimensions of time? Entropy is a large-scale statistical effect (classically) and has no direct bearing on time. If it can be made more fundamental then perhaps, yes...
Re: Edge: Myth of A.I.
On 11/26/2014 4:49 PM, John Clark wrote: On Wed, Nov 26, 2014 at 6:01 PM, meekerdb meeke...@verizon.net wrote: If Evolution just stumbled onto consciousness because an astronomically unlikely mutation occurred and not because it was the byproduct of intelligence then it would be of neutral survival value and the human race would have lost that property long ago by genetic drift. No, because consciousness might be a necessary byproduct of human-like intelligence, but not of all possible ways of achieving intelligence. Evolution is constrained in what adaptations it can develop. Then just like Evolution we will find that it is easier to make a conscious intelligent computer than to make a non-conscious intelligent computer; the first human-level AI will be conscious, and if we wish to make a computer that is equally smart but not conscious it will take a great deal more R&D. More likely we will make an AI that is intelligent, is not conscious like a human with an inner narrative, but is conscious in some other way which will be very difficult for us to recognize. Bruno thinks he recognizes consciousness in jumping spiders because they appear aware of their surroundings - yet it is very unlikely they experience an inner narrative. Watson or Deep Blue might be conscious in this hard-to-recognize way. A lot of our recognition of consciousness in other people is just based on their similarity to ourselves. Brent
Re: Edge: Myth of A.I.
On 11/26/2014 4:59 PM, John Clark wrote: On Wed, Nov 26, 2014 meekerdb meeke...@verizon.net wrote: I insist also to distinguish intelligence from competence. My understanding is that intelligence refers to learning ability and behavior adaptability to novel circumstances. Competence is being able to act effectively in a given circumstance, but not necessarily adaptable to new circumstances. But Bruno says something can be intelligent without being competent, No, he says competence has a negative feedback on intelligence; meaning that when you learn to be competent at some task you stop thinking about it and don't learn anymore. He equates intelligence with ability to learn, so by his definition an infant is more intelligent than an Einstein (isn't it amazing how kids learn a language?). Brent and that doesn't make one particle of sense to me. It can solve difficult and novel problems but it can't solve easy problems that it sees all the time?? John K Clark