Re: [singularity] Benefits of being a kook

2007-09-24 Thread Tom McCabe
--- Artificial Stupidity <[EMAIL PROTECTED]> wrote: > Who cares? Really, who does? You can't create an > AGI that is friendly or > unfriendly. It's like having a friendly or > unfriendly baby. No, it is not. A baby comes pre-designed by evolution and genetics. An AGI can be custom-written to

RE: [singularity] Benefits of being a kook

2007-09-24 Thread Tom McCabe
See http://www.topix.net/content/ap/2007/09/techies-ponder-computers-smarter-than-us-4. It's from the Associated Press, so it's written once and then copy-pasted to news sources all over the world. - Tom --- [EMAIL PROTECTED] wrote: > Near the beginning of this discussion, reference is > made t

Re: [singularity] Towards the Singularity

2007-09-08 Thread Tom McCabe
--- Quasar Strider <[EMAIL PROTECTED]> wrote: > On 9/8/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > > > --- Quasar Strider <[EMAIL PROTECTED]> > wrote: > > > > An out-of-context quote does not magically > overrule > > thre

Re: [singularity] Towards the Singularity

2007-09-07 Thread Tom McCabe
--- Quasar Strider <[EMAIL PROTECTED]> wrote: > On 9/7/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > > > --- Quasar Strider <[EMAIL PROTECTED]> > wrote: > > > > If you want to build a spacecraft, you cannot > simply > > put

Re: [singularity] Towards the Singularity

2007-09-07 Thread Tom McCabe
--- Quasar Strider <[EMAIL PROTECTED]> wrote: > On 9/7/07, Matt Mahoney <[EMAIL PROTECTED]> > wrote: > > > > --- Quasar Strider <[EMAIL PROTECTED]> > wrote: > > > > > Hello, > > > > > > I see several possible avenues for implementing > a self-aware machine > > which > > > can pass the Turing test

Re: [singularity] Towards the Singularity

2007-09-07 Thread Tom McCabe
--- Quasar Strider <[EMAIL PROTECTED]> wrote: > Hello, > > I see several possible avenues for implementing a > self-aware machine which > can pass the Turing test: i.e. human level AI. > Mechanical and Electronic. > However, I see little purpose in doing this. Fact > is, we already have self > a

Re: [singularity] AI is almost here (2/2)

2007-08-01 Thread Tom McCabe
So, if you got into an argument with Kent Hovind, you would instantly concede all of your positions because of his "superior education"? :) - Tom --- Alan Grimes <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > If you want to refute an argument, it is your > >

Re: [singularity] AI is almost here (2/2)

2007-08-01 Thread Tom McCabe
You've just admitted that computers can perform a logical operation other than addition (taking a negation). - Tom --- Alan Grimes <[EMAIL PROTECTED]> wrote: > Charles D Hixson wrote: > > Alan Grimes wrote: > >>> Think of asserting that "All computers will be, > at their core, adding > >>> mach

Re: [singularity] AI is almost here (2/2)

2007-08-01 Thread Tom McCabe
If you want to refute an argument, it is your responsibility to explain what is wrong with it. It's this concept called "burden of proof". If you refuse to provide evidence for your arguments, you simply lose. - Tom --- Alan Grimes <[EMAIL PROTECTED]> wrote: > >> Once you have done that, I'll w

Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Tom McCabe
--- Alan Grimes <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > Even a ridiculously simple CPU, such as the one at > > > http://www.gravitybowl.com/Design_Images/3_CPU_Design.jpg, > > has a heck of a lot more than "memory and a > control > >

Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Tom McCabe
Even a ridiculously simple CPU, such as the one at http://www.gravitybowl.com/Design_Images/3_CPU_Design.jpg, has a heck of a lot more than "memory and a control unit". A brief overview of some of the main components can be seen at http://www.fujitsu.com/img/EDG/product/asic/chip926.gif. - Tom -

Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Tom McCabe
inds of steps, to > > its goals - which are, by definition, not derived > > from its original > > programming. The capacity, say, to find a new kind > > of path through a maze or > > forest. > > Tom McCabe: Pathfinding programs, to my knowledge, > are actually >

Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Tom McCabe
--- Mike Tintner <[EMAIL PROTECTED]> wrote: > AG: The mid-point of the singularity > window could be as close as 2009. A ridiculously > pessimistic prediction > would put it around 2012. We're pretty far off from having any kind of Singularity as it stands now. What do you think is going to happ

Re: [singularity] AI is almost here (2/2)

2007-07-30 Thread Tom McCabe
--- Alan Grimes <[EMAIL PROTECTED]> wrote: > om > > In this article I will quote and address some of the > issues raised > against my previous posting. I will then continue > with the planned > discussion of the current state of AI, I will also > survey some of the > choices available to the sin

Re: [singularity] Al's razor(1/2)

2007-07-29 Thread Tom McCabe
--- Alan Grimes <[EMAIL PROTECTED]> wrote: > om > > Today, I'm going to attempt to present an argument > in favor of a theory > that has resulted from my studies relating to AI. > While this is one of > the only things I have to show for my time spent on > AI. I am reasonably > confident in its

Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-13 Thread Tom McCabe
Is this a moderated list or not? - Tom --- Alan Grimes <[EMAIL PROTECTED]> wrote: > Jey Kottalam wrote: > > On 7/12/07, Alan Grimes <[EMAIL PROTECTED]> > wrote: > > >> White on black text, which I have to manually set > my X-term for every > >> time I open a fucking window on Linux is the best

Re: [singularity] JOIN POST (inspired by Alan & Jey in 'Previous message was a big hit, eh?')

2007-07-10 Thread Tom McCabe
Welcome to the Great Cause! For becoming familiar with the concepts of AGI and the Singularity, I recommend http://www.singinst.org/reading/corereading/. As for becoming an AGI designer, see http://www.singinst.org/aboutus/opportunities/research-fellow and http://www.sl4.org/wiki/SoYouWantToBeASeed

Re: [singularity] critiques of Eliezer's views on AI

2007-07-04 Thread Tom McCabe
<[EMAIL PROTECTED]> wrote: > On 7/4/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > --- Randall Randall <[EMAIL PROTECTED]> > > wrote: > > > > > > > > On Jul 4, 2007, at 1:14 AM, Tom McCabe wrote: > > > > > > >

Re: [singularity] critiques of Eliezer's views on AI

2007-07-04 Thread Tom McCabe
--- Randall Randall <[EMAIL PROTECTED]> wrote: > > On Jul 4, 2007, at 3:17 PM, Tom McCabe wrote: > > --- Randall Randall <[EMAIL PROTECTED]> > > wrote: > >> On Jul 4, 2007, at 1:14 AM, Tom McCabe wrote: > >>> That definition isn't accura

Re: [singularity] critiques of Eliezer's views on AI

2007-07-04 Thread Tom McCabe
--- Randall Randall <[EMAIL PROTECTED]> wrote: > > On Jul 4, 2007, at 1:14 AM, Tom McCabe wrote: > > > That definition isn't accurate, because it doesn't > > match what we intuitively see as 'death'. 'Death' > is > > actually

Re: [singularity] critiques of Eliezer's views on AI

2007-07-04 Thread Tom McCabe
Death isn't just the absence of life; it's the cessation of life that once existed. The Bootes Void, so far as we know, has no life at all, and yet nobody feels it is a great tragedy. - Tom --- MindInstance <[EMAIL PROTECTED]> wrote: > >> Objective observers care only about the type of a > pers

Re: [singularity] critiques of Eliezer's views on AI

2007-07-03 Thread Tom McCabe
information that makes up a sentient being's mind. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 04/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > Using that definition, everyone would die at an > age of > > a few months, because the brain's

RE: [singularity] AI concerns

2007-07-03 Thread Tom McCabe
--- Sergey Novitsky <[EMAIL PROTECTED]> wrote: > >Governments do not have a history of realizing the > >power of technology before it comes on the market. > > But this was not so with nuclear weapons... It was the physicists who first became aware of the power of nukes, and the physicists had t

Re: [singularity] critiques of Eliezer's views on AI

2007-07-03 Thread Tom McCabe
Using that definition, everyone would die at an age of a few months, because the brain's matter is regularly replaced by new organic chemicals. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 30/06/07, Heartland <[EMAIL PROTECTED]> > wrote: > > > Objective observers care only abo

RE: [singularity] AI concerns

2007-07-03 Thread Tom McCabe
--- "Sergey A. Novitsky" <[EMAIL PROTECTED]> wrote: > >> > >>Are these questions, statement, opinions, sound > bites or what? It seem a > >>bit of a stew. > Yes. A bit of everything indeed. Thanks for noting > the incoherency. > > >>> * As it already happened with nuclear > weapons, there ma

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Charles D Hixson <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > The problem isn't that the AGI will violate its > > original goals; it's that the AGI will eventually > do > > something that will destroy something really > important > > in

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Tom McCabe
--- Charles D Hixson <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > -... > > To quote: > > > > "I am not sure you are capable of following an > > argument" > > > > If I'm not capable of even following an argument, &

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > I think we're getting terms mixed up here. By > > "values", do you mean the "ends", the ultimate > moral > > objectives t

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > --- Jef Allbright <[EMAIL PROTECTED]> wrote: > > > > I hope that my response to Stathis might further > > > elucidate. > > >

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Stathis Papaioannou <[EMAIL PROTECTED]> > wrote: > > On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> > wrote: > > > > > While I agree with you in regard to decoupling > intelligence and any > > > particular goals, this doesn't mean goals ca

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe wrote: > > > > AGIs do not work in a "sensible" manner, because > they > > have no constraints that will force them to stay > > within the bounds of behavior that a human would > >

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > --- Jef Allbright <[EMAIL PROTECTED]> wrote: > > > > > For years I've observed and occasionally > > > participat

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > > > --- BillK <[EMAIL PROTECTED]> wrote: > > > > > On 7/1/07, Tom McCabe wrote: > > > > > > > > These rules exist only in your head. They > aren't &

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > The goals will be designed by humans, but the huge > > prior probability against the goals leading to an > AGI > > that does what people want

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > " > > I am not sure you are capable of following an > argument > > in a manner that makes it worth my while to > continue. > > > >

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > It would be vastly easier for a properly > programmed > > AGI to decipher what we meant that it would be for > > humans. The question is- w

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Tom McCabe
" I am not sure you are capable of following an argument in a manner that makes it worth my while to continue. - s" So, you're saying that I have no idea what I'm talking about, so therefore you're not going to bother arguing with me anymore. This is a classic example of an ad hominem argument. T

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
It is very coherent; however, I'm not sure how you would judge a goal's arbitrariness. From the human perspective it is rather arbitrary, since it's unrelated to most human desires. --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> > wrote: > > >

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > Are you suggesting that the AI won't be smart > enough > > > to understand > > > what people mean when they ask for a ban

Re: [singularity] Top AI Services to Humans

2007-07-02 Thread Tom McCabe
True, but an AGI can do all of that stuff a lot faster and easier than humans can, and I believe the original question was "what are the benefits of AGI"? - Tom --- Samantha Atkins <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > > > Okay, to start with: &

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Tom McCabe
--- Samantha Atkins <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > --- Samantha Atkins <[EMAIL PROTECTED]> wrote: > > > > > >> > >> Out of the bazillions of possible ways to > configure > >> matter only a > >>

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> > wrote: > > > If its top level goal is to allow its other goals > to vary randomly, > > then evolution will favour those AI's which decide > to spread and > > multiply, perhaps consuming humans in

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > For > > > example, if it has as its most important goal > > > obeying the commands of > > > humans, that's what i

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
d the AGI might possibly do in advance, and that isn't going to work. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > But killing someone and then beating them on the > > chessboard due to

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > The AGI doesn't care what any human, human > committee, > > or human government thinks; it simply follows its > own > > internal

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > > > These rules exist only in your head. They aren't > > written down anywhere, and they will not be > > transferred via osmosis into the AGI. > > > > They *are* written

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > The constraints of "don't shoot the opponent" > aren't > > written into the formal rules of chess; they exist > > only in your mind. If you claim otherwise, please > give

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 01:51:26PM -0700, Tom McCabe > wrote: > > > All of this applies only to implosion-type > devices, > > which are far more complicated and tricky to pull > off > > than gun-type devices,

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 08:56:24PM +1000, Stathis > Papaioannou wrote: > > > But the constraints of the problem are no less a > legitimate part of > > We're all solving the same problem: sustainable > self-replication > long-term. What does self-rep

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
The constraints of "don't shoot the opponent" aren't written into the formal rules of chess; they exist only in your mind. If you claim otherwise, please give me one chess tutorial that explicitly says "don't shoot the opponent". - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:44:09AM -0700, Tom McCabe > wrote: > > > > They also need knowledge, which is still largely > > > secret. > > > > Knowledge of *what*? How to build a crude gun to > fire &

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 07:01:07PM +1000, Stathis > Papaioannou wrote: > > > What sort of technical information, exactly, is > still secret after 50 years? > > The precise blueprint for a working device. Not a > crude gun assembler, > the full implos

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
physically hurt" the opponent exists only in your head; it doesn't exist in any chess rulebook and isn't automatically transferred to the AGI. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > &

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:45:20AM -0700, Tom McCabe > wrote: > > > Do you have any actual evidence for this? History > has > > shown that numbers made up on the spot with no > > experimental verification whatsoe

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:47:27AM -0700, Tom McCabe > wrote: > > > Because an AGI is an entirely different kind of > thing > > from evolution. AGI doesn't have to care what > > If it's being created b

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
Because an AGI is an entirely different kind of thing from evolution. AGI doesn't have to care what evolution is or how it works; there's no constraint on it whatsoever to act like evolution does. Evolution is actually nicer than most AGIs, because evolution is constrained by the need to have one v

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sat, Jun 30, 2007 at 10:11:20PM -0400, Alan > Grimes wrote: > > > =\ > > For the last several years, the limiting factor > has absolutely not been > > hardware. > > How many years? How much OPS, aggregated network and > memory bandwidth? > What is

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 10:24:17AM +1000, Stathis > Papaioannou wrote: > > > Nuclear weapons need a lot of capital and > resources to construct, > > They also need knowledge, which is still largely > secret. Knowledge of *what*? How to build a crude

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > Why do you assume that "win at any cost" is the > > > default around which > > > you need to work? > > > > Beca

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
What does Vista have to do with hardware development? Vista merely exploits hardware; it doesn't build it. If you want to measure hardware progress, you can just use some benchmarking program; you don't have to use OS hardware requirements as a proxy. - Tom --- Charles D Hixson <[EMAIL PROTECTED

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > But Deep Blue wouldn't try to poison Kasparov in > > > order to win the > > > game. This isn't because it isn't int

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
More like trying to stop nuclear annihilation if, before the discovery of the fission chain reaction, everything from your car to your toaster had parts built out of solid U-235. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Sergey A. Novitsky > <[EMAIL PROTECTED]> wrot

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > An excellent analogy to a superintelligent AGI is > a > > really good chess-playing computer program. The > > computer program doesn't r

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- "Sergey A. Novitsky" <[EMAIL PROTECTED]> wrote: > Dear all, > > Perhaps, the questions below were already touched > numerous times in the > past. > > Could someone kindly point to discussion threads > and/or articles where these > concerns were addressed or discussed? > > > > Kind regar

Re: [singularity] critiques of Eliezer's views on AI

2007-06-29 Thread Tom McCabe
I'm going to let the zombie thread die. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 29/06/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > But when you talk about "yourself", you mean the > > "yourself" of the copy

Re: Magickal consciousness stuff was Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Randall Randall <[EMAIL PROTECTED]> wrote: > > On Jun 28, 2007, at 11:26 PM, Tom McCabe wrote: > > --- Randall Randall <[EMAIL PROTECTED]> > > wrote: > >> and > >> What should a person before a copying experiment > >> expect to rememb

Re: Magickal consciousness stuff was Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Randall Randall <[EMAIL PROTECTED]> wrote: > On Jun 28, 2007, at 9:08 PM, Tom McCabe wrote: > > --- Randall Randall <[EMAIL PROTECTED]> > wrote: > >> On Jun 28, 2007, at 7:35 PM, Tom McCabe wrote: > >>> You're assuming again that conscio

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 29/06/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > I think > > it works better to look at it from the perspective > of > > the guy doing the upload rather than the guy being > > uploade

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 29/06/07, Niels-Jeroen Vandamme > > > Personally, I do not believe in coincidence. > Everything in the universe > > might seem stochastic, but it all has a logical > explanation. I believe the > > same applies to quantum chaos, though quant

Re: [singularity] Previous message was a big hit, eh?

2007-06-28 Thread Tom McCabe
--- Alan Grimes <[EMAIL PROTECTED]> wrote: > ;) > > Seriously now, Why do people insist there is a > necessary connection (as > in A implies B) between the singularity and brain > uploading? > > Why is it that anyone who thinks "the singularity > happens and most > people remain humanoid" is au

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 29/06/07, Charles D Hixson > <[EMAIL PROTECTED]> wrote: > > > > Yes, you would live on in one of the copies as > if uploaded, and yes > > > the selection of which copy would be purely > random, dependent on the > > > relative frequency of e

Re: Magickal consciousness stuff was Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Randall Randall <[EMAIL PROTECTED]> wrote: > On Jun 28, 2007, at 7:35 PM, Tom McCabe wrote: > > You're assuming again that consciousness is > conserved. > > I have no idea why you think so. I would say that > I think that each copy is conscious only of their

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
physical analogy I can think of is an electron annihilating a positron. Neither of the emitted gamma rays "is" the electron, as that wouldn't make any sense, but all the electron's energy, charge, quantum numbers and so forth are still there. - Tom --- Randall Randall <[EMAIL PR

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Randall Randall <[EMAIL PROTECTED]> wrote: > > On Jun 28, 2007, at 5:18 PM, Tom McCabe wrote: > > How do you get the "50% chance"? There is a 100% > > chance of a mind waking up who has been uploaded, > and > > also a 100% chance of a mind waking

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Niels-Jeroen Vandamme <[EMAIL PROTECTED]> wrote: > >This is a textbook case of what Eliezer calls > >"worshipping a sacred mystery". People tend to act > >like a theoretical problem is some kind of God, > >something above them in the social order, and since > >it's beaten others before you it

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
How do you get the "50% chance"? There is a 100% chance of a mind waking up who has been uploaded, and also a 100% chance of a mind waking up who hasn't. This doesn't violate the laws of probability because these aren't mutually exclusive. Asking which one "was you" is silly, because we're assuming

Re: [singularity] critiques of Eliezer's views on AI

2007-06-28 Thread Tom McCabe
--- Niels-Jeroen Vandamme <[EMAIL PROTECTED]> wrote: > >From: Charles D Hixson <[EMAIL PROTECTED]> > >Reply-To: singularity@v2.listbox.com > >To: singularity@v2.listbox.com > >Subject: Re: [singularity] critiques of Eliezer's > views on AI > >Date: Thu, 28 Jun 2007 09:56:12 -0700 > > > >Stathis P

Re: [singularity] Top AI Services to Humans

2007-06-26 Thread Tom McCabe
--- Michael LaTorra <[EMAIL PROTECTED]> wrote: > Hey Tom, > You wrote: > > On 6/26/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > > > --- Michael LaTorra <[EMAIL PROTECTED]> wrote: > > > > > Bill Hibbard (author of _Super

Re: [singularity] Top AI Services to Humans

2007-06-26 Thread Tom McCabe
--- Michael LaTorra <[EMAIL PROTECTED]> wrote: > Bill Hibbard (author of _Super-Intelligent Machines_ > and researcher in the > Machine Intelligence Project at the U. of Wisconsin) > wrote (see > http://www.ssec.wisc.edu:80/~billh/visfiles.html): > > "Currently, according to theory, every pair o

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-26 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Mon, Jun 25, 2007 at 11:53:09PM -0700, Tom McCabe > wrote: > > > Not so much "anesthetic" as "liquid helium", I > think, > > How about 20-30 sec of stopped blood flow. Instant > flat EEG.

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-26 Thread Tom McCabe
Ants I'm not sure about, but many species are still here only because we, as humans, are not simple optimization processes that turn everything they see into paperclips. Even so, we regularly do the exact same thing that people say AIs won't do- we bulldoze into some area, set up developments, and

Re: [singularity] critiques of Eliezer's views on AI

2007-06-26 Thread Tom McCabe
(sigh) That's not the point. What Gene Roddenberry thought, and whether Star Trek is real or not, are totally irrelevant to the ethical issue of whether "transportation" would be a good thing, and how it should be done to minimize any possible harmful effects. - Tom --- Colin Tate-Majcher <[EMAI

Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Tom McCabe
You're confusing memetics and genetics here, I think. We couldn't possibly have an evolutionary instinct to "believe in consciousness" because A), there's no selection pressure for it as hunter-gatherers don't think much about philosophy, and B) there hasn't been enough time for such an instinct to

Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Tom McCabe
Because otherwise it would be a copy and not a transfer. "Transfer" implies that it is moved from one place to another and so only one being can exist when the process is finished. - Tom --- Jey Kottalam <[EMAIL PROTECTED]> wrote: > On 6/25/07, Matt Mahoney <[EMAIL PROTECTED]> > wrote: > > > >

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Tom McCabe
Not so much "anesthetic" as "liquid helium", I think, to be quadruply sure that all brain activity has stopped and the physical self and virtual self don't diverge. People do have brain activity even while unconscious. - Tom --- Jey Kottalam <[EMAIL PROTECTED]> wrote: > On 6/25/07, Papiewski, J

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Tom McCabe
These questions, although important, have little to do with the feasibility of FAI. I think we can all agree that the space of possible universe configurations without sentient life of *any kind* is vastly larger than the space of possible configurations with sentient life, and designing an AGI to

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Tom McCabe
--- Samantha Atkins <[EMAIL PROTECTED]> wrote: > > On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote: > > > > > We can't "know it" in the sense of a mathematical > > proof, but it is a trivial observation that out of > the > > bazi

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-21 Thread Tom McCabe
--- Panu Horsmalahti <[EMAIL PROTECTED]> wrote: > An AGI is not selected by random from all possible > "minds", it is designed > by humans, therefore you can't apply the probability > from the assumption > that most AI's are unfriendly. True; there is likely some bias towards Friendliness in AIs

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-21 Thread Tom McCabe
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > (Echoing Joshua Fox's request:) Ben, could you > also tell us where you > > disagree with Eliezer? > > Eliezer and I disagree on very many points, and also > agree on very > many points, but I'll mention a few key points here. > > (I also not

Re: [singularity] What form will superAGI take?

2007-06-16 Thread Tom McCabe
--- Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- Mike Tintner <[EMAIL PROTECTED]> wrote: > > > Perhaps you've been through this - but I'd like to > know people's ideas about > > what exact physical form a Singulitarian or > near-Singul. AGI will take. And > > I'd like to know people's automati

Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Tom McCabe
--- Lúcio de Souza Coelho <[EMAIL PROTECTED]> wrote: > On 6/15/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > >How exactly do you control a megaton-size hunk of > >metal flying through the air at 10,000+ m/s? > > Clarifying this point on speed, in my view

Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Tom McCabe
ct is roughly inversely proportional to its size, because inertia goes up with r^3 while surface area (and hence drag) only goes up with r^2. - Tom --- Lúcio de Souza Coelho <[EMAIL PROTECTED]> wrote: > On 6/15/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > (...) > > Also,
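[A minimal sketch of the scaling argument in the preview above, assuming a roughly spherical body of radius r with uniform density and a drag force proportional to frontal area; these assumptions are mine, not stated in the thread:
\[ m \propto r^{3}, \qquad F_{\mathrm{drag}} \propto r^{2} \;\Longrightarrow\; a_{\mathrm{drag}} = F_{\mathrm{drag}}/m \propto r^{2}/r^{3} = 1/r \]
so the deceleration an object suffers from drag falls off roughly as 1/r, which is why a larger body is harder to slow or deflect aerodynamically.]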

Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Thu, Jun 14, 2007 at 11:05:23PM -0300, Lúcio de > Souza Coelho wrote: > > > >Check your energetics. Asteroid mining is > promising for space-based > > >construction. Otherwise you'd better at least > have controllable fusion > > >rockets. > > It

Re: [singularity] AI and politics

2007-06-07 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Thu, Jun 07, 2007 at 07:24:32AM -0700, Michael > Anissimov wrote: > > > >You've been sounding like a broken record for a > while. It's because > > >speed kills. What or who is doing the killing is > not important. > > > > Who needs politeness or r

Re: [singularity] Re: Personal attacks

2007-06-07 Thread Tom McCabe
--- Charles D Hixson <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > --- Eugen Leitl <[EMAIL PROTECTED]> wrote: > > > > > >> On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom > McCabe > >> wrote: > >> > >> > >

Re: [singularity] AI and politics

2007-06-07 Thread Tom McCabe
--- Michael Anissimov <[EMAIL PROTECTED]> wrote: > On 6/7/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > > > > You've been sounding like a broken record for a > while. It's because > > speed kills. What or who is doing the killing is > not important. > > Who needs politeness or respect for your fe

Re: [singularity] AI and politics

2007-06-07 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Thu, Jun 07, 2007 at 05:53:10AM -0700, Michael > Anissimov wrote: > > > If an AI can come up with better ideas for > improving our lives than we > > can, then wouldn't it make sense to pay attention > to it? Why should > > You've been sounding li

Re: [singularity] AI and politics

2007-06-07 Thread Tom McCabe
That would be nice, but unfortunately it's unrealistic. Just look at what medical science has done over the past millennium: 1. Totally wiped out smallpox, a huge killer. 2. Effectively wiped out many more diseases, such as measles, mumps, rubella, typhus, diphtheria, cholera, tetanus and many oth

Re: [singularity] Re: Personal attacks

2007-06-06 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe > wrote: > > > Unless, of course, that human turns out to be evil > and > > That's why you need to screen them, and build a group > with > checks and balances. I

Re: [singularity] Re: Personal attacks

2007-06-05 Thread Tom McCabe
Unless, of course, that human turns out to be evil and proceeds to use his power to create The Holocaust Part II. Seriously- out of all the people in positions of power, a very large number are nasty jerks who abuse that power. I can't think of a single great world power that has not committed atro
