RE: [singularity] AI concerns

2007-07-03 Thread Tom McCabe
--- Sergey Novitsky <[EMAIL PROTECTED]> wrote: > >Governments do not have a history of realizing the > >power of technology before it comes on the market. > > But this was not so with nuclear weapons... It was the physicists who first became aware of the power of nukes, and the physicists had t

RE: [singularity] AI concerns

2007-07-03 Thread Sergey Novitsky
Governments do not have a history of realizing the power of technology before it comes on the market. But this was not so with nuclear weapons... And with AGI, it's about something that has the potential to overthrow the world order (or at least the order within a single country). Would not the

RE: [singularity] AI concerns

2007-07-03 Thread Tom McCabe
--- "Sergey A. Novitsky" <[EMAIL PROTECTED]> wrote: > >> > >>Are these questions, statement, opinions, sound > bites or what? It seem a > >>bit of a stew. > Yes. A bit of everything indeed. Thanks for noting > the incoherency. > > >>> * As it already happened with nuclear > weapons, there ma

RE: [singularity] AI concerns

2007-07-03 Thread Sergey A. Novitsky
>> >>Are these questions, statements, opinions, sound bites or what? It seems a >>bit of a stew. Yes. A bit of everything indeed. Thanks for noting the incoherency. >>> * As it already happened with nuclear weapons, there may be >>> treaties constraining AI development. >>> >> >>Well we ha

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Charles D Hixson <[EMAIL PROTECTED]> wrote: > Tom McCabe wrote: > > The problem isn't that the AGI will violate its > > original goals; it's that the AGI will eventually > do > > something that will destroy something really > important > > in such a way as to satisfy all of its > constraints.

Re: [singularity] AI concerns

2007-07-02 Thread Charles D Hixson
Tom McCabe wrote: The problem isn't that the AGI will violate its original goals; it's that the AGI will eventually do something that will destroy something really important in such a way as to satisfy all of its constraints. By setting constraints on the AGI, you're trying to think of everything

Re: [singularity] AI concerns

2007-07-02 Thread Jef Allbright
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote: --- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > I think we're getting terms mixed up here. By > > "values", do you mean the "ends", the ultimate > moral > > objectives that the AGI has, thin

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > I think we're getting terms mixed up here. By > > "values", do you mean the "ends", the ultimate > moral > > objectives that the AGI has, things that the AGI > > thinks are good across all pos

Re: [singularity] AI concerns

2007-07-02 Thread Jef Allbright
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote: I think we're getting terms mixed up here. By "values", do you mean the "ends", the ultimate moral objectives that the AGI has, things that the AGI thinks are good across all possible situations? No, sorry. By "values", I mean something similar

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > --- Jef Allbright <[EMAIL PROTECTED]> wrote: > > > > I hope that my response to Stathis might further > > > elucidate. > > > > Er, okay. I read this email first. > > > > Might I suggest re

Re: [singularity] AI concerns

2007-07-02 Thread Jef Allbright
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote: --- Jef Allbright <[EMAIL PROTECTED]> wrote: > I hope that my response to Stathis might further > elucidate. Er, okay. I read this email first. Might I suggest reading an entire post for comprehension /before/ beginning to reply. I do app

Re: [singularity] AI concerns

2007-07-02 Thread Alan Grimes
>> http://wwwcsi.unian.it/educa/inglese/brownrob.html >> READ IT AND WEEP! > I read it and shook my head in amazement that you consider this an > argument in this context. My point is that the actual functional capacity of the brain (not counting internal redundancies) is much smaller

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/2/07, Stathis Papaioannou <[EMAIL PROTECTED]> > wrote: > > On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> > wrote: > > > > > While I agree with you in regard to decoupling > intelligence and any > > > particular goals, this doesn't mean goals ca

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/2/07, Tom McCabe wrote: > > > > AGIs do not work in a "sensible" manner, because > they > > have no constraints that will force them to stay > > within the bounds of behavior that a human would > > consider "sensible". > > > > > If you really mean the

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > --- Jef Allbright <[EMAIL PROTECTED]> wrote: > > > > > For years I've observed and occasionally > > > participated in these > > > discussions of humans (however augmented and/or > > > organiz

Re: [singularity] AI concerns

2007-07-02 Thread Jef Allbright
On 7/1/07, Tom McCabe <[EMAIL PROTECTED]> wrote: --- Jef Allbright <[EMAIL PROTECTED]> wrote: > For years I've observed and occasionally > participated in these > discussions of humans (however augmented and/or > organized) vis-à-vis > volitional superintelligent AI, and it strikes me as > quit

Re: [singularity] AI concerns

2007-07-02 Thread BillK
On 7/2/07, Tom McCabe wrote: AGIs do not work in a "sensible" manner, because they have no constraints that will force them to stay within the bounds of behavior that a human would consider "sensible". If you really mean the above, then I don't see why you are bothering to argue on this list

Re: [singularity] AI concerns

2007-07-02 Thread Jef Allbright
On 7/2/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> wrote: > While I agree with you in regard to decoupling intelligence and any > particular goals, this doesn't mean goals can be random or arbitrary. > To the extent that striving toward goals

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > > > --- BillK <[EMAIL PROTECTED]> wrote: > > > > > On 7/1/07, Tom McCabe wrote: > > > > > > > > These rules exist only in your head. They > aren't > > > > written down anywhere, and they will not be > > > > transferred via os

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > The goals will be designed by humans, but the huge > > prior probability against the goals leading to an > AGI > > that does what people want means that it takes a > heck > > of a lot

Re: [singularity] AI concerns

2007-07-02 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: The goals will be designed by humans, but the huge prior probability against the goals leading to an AGI that does what people want means that it takes a heck of a lot of design effort to accomplish that. Not as much design effort as building

Re: [singularity] AI concerns

2007-07-02 Thread BillK
On 7/1/07, Tom McCabe wrote: --- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > > > These rules exist only in your head. They aren't > > written down anywhere, and they will not be > > transferred via osmosis into the AGI. > > > > They *are* written down. > I just quoted fr

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > It would be vastly easier for a properly > programmed > > AGI to decipher what we meant than it would be for > > humans. The question is: why would the AGI want to > > decipher what hu

Re: [singularity] AI concerns

2007-07-02 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: It would be vastly easier for a properly programmed AGI to decipher what we meant than it would be for humans. The question is: why would the AGI want to decipher what humans mean, as opposed to the other 2^1,000,000,000 things it could be doing?
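
For scale, here is a toy version of that combinatorial argument in Python. The numbers are illustrative assumptions only: treat a goal specification as a string of 10^9 bits (matching the 2^1,000,000,000 above), and grant, very generously, that 2^(9*10^8) of the possible goal systems are human-friendly.

# Toy combinatorics for the goal-space argument (all numbers assumed).
total_bits = 10**9             # assumed length of a goal-system specification
friendly_bits = 9 * 10**8      # assume a generous 2**friendly_bits friendly goals

# P(uniformly random goal is friendly) = 2**friendly_bits / 2**total_bits,
# so the base-2 log of the probability is just the difference of exponents.
log2_p = friendly_bits - total_bits
print(f"P(random goal is human-friendly) = 2^{log2_p}")
# -> P(random goal is human-friendly) = 2^-100000000

Even under absurdly favorable assumptions the probability is negligible, which is the point at issue: a human-friendly goal system has to be hit by deliberate design, not by luck.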

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
It is very coherent; however, I'm not sure how you would judge a goal's arbitrariness. From the human perspective it is rather arbitrary, since it's unrelated to most human desires. --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> > wrote: > > >

Re: [singularity] AI concerns

2007-07-02 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > Are you suggesting that the AI won't be smart > enough > > > to understand > > > what people mean when they ask for a banana? > > > > It's not a question of intelligence; it's a > qu

Re: [singularity] AI concerns

2007-07-02 Thread Samantha Atkins
Charles D Hixson wrote: Samantha Atkins wrote: Sergey A. Novitsky wrote: Dear all, ... o Be deprived of free will or be given limited free will (if such a concept is applicable to AI). See above, no effective means of control. - samantha There is *one* effective me

Re: [singularity] AI concerns

2007-07-02 Thread Stathis Papaioannou
On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> wrote: While I agree with you in regard to decoupling intelligence and any particular goals, this doesn't mean goals can be random or arbitrary. To the extent that striving toward goals (more realistically: promotion of values) is supportable by int

Re: [singularity] AI concerns

2007-07-02 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > Are you suggesting that the AI won't be smart enough > to understand > what people mean when they ask for a banana? It's not a question of intelligence; it's a question of selecting a human-friendly target in a huge space of possibilities. Wh

Re: [singularity] AI concerns

2007-07-02 Thread Samantha Atkins
Alan Grimes wrote: Samantha Atkins wrote: Alan Grimes wrote: Available computing power doesn't yet match that of the human brain, but I see your point, What makes you so sure of that? It has been computed countless times, here and elsewhere, as I am sure you

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> > wrote: > > > If its top-level goal is to allow its other goals > to vary randomly, > > then evolution will favour those AIs which decide > to spread and > > multiply, perhaps consuming humans in

Re: [singularity] AI concerns

2007-07-01 Thread Jef Allbright
On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: If its top-level goal is to allow its other goals to vary randomly, then evolution will favour those AIs which decide to spread and multiply, perhaps consuming humans in the process. Building an AI like this would be like building a bomb

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > For > > > example, if it has as its most important goal > > > obeying the commands of > > > humans, that's what it will do. > > > > Yup. For example, if a human said "I want a > bana

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > For > example, if it has as its most important goal > obeying the commands of > humans, that's what it will do. Yup. For example, if a human said "I want a banana", the fastest way for the AGI to get the human a banana may be to detonate a kilo

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
The problem isn't that the AGI will violate its original goals; it's that the AGI will eventually do something that will destroy something really important in such a way as to satisfy all of its constraints. By setting constraints on the AGI, you're trying to think of everything bad the AGI might p

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > The AGI doesn't care what any human, human > committee, > > or human government thinks; it simply follows its > own > > internal rules. > > Sure, but its internal rules and goals migh

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: The AGI doesn't care what any human, human committee, or human government thinks; it simply follows its own internal rules. Sure, but its internal rules and goals might be specified in such a way as to make it refrain from acting in a particul

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: But killing someone and then beating them on the chessboard due to the lack of opposition does count as winning under the formal rules of chess, since there's nothing in the rules of chess about killing the opponent. The rule "don't kill, strangl

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > But in the final analysis, the AI would be able to be implemented as > code in a general purpose language on a general purpose computer with Absolutely not. Possibly, something like a silicon compiler with billions to trillions asynchronous

Re: [singularity] AI concerns

2007-07-01 Thread Alan Grimes
BillK wrote: > On 7/1/07, Tom McCabe wrote: >> These rules exist only in your head. They aren't >> written down anywhere, and they will not be >> transferred via osmosis into the AGI. > They *are* written down. > I just quoted from the FIDE laws of chess. > And they would be given to the AGI alon

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > > > These rules exist only in your head. They aren't > > written down anywhere, and they will not be > > transferred via osmosis into the AGI. > > > > They *are* written down. > I just quoted from the FIDE laws of chess. > A

Re: [singularity] AI concerns

2007-07-01 Thread BillK
On 7/1/07, Tom McCabe wrote: These rules exist only in your head. They aren't written down anywhere, and they will not be transferred via osmosis into the AGI. They *are* written down. I just quoted from the FIDE laws of chess. And they would be given to the AGI along with the rest of the rul

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > The constraints of "don't shoot the opponent" > aren't > > written into the formal rules of chess; they exist > > only in your mind. If you claim otherwise, please > give > > me one chess tutorial that explicitly says "don't

Re: [singularity] AI concerns

2007-07-01 Thread BillK
On 7/1/07, Tom McCabe wrote: The constraints of "don't shoot the opponent" aren't written into the formal rules of chess; they exist only in your mind. If you claim otherwise, please give me one chess tutorial that explicitly says "don't shoot the opponent". This is just silly. All competiti

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 01:51:26PM -0700, Tom McCabe > wrote: > > > All of this applies only to implosion-type > devices, > > which are far more complicated and tricky to pull > off > > than gun-type devices, and which are therefore > > unlikely to be

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 08:56:24PM +1000, Stathis > Papaioannou wrote: > > > But the constraints of the problem are no less a > legitimate part of > > We're all solving the same problem: sustainable > self-replication > long-term. What does self-rep

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
The constraints of "don't shoot the opponent" aren't written into the formal rules of chess; they exist only in your mind. If you claim otherwise, please give me one chess tutorial that explicitly says "don't shoot the opponent". - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:44:09AM -0700, Tom McCabe > wrote: > > > > They also need knowledge, which is still largely > > > secret. > > > > Knowledge of *what*? How to build a crude gun to > fire > > one block of cast metal into another block of cas

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Samantha Atkins wrote: Sergey A. Novitsky wrote: Dear all, ... o Be deprived of free will or be given limited free will (if such a concept is applicable to AI). See above, no effective means of control. - samantha There is *one* effective means of control: An AI wil

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Peter Voss wrote: Just for the record: To put it mildly, not everyone is 'Absolutely' sure that AGI can't be implemented on Bill's computer. In fact, some of us are pretty certain that (a) current hardware is adequate, and (b) AGI software will be with us in (much) less than 10 years. Some

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 07:01:07PM +1000, Stathis > Papaioannou wrote: > > > What sort of technical information, exactly, is > still secret after 50 years? > > The precise blueprint for a working device. Not a > crude gun assembly, > the full implos

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
But killing someone and then beating them on the chessboard due to the lack of opposition does count as winning under the formal rules of chess, since there's nothing in the rules of chess about killing the opponent. The rule "don't kill, strangle, drug, maim, injure, or otherwise physically hurt"
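
A minimal sketch of that point in Python, against a hypothetical Board interface (legal_moves, apply, evaluate, and is_game_over are illustrative assumptions, not any particular engine's API). The search optimizes a utility defined only over board states and the formal move rules, so a constraint like "don't harm the opponent" is not merely unenforced; it has no representation in the objective at all.

# Hypothetical game-tree search: every term in the objective is a board state.
def best_move(board, depth, maximizing=True):
    # Base case: the evaluation function sees only the position on the
    # board; nothing outside the formal game state can enter the score.
    if depth == 0 or board.is_game_over():
        return None, board.evaluate()
    best, best_score = None, float("-inf") if maximizing else float("inf")
    for move in board.legal_moves():   # the formal rules are the only constraints
        _, score = best_move(board.apply(move), depth - 1, not maximizing)
        better = score > best_score if maximizing else score < best_score
        if better:
            best, best_score = move, score
    return best, best_score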

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Samantha Atkins wrote: Charles D Hixson wrote: Stathis Papaioannou wrote: Available computing power doesn't yet match that of the human brain, but I see your point, software (in general) isn't getting better nearly as quickly as hardware is getting better. Well, not at the personally accessib

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 01:51:26PM -0700, Tom McCabe wrote: > All of this applies only to implosion-type devices, > which are far more complicated and tricky to pull off > than gun-type devices, and which are therefore > unlikely to be used. We're arguing something tedious and meaningless. The

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:45:20AM -0700, Tom McCabe > wrote: > > > Do you have any actual evidence for this? History > has > > shown that numbers made up on the spot with no > > experimental verification whatsoever don't work > well. > > You need 10

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:47:27AM -0700, Tom McCabe > wrote: > > > Because an AGI is an entirely different kind of > thing > > from evolution. AGI doesn't have to care what > > If it's being created by an evolutionary context What does that even me

Re: [singularity] AI concerns

2007-07-01 Thread Alan Grimes
Eugen Leitl wrote: > On Sun, Jul 01, 2007 at 11:11:06PM +1000, Stathis Papaioannou wrote: >> But in the final analysis, the AI would be able to be implemented as >> code in a general purpose language on a general purpose computer with > Absolutely not. Possibly, something like a silicon compiler

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 10:08:53AM -0700, Peter Voss wrote: > Just for the record: To put it mildly, not everyone is 'Absolutely' sure > that AGI can't be implemented on Bill's computer. What is Bill's computer? A 3 PFlop Blue Gene/P? A Core 2 Duo box from Dell? > In fact, some of us are pretty

RE: [singularity] AI concerns

2007-07-01 Thread Peter Voss
...some people may be very, very surprised. -Original Message- From: Eugen Leitl [mailto:[EMAIL PROTECTED] Sent: Sunday, July 01, 2007 9:35 AM To: singularity@v2.listbox.com Subject: Re: [singularity] AI concerns On Sun, Jul 01, 2007 at 11:11:06PM +1000, Stathis Papaioannou wrote: > Bu

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 11:11:06PM +1000, Stathis Papaioannou wrote: > But in the final analysis, the AI would be able to be implemented as > code in a general purpose language on a general purpose computer with Absolutely not. Possibly, something like a silicon compiler with billions to trillion

Re: [singularity] AI concerns

2007-07-01 Thread Alan Grimes
Samantha Atkins wrote: > Alan Grimes wrote: >>> Available computing power doesn't yet match that of the human brain, >>> but I see your point, >> What makes you so sure of that? > It has been computed countless times here and elsewhere that I am sure > you are aware of so why do you ask? http://

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: I'm not sure all-purpose hardware would be suitable [for AI]. It depends very much upon which computing paradigm is dominant by that time (40-50 years away from now). Judging from the past, there might not be that much progress there. But in

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 08:56:24PM +1000, Stathis Papaioannou wrote: > But the constraints of the problem are no less a legitimate part of We're all solving the same problem: sustainable self-replication long-term. Artificial systems so far are only instruments (though they do evolve in the human

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > that disabling your opponent would be helpful, it's because the > problem it is applying its intelligence to is winning according to the > formal rules of chess. Winning at any cost might look like the same > problem to us vague humans, but i

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 12:44:09AM -0700, Tom McCabe wrote: > > They also need knowledge, which is still largely > > secret. > > Knowledge of *what*? How to build a crude gun to fire > one block of cast metal into another block of cast > metal? How about gas centrifuges, materials resistant to U

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > If its goal is "achieve x using whatever means > necessary" and x is > "win at chess using only the formal rules of chess", > then it would > fail if it won by using some means extraneous to the > formal rules of > chess, just as surely as it

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 07:01:07PM +1000, Stathis Papaioannou wrote: > What sort of technical information, exactly, is still secret after 50 years? The precise blueprint for a working device. Not a crude gun assembly, the full implosion assembly monty. The HE lens geometries, the timing, the me

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 12:45:20AM -0700, Tom McCabe wrote: > Do you have any actual evidence for this? History has > shown that numbers made up on the spot with no > experimental verification whatsoever don't work well. You need 10^17 bits and 10^23 ops to more or less accurately represent and t
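
For scale, a back-of-envelope check of those two magnitudes in Python. The machine figures are assumptions for illustration (roughly a 2007 Blue Gene/L-class system: ~3.6*10^14 ops/s, ~7*10^13 bytes of RAM), not numbers taken from the thread.

# Rough scale comparison for the 10^17-bit / 10^23-op claim (figures assumed).
BITS_CLAIMED = 1e17            # bits claimed to represent a brain
OPS_CLAIMED = 1e23             # total operations claimed

MACHINE_OPS_PER_SEC = 3.6e14   # assumed ~360 TFLOPS supercomputer (2007 class)
MACHINE_RAM_BYTES = 7e13       # assumed ~70 TB aggregate RAM

bytes_needed = BITS_CLAIMED / 8
ram_ratio = bytes_needed / MACHINE_RAM_BYTES
years = OPS_CLAIMED / MACHINE_OPS_PER_SEC / (3600 * 24 * 365)

print(f"Storage: {bytes_needed:.1e} bytes, ~{ram_ratio:.0f}x the machine's RAM")
print(f"Compute: ~{years:.1f} machine-years if 10^23 ops is a one-off total")

On those assumed figures the task sits one to two orders of magnitude beyond a single 2007 machine, which is roughly the gap the hardware-adequacy argument in this thread keeps circling.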

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: They also need knowledge, which is still largely secret. The number of people who could construct a crude working nuclear weapon is below 1% (possibly, way below 1%), and an advanced fusion weapon in ppm range. What sort of technical informat

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 12:47:27AM -0700, Tom McCabe wrote: > Because an AGI is an entirely different kind of thing > from evolution. AGI doesn't have to care what If it's being created by an evolutionary context and is competing with others like it, it is precisely evolution, only with a giant fitness

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
Because an AGI is an entirely different kind of thing from evolution. AGI doesn't have to care what evolution is or how it works; there's no constraint on it whatsoever to act like evolution does. Evolution is actually nicer than most AGIs, because evolution is constrained by the need to have one v

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sat, Jun 30, 2007 at 10:11:20PM -0400, Alan > Grimes wrote: > > > =\ > > For the last several years, the limiting factor > has absolutely not been > > hardware. > > How many years? How much OPS, aggregated network and > memory bandwidth? > What is

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 10:24:17AM +1000, Stathis > Papaioannou wrote: > > > Nuclear weapons need a lot of capital and > resources to construct, > > They also need knowledge, which is still largely > secret. Knowledge of *what*? How to build a crude

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 11:35:21AM +1000, Stathis Papaioannou wrote: > Why do you assume that "win at any cost" is the default around which > you need to work? Because these are the rules of the game for a few GYrs. Why do you assume that these have changed?

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sat, Jun 30, 2007 at 10:11:20PM -0400, Alan Grimes wrote: > =\ > For the last several years, the limiting factor has absolutely not been > hardware. How many years? How much OPS, aggregated network and memory bandwidth? What is your evidence for your claim? > How much hardware do you claim y

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 10:37:43AM +1000, Stathis Papaioannou wrote: > But Deep Blue wouldn't try to poison Kasparov in order to win the > game. This isn't because it isn't intelligent enough to figure out Yes, it is precisely because the system is not intelligent enough. > that disabling your o

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 10:24:17AM +1000, Stathis Papaioannou wrote: > Nuclear weapons need a lot of capital and resources to construct, They also need knowledge, which is still largely secret. The number of people who could construct a crude working nuclear weapon is below 1% (possibly, way belo

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Alan Grimes wrote: Available computing power doesn't yet match that of the human brain, but I see your point, What makes you so sure of that? It has been computed countless times, here and elsewhere, as I am sure you are aware, so why do you ask?

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Charles D Hixson wrote: Stathis Papaioannou wrote: Available computing power doesn't yet match that of the human brain, but I see your point, software (in general) isn't getting better nearly as quickly as hardware is getting better. Well, not at the personally accessible level. I understand

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Sergey A. Novitsky wrote: Dear all, Perhaps the questions below were already touched upon numerous times in the past. Could someone kindly point to discussion threads and/or articles where these concerns were addressed or discussed? Kind regards, Serge --

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > Why do you assume that "win at any cost" is the > > > default around which > > > you need to work? > > > > Because it corresponds to the behavior of the > vast, > > vast majority of

Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou
On 01/07/07, Alan Grimes <[EMAIL PROTECTED]> wrote: > Available computing power doesn't yet match that of the human brain, > but I see your point, What makes you so sure of that? What's the latest estimate of the processing capacity of the human brain as compared to that of available computer

Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou
On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > Why do you assume that "win at any cost" is the > default around which > you need to work? Because it corresponds to the behavior of the vast, vast majority of possible AGI systems. Is there a single AGI design now in existence which wouldn't

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
What does Vista have to do with hardware development? Vista merely exploits hardware; it doesn't build it. If you want to measure hardware progress, you can just use some benchmarking program; you don't have to use OS hardware requirements as a proxy. - Tom --- Charles D Hixson <[EMAIL PROTECTED

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > > But Deep Blue wouldn't try to poison Kasparov in > > > order to win the > > > game. This isn't because it isn't intelligent > enough > > > to figure out > > > that disabling your opp

Re: [singularity] AI concerns

2007-06-30 Thread Alan Grimes
> Available computing power doesn't yet match that of the human brain, > but I see your point, What makes you so sure of that? -- Opera: Sing it loud! :o( )>-<

Re: [singularity] AI concerns

2007-06-30 Thread Charles D Hixson
Stathis Papaioannou wrote: On 01/07/07, Alan Grimes <[EMAIL PROTECTED]> wrote: For the last several years, the limiting factor has absolutely not been hardware. How much hardware do you claim you need to develop a hard AI? Available computing power doesn't yet match that of the human brain, but

Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou
On 01/07/07, Alan Grimes <[EMAIL PROTECTED]> wrote: For the last several years, the limiting factor has absolutely not been hardware. How much hardware do you claim you need to develop a hard AI? Available computing power doesn't yet match that of the human brain, but I see your point, software

Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou
On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > But Deep Blue wouldn't try to poison Kasparov in > order to win the > game. This isn't because it isn't intelligent enough > to figure out > that disabling your opponent would be helpful, it's > because the > problem it is applying its intelli

Re: [singularity] AI concerns

2007-06-30 Thread Alan Grimes
Stathis Papaioannou wrote: >> If AI is going to be super-intelligent, it may be treated by >> governments as >> some sort of super-weapon. >> As it already happened with nuclear weapons, there may be treaties >> constraining AI development. > Nuclear weapons need a lot of capital and resources to

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
More like trying to stop nuclear annihilation if, before the discovery of the fission chain reaction, everything from your car to your toaster had parts built out of solid U-235. - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Sergey A. Novitsky > <[EMAIL PROTECTED]> wrot

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> > wrote: > > > An excellent analogy to a superintelligent AGI is > a > > really good chess-playing computer program. The > > computer program doesn't realize you're there, it > > doesn't know you're

Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou
On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: An excellent analogy to a superintelligent AGI is a really good chess-playing computer program. The computer program doesn't realize you're there, it doesn't know you're human, it doesn't even know what the heck a human is, and it would gladly p

Re: [singularity] AI concerns

2007-06-30 Thread Stathis Papaioannou
On 01/07/07, Sergey A. Novitsky <[EMAIL PROTECTED]> wrote: If AI is going to be super-intelligent, it may be treated by governments as some sort of super-weapon. As it already happened with nuclear weapons, there may be treaties constraining AI development. Nuclear weapons need a lot of capita

Re: [singularity] AI concerns

2007-06-30 Thread Tom McCabe
--- "Sergey A. Novitsky" <[EMAIL PROTECTED]> wrote: > Dear all, > > Perhaps, the questions below were already touched > numerous times in the > past. > > Could someone kindly point to discussion threads > and/or articles where these > concerns were addressed or discussed? > > > > Kind regar