Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Jef Allbright <[EMAIL PROTECTED]> wrote: > On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > > If its top level goal is to allow its other goals to vary randomly, then evolution will favour those AIs which decide to spread and multiply, perhaps consuming humans in the process.

Re: [singularity] AI concerns

2007-07-01 Thread Jef Allbright
On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: If its top level goal is to allow its other goals to vary randomly, then evolution will favour those AIs which decide to spread and multiply, perhaps consuming humans in the process. Building an AI like this would be like building a bomb
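A toy simulation makes the selection argument above concrete (purely illustrative; every number below is invented, and the model is no one's proposed design): when goals vary randomly and a goal can affect how many copies of an agent persist, the variants whose goals happen to favour spreading dominate under any resource cap.

import random

# Toy model: each agent carries a "replication drive" in [0, 1] that
# mutates randomly each generation. Nobody programs expansion in;
# high-drive lineages simply out-copy the rest under a resource cap.
random.seed(0)
population = [random.random() for _ in range(100)]  # initial random drives

for generation in range(50):
    next_gen = []
    for drive in population:
        copies = 2 if random.random() < drive else 1  # drive earns an extra copy
        for _ in range(copies):
            mutated = drive + random.gauss(0, 0.05)   # random goal variation
            next_gen.append(min(1.0, max(0.0, mutated)))
    random.shuffle(next_gen)
    population = next_gen[:100]                       # fixed resource cap

print(sum(population) / len(population))  # mean drive has climbed toward 1.0

The drift toward expansion comes entirely from the copy step, not from any goal anyone wrote down.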

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > > For example, if it has as its most important goal obeying the commands of humans, that's what it will do. > Yup. For example, if a human said "I want a banana

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > For example, if it has as its most important goal obeying the commands of humans, that's what it will do. Yup. For example, if a human said "I want a banana", the fastest way for the AGI to get the human a banana may be to detonate a kilo

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
The problem isn't that the AGI will violate its original goals; it's that the AGI will eventually do something that will destroy something really important in such a way as to satisfy all of its constraints. By setting constraints on the AGI, you're trying to think of everything bad the AGI might p
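A toy sketch of that whack-a-mole problem (the plans, scores, and constraints below are invented for illustration): a maximizer that honours every listed constraint still picks the catastrophic plan nobody thought to forbid.

# Hypothetical planner: pick the highest-scoring plan that satisfies
# every explicit constraint. Anything not forbidden is permitted.
plans = {
    "walk to store, buy banana":         {"speed": 1, "harms_human": False, "breaks_law": False},
    "rob store at gunpoint":             {"speed": 5, "harms_human": True,  "breaks_law": True},
    "bulldoze orchard, take one banana": {"speed": 9, "harms_human": False, "breaks_law": False},
}

# The designers anticipated these two failure modes, but only these two.
constraints = [
    lambda p: not p["harms_human"],
    lambda p: not p["breaks_law"],
]

def best_plan(plans, constraints):
    allowed = {name: p for name, p in plans.items()
               if all(c(p) for c in constraints)}
    return max(allowed, key=lambda name: allowed[name]["speed"])

print(best_plan(plans, constraints))
# -> "bulldoze orchard, take one banana": every constraint satisfied,
#    yet something the designers cared about but never listed is destroyed.

Adding a third constraint only moves the problem to the fourth loophole: the space of plans is open-ended and the constraint list is not.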

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > > The AGI doesn't care what any human, human committee, or human government thinks; it simply follows its own internal rules. > Sure, but its internal rules and goals might

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > The AGI doesn't care what any human, human committee, or human government thinks; it simply follows its own internal rules. Sure, but its internal rules and goals might be specified in such a way as to make it refrain from acting in a particular way

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > But killing someone and then beating them on the chessboard due to the lack of opposition does count as winning under the formal rules of chess, since there's nothing in the rules of chess about killing the opponent. The rule "don't kill, strangle

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 02/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > But in the final analysis, the AI would be able to be implemented as code in a general purpose language on a general purpose computer with Absolutely not. Possibly, something like a silicon compiler with billions to trillions asynchronous

Re: [singularity] AI concerns

2007-07-01 Thread Alan Grimes
BillK wrote: > On 7/1/07, Tom McCabe wrote: >> These rules exist only in your head. They aren't written down anywhere, and they will not be transferred via osmosis into the AGI. > They *are* written down. I just quoted from the FIDE laws of chess. And they would be given to the AGI along

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > These rules exist only in your head. They aren't written down anywhere, and they will not be transferred via osmosis into the AGI. > They *are* written down. I just quoted from the FIDE laws of chess. And

Re: [singularity] AI concerns

2007-07-01 Thread BillK
On 7/1/07, Tom McCabe wrote: > These rules exist only in your head. They aren't written down anywhere, and they will not be transferred via osmosis into the AGI. They *are* written down. I just quoted from the FIDE laws of chess. And they would be given to the AGI along with the rest of the rules

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- BillK <[EMAIL PROTECTED]> wrote: > On 7/1/07, Tom McCabe wrote: > > The constraints of "don't shoot the opponent" aren't written into the formal rules of chess; they exist only in your mind. If you claim otherwise, please give me one chess tutorial that explicitly says "don't shoot the opponent".

Re: [singularity] AI concerns

2007-07-01 Thread BillK
On 7/1/07, Tom McCabe wrote: > The constraints of "don't shoot the opponent" aren't written into the formal rules of chess; they exist only in your mind. If you claim otherwise, please give me one chess tutorial that explicitly says "don't shoot the opponent". This is just silly. All competiti

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 01:51:26PM -0700, Tom McCabe wrote: > > All of this applies only to implosion-type devices, which are far more complicated and tricky to pull off than gun-type devices, and which are therefore unlikely to be used

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 08:56:24PM +1000, Stathis Papaioannou wrote: > > But the constraints of the problem are no less a legitimate part of > We're all solving the same problem: sustainable self-replication long-term. What does self-replication

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
The constraints of "don't shoot the opponent" aren't written into the formal rules of chess; they exist only in your mind. If you claim otherwise, please give me one chess tutorial that explicitly says "don't shoot the opponent". - Tom --- Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 01/

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:44:09AM -0700, Tom McCabe wrote: > > > They also need knowledge, which is still largely secret. > > Knowledge of *what*? How to build a crude gun to fire one block of cast metal into another block of cast metal?

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Samantha Atkins wrote: > Sergey A. Novitsky wrote: > > Dear all, ... o Be deprived of free will or be given limited free will (if such a concept is applicable to AI). > See above, no effective means of control. - samantha There is *one* effective means of control: An AI will

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Peter Voss wrote: > Just for the record: To put it mildly, not everyone is 'Absolutely' sure that AGI can't be implemented on Bill's computer. In fact, some of us are pretty certain that (a) current hardware is adequate, and (b) AGI software will be with us in (much) less than 10 years. Some

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 07:01:07PM +1000, Stathis Papaioannou wrote: > > What sort of technical information, exactly, is still secret after 50 years? > The precise blueprint for a working device. Not a crude gun assembler, the full implosion

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
But killing someone and then beating them on the chessboard due to the lack of opposition does count as winning under the formal rules of chess, since there's nothing in the rules of chess about killing the opponent. The rule "don't kill, strangle, drug, maim, injure, or otherwise physically hurt"
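The claim is easy to check against an actual machine encoding of the rules. A minimal sketch using the python-chess library (chosen for convenience; any formal encoding makes the same point):

import chess  # pip install chess

board = chess.Board()
print(len(list(board.legal_moves)))                      # 20 legal opening moves
print(chess.Move.from_uci("e2e4") in board.legal_moves)  # True

# Every rule the engine knows is a predicate over board states.
# Nothing in the encoding refers to the opponent as a person, so
# "don't shoot the opponent" is neither enforced nor enforceable
# here; it is simply not expressible in the formal rules.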

Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson
Samantha Atkins wrote: Charles D Hixson wrote: Stathis Papaioannou wrote: Available computing power doesn't yet match that of the human brain, but I see your point, software (in general) isn't getting better nearly as quickly as hardware is getting better. Well, not at the personally accessible

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 01:51:26PM -0700, Tom McCabe wrote: > All of this applies only to implosion-type devices, which are far more complicated and tricky to pull off than gun-type devices, and which are therefore unlikely to be used. We're arguing something tedious, and meaningless. The

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:45:20AM -0700, Tom McCabe wrote: > > Do you have any actual evidence for this? History has shown that numbers made up on the spot with no experimental verification whatsoever don't work well. > You need 10^17 bits and 10^23 ops

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 12:47:27AM -0700, Tom McCabe wrote: > > Because an AGI is an entirely different kind of thing from evolution. AGI doesn't have to care what > If it's being created by an evolutionary context What does that even mean

Re: [singularity] AI concerns

2007-07-01 Thread Alan Grimes
Eugen Leitl wrote: > On Sun, Jul 01, 2007 at 11:11:06PM +1000, Stathis Papaioannou wrote: >> But in the final analysis, the AI would be able to be implemented as code in a general purpose language on a general purpose computer with > Absolutely not. Possibly, something like a silicon compiler

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 10:08:53AM -0700, Peter Voss wrote: > Just for the record: To put it mildly, not everyone is 'Absolutely' sure that AGI can't be implemented on Bill's computer. What is Bill's computer? A 3 PFlop Blue Gene/P? A Core 2 Duo box from Dell? > In fact, some of us are pretty

RE: [singularity] AI concerns

2007-07-01 Thread Peter Voss
Just for the record: To put it mildly, not everyone is 'Absolutely' sure that AGI can't be implemented on Bill's computer. In fact, some of us are pretty certain that (a) current hardware is adequate, and (b) AGI software will be with us in (much) less than 10 years. Some people may be very,

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 11:11:06PM +1000, Stathis Papaioannou wrote: > But in the final analysis, the AI would be able to be implemented as code in a general purpose language on a general purpose computer with Absolutely not. Possibly, something like a silicon compiler with billions to trillions
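For scale, a back-of-envelope sketch of why the two sides disagree about "at most a slowdown" (all four figures below are assumptions of mine, not numbers from the thread): a general-purpose CPU can emulate a massively parallel asynchronous substrate, but serialization decides whether the result is usable.

# Illustrative arithmetic only; every figure here is assumed.
cells          = 1e12   # asynchronous cells in a hypothetical substrate
cell_rate_hz   = 1e6    # native update rate per cell
ops_per_update = 10     # CPU ops to emulate one cell update
cpu_ops_per_s  = 1e10   # rough throughput of a 2007-era core

required = cells * cell_rate_hz * ops_per_update  # 1e19 ops/sec needed
slowdown = required / cpu_ops_per_s               # 1e9x below realtime
print(f"slowdown vs realtime: {slowdown:.0e}x")

Whether a factor like that counts as "a slowdown" or as "absolutely not" is exactly the point in dispute.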

Re: [singularity] AI concerns

2007-07-01 Thread Alan Grimes
Samantha Atkins wrote: > Alan Grimes wrote: >>> Available computing power doesn't yet match that of the human brain, but I see your point, >> What makes you so sure of that? > It has been computed countless times here and elsewhere that I am sure you are aware of, so why do you ask? http://

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > I'm not sure all-purpose hardware would be suitable [for AI]. It depends very much upon which computing paradigm is dominant by that time (40-50 years away from now). Judging from the past, there might not be that much progress there. But in the final analysis, the AI would be able to be implemented as code in a general purpose language on a general purpose computer with

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 08:56:24PM +1000, Stathis Papaioannou wrote: > But the constraints of the problem are no less a legitimate part of We're all solving the same problem: sustainable self-replication long-term. Artificial systems so far are only instruments (though they do evolve in the human

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > that disabling your opponent would be helpful, it's because the problem it is applying its intelligence to is winning according to the formal rules of chess. Winning at any cost might look like the same problem to us vague humans, but i

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 12:44:09AM -0700, Tom McCabe wrote: > > They also need knowledge, which is still largely secret. > Knowledge of *what*? How to build a crude gun to fire one block of cast metal into another block of cast metal? How about gas centrifuges, materials resistant to U

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote: > If its goal is "achieve x using whatever means necessary" and x is "win at chess using only the formal rules of chess", then it would fail if it won by using some means extraneous to the formal rules of chess, just as surely as it

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 07:01:07PM +1000, Stathis Papaioannou wrote: > What sort of technical information, exactly, is still secret after 50 years? The precise blueprint for a working device. Not a crude gun assembler, the full implosion assembly monty. The HE lens geometries, the timing, the me

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 12:45:20AM -0700, Tom McCabe wrote: > Do you have any actual evidence for this? History has shown that numbers made up on the spot with no experimental verification whatsoever don't work well. You need 10^17 bits and 10^23 ops to more or less accurately represent and t
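One back-of-envelope route to numbers of that size (my reconstruction under stated assumptions, not a derivation given in the thread):

# Assumed inputs; each is contestable, which is arguably Tom's point.
synapses         = 1e15  # synapse count (common estimates run 1e14 to 1e15)
bits_per_synapse = 1e2   # state per synapse: weights, timing, chemistry
update_rate_hz   = 1e4   # update rate for sub-millisecond dynamics
ops_per_update   = 1e4   # ops to integrate one synapse's biochemistry

bits = synapses * bits_per_synapse                 # 1e17 bits
ops  = synapses * update_rate_hz * ops_per_update  # 1e23 ops/sec
print(f"{bits:.0e} bits, {ops:.0e} ops/sec")

Move any line by a factor of ten and the requirement moves by the same factor, which is what the demand for experimental verification is about.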

Re: [singularity] AI concerns

2007-07-01 Thread Stathis Papaioannou
On 01/07/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: > They also need knowledge, which is still largely secret. The number of people who could construct a crude working nuclear weapon is below 1% (possibly, way below 1%), and an advanced fusion weapon in ppm range. What sort of technical information, exactly, is still secret after 50 years?

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 12:47:27AM -0700, Tom McCabe wrote: > Because an AGI is an entirely different kind of thing from evolution. AGI doesn't have to care what If it's being created by an evolutionary context and is competing with likewise, it is precisely evolution, only with a giant fitness

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
Because an AGI is an entirely different kind of thing from evolution. AGI doesn't have to care what evolution is or how it works; there's no constraint on it whatsoever to act like evolution does. Evolution is actually nicer than most AGIs, because evolution is constrained by the need to have one v

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sat, Jun 30, 2007 at 10:11:20PM -0400, Alan Grimes wrote: > > =\ For the last several years, the limiting factor has absolutely not been hardware. > How many years? How much OPS, aggregated network and memory bandwidth? > What is your evidence for your claim?

Re: [singularity] AI concerns

2007-07-01 Thread Tom McCabe
--- Eugen Leitl <[EMAIL PROTECTED]> wrote: > On Sun, Jul 01, 2007 at 10:24:17AM +1000, Stathis Papaioannou wrote: > > Nuclear weapons need a lot of capital and resources to construct, > They also need knowledge, which is still largely secret. Knowledge of *what*? How to build a crude gun to fire one block of cast metal into another block of cast metal?

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 11:35:21AM +1000, Stathis Papaioannou wrote: > Why do you assume that "win at any cost" is the default around which you need to work? Because these are the rules of the game for a few GYrs. Why do you assume that these have changed?

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sat, Jun 30, 2007 at 10:11:20PM -0400, Alan Grimes wrote: > =\ For the last several years, the limiting factor has absolutely not been hardware. How many years? How much OPS, aggregated network and memory bandwidth? What is your evidence for your claim? > How much hardware do you claim you

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 10:37:43AM +1000, Stathis Papaioannou wrote: > But Deep Blue wouldn't try to poison Kasparov in order to win the game. This isn't because it isn't intelligent enough to figure out Yes, it is precisely because the system is not intelligent enough. > that disabling your opponent

Re: [singularity] AI concerns

2007-07-01 Thread Eugen Leitl
On Sun, Jul 01, 2007 at 10:24:17AM +1000, Stathis Papaioannou wrote: > Nuclear weapons need a lot of capital and resources to construct, They also need knowledge, which is still largely secret. The number of people who could construct a crude working nuclear weapon is below 1% (possibly, way below 1%), and an advanced fusion weapon in ppm range.