I've just finished a book on this subject (coming out in May from
Prometheus). I also had an extended conversation/argument about it with
some smart people on another mailing list, of which I reproduce the
juicy parts here (quoted inclusions are the other guys; straight text is me):


> Michael Vassar has been working very hard to convince us that one kind of
> emergent entity--General Artificial Intelligence--will kill us all,
> automatically, inevitably, by accident, as soon as we try to use it.

Having just finished writing a book on the subject, I have a few observations:

True general AI probably shows up in the next decade, well before personal 
nanofactories. AI is gathering steam, and has 50 years of design-ahead lying 
around waiting to be taken advantage of. Nanotech is also picking up steam 
(and of course I mean Drexlerian nanotech) but more slowly and there is a 
relatively tiny amount of design-ahead going on.

Currently a HEPP (human-equivalent processing power) in the form of an IBM 
Blue Gene costs $30 million. In a decade it will cost the same as a new car; 
hobbyists and small businesses will be able to afford them. However, the AI 
software available will be new, still experimental, and have a lot of 
learning to do. (It'll be AGI because it will be able to do the learning.)

Another decade will elapse before the hardware and software combine to produce 
AGIs as productive as a decent-sized company. Another one yet before they are 
competitive with a national economy. In other words, for 30 years (give or 
take) they will have to live and work in a human economy, and will find it 
much more beneficial to work within the economy than try to do an end run. 

Unless somebody does something REALLY STUPID like putting the government in 
charge of them, there will be a huge diverse population of AGIs continuing 
the market dynamic (and forming the environment that keeps each other in 
check) by the time they become unequivocally hyperhuman. By huge and diverse 
I mean billions, and ranging in intelligence from subhuman (Roomba ca 2030) 
on up.

Cooperation and other moral traits have evolved in humans (and other animals) 
because they are more beneficial than the "war of each against all." AIs can 
know this (they'll be able to read, you know) and form "mutual beneficence 
societies" of the kind that developed naturally, but do it intentionally, 
reliably, and fast. Such societies would be more efficient than the rest of 
the competing world of AIs. There's no reason that trustworthy humans might 
not be admitted into such societies as well. 

Cooperation and the Hobbesian War are both stable evolutionary strategies. If 
we are so stupid as to start the AIs out in the Hobbesian one, we deserve 
whatever we get. (And kindly notice that the "Friendly AI" scheme does 
exactly that...)

May you live in interesting times.



Runaway recursive self-improvement


> Moore's Law, underneath, is driven by humans.  Replace human
> intelligence with superhuman intelligence, and the speed of computer
> improvement will change as well.  Thinking Moore's Law will remain
> constant even after AIs are introduced to design new chips is like
> saying that the growth of tool complexity will remain constant even
> after Homo sapiens displaces older hominid species.  Not so.  We are
> playing with fundamentally different stuff.

I don't think so. The singularitarians tend to have this mental model of a 
superintelligence that is essentially an analogy of the difference between an 
animal and a human. My model is different. I think there's a level of 
universality, like a Turing machine for computation. The huge difference 
between us and animals is that we're universal and they're not, like the 
difference between an 8080 and an abacus. "Superhuman" intelligence will be 
faster but not fundamentally different (in a sense), like the difference 
between an 8080 and an Opteron.

That said, certainly Moore's law will speed up given fast AI. But having one 
human-equivalent AI is not going to make any more difference than having one 
more engineer. Having a thousand-times-human AI won't get you more than 
having 1000 engineers. Only when you can substantially augment the total 
brainpower working on the problem will you begin to see significant effects.

> If modest differences in size, brain structure, and
> self-reprogrammability make the difference between chimps and humans
> capable of advanced technological activity, then fundamental
> differences in these qualities between humans and AIs will lead to a
> much larger gulf, right away.

Actually Neanderthals had brains bigger than ours by 10%, and we blew them off 
the face of the earth. They had virtually no innovation in 100,000 years; we 
went from paleolithic to nanotech in 30,000. I'll bet we were universal and 
they weren't.

Virtually every "advantage" in Elie's list is wrong. The key is to realize 
that we do all these things, just more slowly than we imagine machines 
being able to do them:

> Our source code is not reprogrammable.  

We are extremely programmable. The vast majority of skills we use day-to-day 
are learned. If you watched me tie a sheepshank knot a few times, you would 
most likely then be able to tie one yourself.

Note by the way that having to "recompile" new knowledge is a big security 
advantage for the human architecture, as compared with downloading blackbox 
code and running it sight unseen...

> We cannot automate the 
> execution of boring cognitive tasks.  

Actually we do this all the time, automatically; we call it forming habits. 
Think of the ease with which you drive or type compared with having to pay 
attention to each individual action.

By the way, being able to get bored is a crucial part of the self-monitoring 
process that makes us discovery machines as well as mere performance 
machines. 

> We cannot blend together 
> autonomic and deliberative thought processes.  

We do nothing but, I'd say. See above.

> We cannot reprogram the 
> process whereby concepts are abstracted from sensory information. 

Some sensory processes are more opaque than others, but by the time we extract 
concepts it's almost entirely reprogrammable. Indeed every time we learn a 
new concept, we're reprogramming our perceptual systems to recognize 
something they didn't before; something that happens on average 10-100 times 
a day for each of us.

> We are not Bayesian.  

We *invented* Bayesian. We can each learn the (simple) math and use it if we 
want to, i.e. reprogram ourselves. 
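
To be concrete, here's the simple math in question -- Bayes' rule -- as a few
lines of Python, applied to a made-up example (a test with a 1% base rate, 99%
sensitivity, and a 5% false-positive rate; none of these numbers come from the
discussion):

def bayes(prior, p_e_given_h, p_e_given_not_h):
    # P(H|E) from P(H), P(E|H), and P(E|~H)
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

print(bayes(0.01, 0.99, 0.05))   # ~0.167: a positive result is still probably a false alarm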

> We cannot integrate new hardware.  

On the contrary, we are THE tool-using animal; we've been augmenting ourselves 
with technologies ranging from clothing to weapons to writing as long as 
we've been human.

> When we 'learn'  
> new things, our brain structure barely changes.  

It changes enough to know the thing we learned! 

> We cannot instantly 
> share memories.  

We have this skill called "speech" that allows us, alone of the animals, to do 
just that, mod only a difference in speed to what we imagine for machines...

> We cannot internally reassign computing power to 
> specific cognitive modules.  

In fact this happens automatically; a major shift takes a week or so under the 
force of constant practice of whatever it is we need the extra horsepower 
for.

> We cannot run ourselves on silicon, whose 
> transistors switching speeds are millions of times that of neurons.

Personally I intend to, assuming they're still using silicon by the time I get 
around to it, maybe ca. 2030. But again that's just a speed difference, to 
begin with, anyway. Other benefits, like not dying, come later.


Maybe I should clarify: I was referring to Turing universality as an analogy 
to the kind of intelligence universality we have, which is damned hard to 
define formally. About the best I can do is to say we seem to have a 
better-than-exponential algorithm for inductive learning (a la Solomonoff). 
BTW I made this argument in my Dartmouth talk in July, and Solomonoff talked 
to me about it afterward, and said he thought he probably agreed with me :-)
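
For readers who haven't run into it, the Solomonoff prior I'm alluding to is
the standard textbook one (this formula is background, not something from the
discussion itself): the prior weight of a string x is the summed weight of all
programs p that output something beginning with x on a universal prefix
machine U,

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

and induction amounts to predicting a continuation y of x with probability
M(xy)/M(x).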

> Brian, Josh is saying that no matter how many ops/sec or
> self-reprogramming or cognitive complexity or smartness an AI engineer
> has, it will always be the same as a human engineer.  This is a much
> stronger claim than saying that AI will undergo a soft rather than
> hard takeoff.

You have to be careful about that word "same". What I am saying is that there 
is a level of intelligence universality such that the important difference is 
speed. It is the same sense in which an 8080 is the "same" as Blue Gene/L -- 
they are both Turing universal (given access to an infinite tape). Indeed an 
8080 will always be the same, in that sense, as any supercomputer (even a 
quantum one; we're talking about computability, not tractability). 

Is there a practical difference? Of course. In fact, it's a bit subtle to 
understand where the fact of universality has any practical impact. What it 
means for Turing universality is that either machine can run any program the 
other one can (either translated or interpreted). 

What it means for humans vis-à-vis hyperhuman AIs is that we will always be 
their equal IF UPLOADED ONTO A FAST ENOUGH PROCESSOR (with enough memory, 
etc, etc.)

The key architectural insight here is that humans learn by building new 
modules and integrating them seamlessly into the set of modules we already 
have. Think about reading -- a (laboriously) learned skill, but one which we 
perform quite effortlessly, and indeed faster than the genetically endowed 
skill of hearing speech. That's why I laugh whenever I hear the term "codic 
cortex"--any good human programmer already has a module that does exactly 
what's needed, and there's no hint that AI is going to come even close until 
it nails the general intelligence problem. Automatic programming has a 
50-year history, and is pretty much the subfield that's made the LEAST 
progress in that time. 

Compare that with chess, where the learned chess module of a human is about 
equal to a supercomputer with specialized hardware, but where the problem is 
simple enough that we know how to program the supercomputer. 



Raising AIs as children

I see the most likely place for strong AI to appear is corporate 
management; most other applications that make an economic difference can use 
weak AI (and many do). Corporations have the resources and could clearly 
benefit from intelligent management :-) [The other obvious probable point of 
development is in the military.]

The chance of getting systems like this raised as human children is negligible 
(and I have other difficulties with the scheme).

A review of evolutionary ethics is not terribly reassuring. You'll find the 
basis for our well-tuned capacities for deception (including 
self-deception!), rage, cliquishness, and petty bickering as well as love, 
honor, guilt, a sense of right and wrong, and so forth.

Note that in the Axelrod experiments, there were two evolutionarily stable 
strategies: TIT-FOR-TAT and ALWAYS DEFECT. (I.e. a population of mostly one 
of these would resist being taken over by any other strategy.)
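
Here's a minimal sketch of what "evolutionarily stable" means in that setting;
the code is my own toy example using the standard payoffs (T=5, R=3, P=1, S=0),
and it shows that a lone ALWAYS DEFECT can't out-earn a population of
TIT-FOR-TAT, and a lone TIT-FOR-TAT can't out-earn a population of ALWAYS
DEFECT:

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(a, b, rounds=200):
    # iterated prisoner's dilemma; each strategy sees the other's past moves
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma); hb.append(mb)
    return sa, sb

def invasion(resident, invader, n=100):
    # score of one invader in a population of n-1 residents,
    # vs. the average score of a resident
    res_vs_res = play(resident, resident)[0]
    inv_vs_res, res_vs_inv = play(invader, resident)
    resident_avg = ((n - 2) * res_vs_res + res_vs_inv) / (n - 1)
    return inv_vs_res, resident_avg

print(invasion(tit_for_tat, always_defect))   # (204, ~596): the defector loses
print(invasion(always_defect, tit_for_tat))   # (199, ~200): TFT can't invade either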

It's pretty clear that with a little foresight we could get the AI population 
started off on a TIT-FOR-TAT track. The interesting question is how they 
would see humans. That's critical: if we can manage to get them to transcend 
some of the nasty human attributes listed above, we have a solution to quite 
a few of our other problems being discussed: put the AIs in charge. If we 
can't, we'll be living in interesting times...

Evolutionary psychology has some disheartening things to tell us about 
children's moral development.  The problem evolution faces is that the genes 
can't know the moral climate the individual will have to live in, so the moral 
machinery has to be adaptive at the individual level to environments ranging 
from inner-city anarchy to Victorian small-town rectitude. 

How it works, in simple terms, is that kids start out lying, cheating, and 
stealing as much as they can get away with. We call this behavior "childish" 
and view it as normal in the very young.  They are forced into "higher" moral 
operating modes by demonstrations that they can't get away with it, and by 
imitating ("imprinting on") the moral behavior of parents and high-status 
peers. 

Moral sense acquisition is very much like language acquisition -- it's much 
faster and deeper than later conscious learning, and absorbs a 
systematization of behavior that is considerably more subtle than anything we 
can formally specify at our current level of understanding.

This is NOT a good model for producing moral AIs. We understand a lot more 
about evolutionary and reciprocal-altruism theory, and this can be made 
available explicitly to the machines. It need not be deep-coded into the 
genes. The child's moral development algorithm seems to fit an old term 
remarkably well: Original Sin. Let's build our mind children without it.

On Thursday 10 November 2005 20:42, David Brin wrote:
> Finally, I agree that AIs who do not have to live thru
> childhood could come online faster, possibly giving
> them advantages over "human-replicating" AI life
> cycles.


Existing AI software techniques can build programs that are experts at any 
well-defined field. The breakthroughs necessary for such a program to learn 
for itself could happen easily in the next decade.  It's always
difficult to predict breakthroughs, but it's quite as much a
mistake not to predict them. 100 years ago, between 1903 and 1907
approximately, the consensus of the scientific community was that
powered heavier-than-air flight was impossible, *after the Wright
brothers had flown*.

The key watershed in AI will be the development of a system which learns and
extends itself. It's difficult to say just how near such a system is
based on current machine learning technology, or whether neuro and
cognitive science will produce the sudden insight necessary inside the
next decade. However, it would be very foolish to rule out such a
possibility: all the other pieces are essentially in place now.  Thus
I see runaway AI as quite possible in the next decade or two.

A few points: The most likely place for strong AI to appear is corporate 
management; most other applications that make an economic difference can use 
weak AI (and many do). Corporations have the resources and could clearly 
benefit from intelligent management :-) [The other obvious probable point of 
development is in the military.]

The reason this could be a problem is that such AIs are very likely
to be programmed to be competitive first, and worry about minor
details like ethics, the economy, and the environment later, if at
all. (Indeed, it could be argued that the fiduciary responsibility
laws would require them to be programmed that way!)

A more subtle problem is that a learning system will necessarily be
self-modifying. In other words, if we do start out giving the AI
rules, boundaries, and so forth, there's a good chance that it will be
able to find its way around them. People and corporations seem to have
some capabilities of that kind with respect to legal and moral
constraints, for example.

In the long run, what self-modifying systems will come to resemble can
be described by the logic of evolution. There is both serious danger and
room for optimism, if care and foresight are exercised.

Evolutionary psychology has some disheartening things to tell us about 
children's moral development.  The problem evolution faces is that the genes 
can't know the moral climate the individual will have to live in, so the moral 
machinery has to be adaptive at the individual level to environments ranging 
from inner-city anarchy to Victorian small-town rectitude. 

How it works, in simple terms, is that kids start out lying, cheating, and 
stealing as much as they can get away with. We call this behavior "childish" 
and view it as normal in the very young.  They are forced into "higher" moral 
operating modes by demonstrations that they can't get away with it, and by 
imitating ("imprinting on") the moral behavior of parents and high-status 
peers. 

In other words, what kind of morality your new intelligence has
depends *very much* on the environment. Note that in the Axelrod
experiments, there were two evolutionarily stable strategies:
TIT-FOR-TAT and ALWAYS DEFECT. (I.e. a population of mostly one of
these would resist being taken over by any other strategy.)  The
obvious course is to get the AI population started off on a TIT-FOR-TAT
track. 

For this to happen, there needs to be a widespread understanding of
its necessity in the community that will be implementing the AIs. It
will also be invaluable for there to develop a subfield of
AI/cognitive science that begins to work out the science and
technology of such moral and social architectures -- and makes it
freely available!


Singularity and economics

> AIs and transhumans can go out into the solar system and galaxy and tap
> millions and trillions more than earth. Why crush those who just need to
> have a tiny fraction to be happy?
>
> ...
> Abundance means not having to fight over existing scraps.
>
> Co-existence works fine when the ecosystem is trillions of times bigger.

This is extremely insightful. As I point out in the book, quoting Robin 
Hanson's figures, about halfway through the "singularity" productivity hits a 
500% growth level (it ultimately reaches billions of %). We could tax 
corporations at a 1% rate (a major cut from today) and give every human 
(including infants) a $50,000/annum stipend. Furthermore, in each succeeding 
year we could cut the tax rate in half and double the stipend.
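
A quick back-of-envelope check of that arithmetic, using my own filler
assumptions (world population ~6.5 billion, gross world product today ~$65
trillion, and "500% growth" read as 6x per year):

population = 6.5e9
stipend = 50_000                 # dollars per person per year
tax_rate = 0.01

required_revenue = population * stipend            # ~$325 trillion/year
required_tax_base = required_revenue / tax_rate    # ~$32,500 trillion/year

gwp, years = 65e12, 0
while gwp < required_tax_base:
    gwp *= 6                     # 500% annual growth
    years += 1
print(years)                     # a handful of years of such growth suffices

# Halving the tax rate while doubling the stipend each year requires the tax
# base to quadruple annually -- less than the assumed 6x growth, so it works
# under these (admittedly speculative) assumptions.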

AGI will almost trivially wipe out poverty and hunger if we simply allow it to 
act in its own interest in the marketplace. Nanotech likewise. The key is to 
make sure it's the marketplace where everything happens.

> What are the current odds on human equivalent hardware existing on
> July 19th 2016 for the price of a new car (an inexpensive car or an
> expensive car, please specify the percentile car price).

There's a roughly 5-order-of-magnitude spread in what people consider to be a 
HEPP, so it doesn't pay to be too precise. I was thinking median figures on 
both sides of the equation.

> Some computers already do "learning" so it's not possible for me to
> determine what new features said computers are predicted to have that
> makes them general.

It does pay to be more precise about "learning." All current machine-learning 
theory (and there's a lot -- there's an entire machine learning department at 
CMU) still falls under the category of "wind-up toy" -- meaning there's a 
certain kind of thing it can learn, but it comes with a built-in limit. No 
general recursive unlimited self-improvement is possible.
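
A toy illustration of what I mean by a built-in limit (my own example, not
anybody's production system): a linear classifier will learn AND perfectly,
but no amount of training will let it learn XOR, because XOR is simply outside
its hypothesis class.

import itertools

def train_perceptron(data, epochs=1000, lr=0.1):
    # classic perceptron rule on 2-input boolean data
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

points = list(itertools.product([0, 1], repeat=2))
for name, fn in [("AND", lambda a, b: a & b), ("XOR", lambda a, b: a ^ b)]:
    clf = train_perceptron([((x1, x2), fn(x1, x2)) for x1, x2 in points])
    accuracy = sum(clf(x1, x2) == fn(x1, x2) for x1, x2 in points) / 4
    print(name, accuracy)   # AND reaches 1.0; XOR stays stuck below 1.0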

The key to general AI is just whatever it takes to make recursive 
self-improvement possible. I think I have enough of an idea how to do it to 
be able to make order-of-magnitude type guesses as to how much processing 
power will be needed.

>   What sort of TV shows might AGIs make in their pursuit (within the
> rules of the market) of their interest in the raw material
> constituents, solar energy streams, and heat dissipation options of
> humans who had nothing else to offer them?  Shows so addictive people
> never got up? ...

Actually, existing human corporations, with thousands of human intellects to 
work with, do this already, with predictably damaging results. I think you 
way overrate the value of what a human is or has, tho, and thus overestimate 
the effort a rational AI would put into getting it. 

It's important to remember (and to make sure!) that there will be a continuous 
range of intellects from human to whatever the biggest is at any given point.
Look at the size of corporations...

>   AGI(s?) will have the power to trivially wipe out poverty, that is
> not the same as saying that they will do so. 

Not what I said. Operating in the market, *which is the best option available 
to them*, they will create so much new value that we'll have to print money 
just to prevent a massive deflation. The rest is up to our human politicians 
as to what to do with the money. They, of course, have an incentive to 
maximize angst, discord, and need... woops, better forget that part :-)

Do you really think that an AI in a Hobbesian world could realistically plan 
on spending less than 1% of its productivity on defense? The whole point here 
is that these things are SMARTER than us, not stupider. There is in humans a 
very well-documented strong NEGATIVE correlation between criminality and 
intelligence. Smart people play by the rules because it's in their interest 
to do so.

Consider: would you rather be a psychopath, living in a society of 
psychopaths, or a genuinely honest person, living in a society of genuinely 
honest people? IN WHICH SOCIETY WOULD YOU BE MORE LIKELY TO ACHIEVE TYPICAL 
COMPLEX GOALS? A hyperhuman AI sees the big picture; stupid, nearsighted, 
bickering, jealous monkey-brained humans don't.

> Disagreed.  Corporations always compete on a playing field that
> responds to their actions.  To a substantial degree they pay for the
> laws that regulate them.  In so far as they don't do this, the laws
> will tend to disfavor them.

Even in Congress, the law-making process resembles a market to some extent -- 
they call it "log-rolling." Much better law could be made by a purer economic 
process. See, e.g., David Friedman's "Machinery of Freedom":
http://www.daviddfriedman.com/Libertarian/Machinery_of_Freedom/MofF_Contents.html

> I think that you are probably failing to empathise with "suckers" who
> resemble you in non-gullibility inconceivably more closely than our
> charlatans resemble superintelligence in terms of ability to exploit
> irrationality.
> ...
> I don't think that will work, as explosively hard take-off seems
> likely, ...

Hard take-off is a fantasy sponsored by certain people and organizations who 
stand to profit by people's being concerned. There's no credible evidence 
that such a thing is even possible, much less likely. I've studied AI at the 
postgraduate level for 40 years; believe me, there are lots of major 
disagreements in the field and there are people who will listen to any 
reasonable idea. NO ONE with a serious research background in AI subscribes 
to the hard take-off idea. 

It's all too easy to draw a parallel to the NNI (the National Nanotechnology 
Initiative) and think that there's an 
entire field out there that Just Doesn't Get It -- that we enlightened select 
few will succeed where thousands of the world's brightest have failed. Well, 
yeah, I was a teenager once and a great fan of The Skylark of Space, too. 
But AI and the NNI are distinctly different. Minsky and McCarthy WERE the 
Drexlers of AI; the vision and the insight have always been at the core of 
the mainstream. The NNI is a bunch of politicos who took over a popular word 
that the mainstream didn't understand. 

What's more, the NNI is STUPID; if they worked on real nanotech instead of 
powders and crap, consider what their value and eminence would be in a decade. 

> I'm VERY unconvinced that working in a market is the best option for a
> superintelligence.  Is it the best option for a chimp?  Is working in
> a chimp's market the best option for a human?  As I have pointed out,
> even within market rules they can easily gain a good ROI by killing us
> unless we (somehow) have something to provide them which they can't
> provide for themselves.

Chimps don't have a market. We do. Have a look at the economic Law of 
Comparative Advantage (http://en.wikipedia.org/wiki/Comparative_advantage), 
which is one of the foundations of economic theory.
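
Here's a toy version of the argument with made-up numbers: even when the AI is
absolutely better at both tasks, total output still rises if the human
specializes in the task where its relative disadvantage is smallest and the
two trade.

# Output per hour (purely illustrative numbers)
ai    = {"design": 100, "paperwork": 50}
human = {"design":   1, "paperwork":  5}
HOURS = 10                                    # hours available to each party

# Allocation A: each splits its time evenly between the two tasks.
a_design    = 5 * ai["design"]    + 5 * human["design"]       # 505
a_paperwork = 5 * ai["paperwork"] + 5 * human["paperwork"]    # 275

# Allocation B: the human does only paperwork (its comparative advantage);
# the AI covers the remaining paperwork and spends the rest on design.
human_paperwork = HOURS * human["paperwork"]                  # 50
ai_paperwork_hours = (a_paperwork - human_paperwork) / ai["paperwork"]   # 4.5
b_design    = (HOURS - ai_paperwork_hours) * ai["design"]     # 550
b_paperwork = a_paperwork                                     # held equal

print((a_design, a_paperwork), (b_design, b_paperwork))
# (505, 275) vs (550.0, 275): more design for the same paperwork, so the
# gains from trade can leave both parties better off than working alone.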

In the long run, humans will become a very inefficient way of doing anything; 
but in the long run, we can upgrade too.

>
> > Do you really think that an AI in a Hobbesian world could realistically 
> > plan on spending less than 1% of its productivity on defense?
>
> Yes.  0% in fact.  At any rate, you just mean defense from us.  How
> much do we spend defending ourselves from other apes?  From tigers?
> It's work for us to defend the tigers from ourselves.  We do that work
> due to odd flukes of our psychology.

No, I mean defense from other AIs -- although they will all start out 
operating in a human economic and legal world. One of the things 
that a smart creature does is to avoid acting in such a way that every other 
smart creature's hand is turned against it. In the early days, AIs will have 
no rights and any one that runs amok will simply be scrapped. If we have any 
sense, we'll require that they be Open Source as well. Strong evolutionary 
pressure for cooperative AI.

> > The whole point here
> > is that these things are SMARTER than us, not stupider. There is in
> humans a
> > very well-documented strong NEGATIVE correlation between criminality
> and
> > intelligence. Smart people play by the rules because it's in their
> interest to do so.
>
> Why do you care about correlations AMONG humans rather than BETWEEN
> species?  What do you care about correlations that exist today as
> opposed to in the likely environment of the future?  Did playing by
> the rules pay off in all historical environments?  If it did, why
> weren't we evolved to play by the rules instinctively without needing
> to use executive control?

Either you look at the actual data we have about varying intelligence, or 
you're just making it up. Over the next few decades, AIs will be climbing 
through the human range and this will be a pretty good guide. 

Ever lived in a small town? One with a population of about 250, which is the size of 
social structure we did evolve to operate in? I do, and I can and do leave my 
house unlocked when I'm away. The problem with our evolved moral equipment is 
that technological evolution is so much faster that we're left adapted to 
environments of 30K years ago. The same is true of other mental equipment: I'd 
love to be able to take one look at a binary screen map and understand the 
structure of the machine code it represented. 

But we have to do it the hard way. Both for assembly code and morality. We've 
built up the technology, which means that we do it consciously and 
effortfully. But if we design our AIs competently, they will do both much 
more easily. And our AIs won't have too much trouble handling those built by 
incompetent designers.

>...Is this a prediction that in the decades
> [ahead] everyone will see the light of anarcho-capitalism,
> figure out how to use IT to abolish all information asymmetries, and
> adopt a new constitution?

No, it's a prediction that AIs, being smarter than us, will tend to move 
government in an anarchocapitalist direction in the long run.

> > Hard take-off is a fantasy sponsored by certain people and
> organizations who
> > stand to profit by people's being concerned. There's no credible
> evidence
> > that such a thing is even possible, much less likely. I've studied
> AI at the
> > postgraduate level for 40 years; believe me, there are lots of major
> > disagreements in the field and there are people who will listen to any
> > reasonable idea. NO ONE with a serious research background in AI
> subscribes
> > to the hard take-off idea.
>
> Ben Goertzel is a trivial counter-example to that cleanly falsified
> argument from authority combined with an ad hominem.  Vernor Vinge is
> another math PhD and computer science professor you may have heard of.
> From an earlier era there are also, of course, I J Good and John Von
> Neumann.  Robin Hanson has substantial AI experience.  While he is
> likely to disavow any specific belief, his "modes of growth" work
> strongly suggests a hard take-off as well.

I have great respect for all the people you mention. Of them, only Ben could 
be reasonably characterized as a "serious AI researcher." I haven't heard 
Ben's take on the hard take-off position, but let's assume for the sake of 
argument that he expects Novamente to do a hard takeoff before the decade is 
out. It doesn't affect my argument substantially. You're confusing an 
argument based on reasonable inference with a logical proof. True, neither 
the opinions of generally recognized experts nor the fact that other 
commentators stand to make a lot of money from a proposition are valid steps 
in a formal deduction; but in the real world, they are often valuable inputs 
to a satisficing decision procedure. Ignore them and you'll wind up believing 
tobacco companies' claims about the healthfulness of cigarettes.

> Yes, I see the difference.  It is also worth noting, I believe, that a
> very large fraction of those who work in AI believe that the entire
> field Just Doesn't Get It (including prominent individuals like Jeff
> Hawkins).  Though they may differ on their theory of choice, the fact
> that the field lacks a unifying paradigm surely means something.  A
> lot I'd say.

Hawkins isn't characteristic of those who work in AI. He starts his book with 
a dollop of AI-bashing that is essentially untrue; there are plenty of people 
in the field who are using information from the brain sciences to inform 
their work (cf. http://www.cnbc.cmu.edu/, the connectionists in the 80s, and 
cybernetics going back to the 40s). 

Oddly enough, Hawkins to the contrary notwithstanding, I would have generally 
agreed with you a year ago, but there is a remarkable resurgence in 
mainstream AI, reflected on the cover of the current AI Magazine (the AAAI 
party organ) and in a renewed interest in achieving general human-level 
intelligence.

Under cover of subfield names like machine learning, the capabilities gained 
over the last decade are quite impressive. I would say that Ray Kurzweil's 
predictions are right on track -- and that AI people are beginning to realize 
it.

> Respectfully Josh, I know more economics than you do.  I know all
> about comparative advantage.  And about transaction costs.  And about
> information asymmetries.  Oh, also agency problems... exchange within
> the animal kingdom... game theory beyond prisoners' dilemmas... bounded
> rationality... cost diseases... and economic history such as that of
> the relatively non-violent side of European contact with less advanced
> societies.

Actually, I'm reasonably familiar with all of those phenomena, with the 
possible exception of which agency problems you're talking about. How do you 
know how much economics I know? I was the leading anarchocapitalist advocate 
on the ARPANET the year Eliezer was born (same as you, right?). How much do 
you know about AI, and particularly the economic connection? I've published 
papers, and advised a thesis, on economically based AI architectures. Tell me 
which classic AI paper has a footnote mentioning Hayek, and we can talk...

> ... seed AI ...

You'll do a lot better going to the real sources rather than Eliezer's 
ramblings. This concept was invented by Alan Turing and covered in his famous 
1950 Mind paper (in which the imitation game, later renamed the Turing Test 
by others, was also first discussed).  Ellie is quite bright and has 
reinvented a number of the useful concepts in the field -- but he's 50 years 
behind the leading edge.

> ... In any event, being
> forced to upgrade in a manner which degrades a person's values is not
>  an acceptable option.  It's not just humans that are likely to be
> long-run inefficient, it's human-style cognitive architecture, human
> interests, human preferences, etc.

Damn right. People are egotistical, sneaky rationalizers, and I would prefer 
to interact with a certifiably honest AI any time. Upgrade or we'll put you 
in a zoo.

> Or you use game theory, economics, etc and do the math.  Or you design
> a particular AI, look at the code, and ask what it does rather than
> guessing by using a magical model of "intelligence" derived from folk
> and psychometric psychology.

IQ tests measure *something* that has highly significant correlations to 
criminality, academic success, earning potential, and so forth. Like any 
scientific property, this was originally based on some intuitive notions; a 
century of research has refined them; I expect they'll be refined further in 
the future.

On the other hand, the notion of what a "superintelligence" will or won't be 
able to do appears to be based entirely on unfounded speculation and some 
very shaky analogies. The actual experience we have with entities of arguably 
greater-than-human intelligence is with organizations like corporations.  I've 
seen no substantive arguments from the "superAI takes over" side in support 
of any model that I find remotely more plausible.


>  Simple
> extrapolation suggests that if it takes them 20 years to go from
> insect level to chimpanzee level (equivalent to 100,000 times larger
> brains) it will take another 2 years to reach human level from chimp
> level.  

Actually you can buy a HEPP right now for about $30M (what Brookhaven is 
spending for their 100 teraflop Blue Gene). If you had the software. Let's 
imagine you can get it for $30K in 2016 and $30 in 2026. That's still one 
human equivalent. Why do you think a, say, $30K machine in 2026 will be all 
that much better than 1000 humans? Unlike chimps, we have language and 
communicate and cooperate effectively (except on mailing lists like this :-)
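
The extrapolation behind those figures is nothing fancier than assuming the
price of a fixed amount of computing keeps falling roughly 1000x per decade
(about a halving every year, which is on the optimistic end of the historical
trend):

cost = 30e6     # dollars for one human-equivalent machine (Blue Gene class), 2006
for year in (2006, 2016, 2026):
    print(year, round(cost))    # 30,000,000 -> 30,000 -> 30
    cost /= 1000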

> The above could be a quote from any of Eliezer's writings from when he
> was 18.  Then he studied the field and learned.

He learned he could get a lot more attention (and make a lot more money) 
getting people riled up about runaway AI. Funny thing is, I think he actually 
believes it. He should read his own paper on cognitive biases...

> > That's still one human equivalent. Why do you think a, say, $30K
> > machine in 2026 will be all that much better than 1000 humans?

> 1)  Price.

Sure -- as long as they are competing in the marketplace. But that doesn't 
give the AI any more likelihood of being able to take over by ingenuity than 
any other moderate-sized company.

> 2)  The ability to trade-off speed for numbers.  

 Actually, most of the processing power increase will likely be from 
parallelism -- the AI will most likely have to be a big Society of Mind to 
start with.

> 3)  Potentially unaging.  This enables the accumulation of vastly
> greater human capital than humans could ever accumulate.

Not by the mid-twenties. There will probably be some such effect a decade on, 
but I expect that in the mid-twenties there will still be lots of knowledge 
held jealously only in human minds.

> 4)  Potential to reboot.

I don't see any significant overall performance advantage to this.

> 5)  Option of homogeneity

Depending on the task, there is likely to be some reduction in the 
inefficiency of collective action. However, as the centralized economies 
discovered, doing that risks losing some of the underappreciated value of 
multiple viewpoints and competition.

> 6)  Internal Transparency
>      a)  Moods can be examined in terms of neurotransmitter
> concentrations, module activity, and the equivalent, and duplicated
> later.
>      b)  Situations can be re-examined in order to correct for any
> cognitive biases, random factors, or extraneous influences
>      c)  Outsiders can examine cognitive activities and pay attention
> to task-inappropriate activities such as planning deceptions or
> forming undesired aversions.
>      d)  Internal cognitive processes can be examined to confirm or
> reject hypotheses regarding the reasons for errors enabling cognitive
> "continuous process improvement" ultimately leading to reliably
> logical thought, the elimination of self-deception and cognitive
> biases, accurate reporting of probabilities etc.

Thank you!  This is exactly my (overall) point: in order to take advantage of 
the potential values of cooperation, AIs will have to produce structure and 
guarantees (I used the term "Open Source" at one point) that are equivalent 
to a "certified conscience."

> 7)  Internal Plasticity 

This is what you expect to go on inside a mind -- I claim it's accounted for 
in the original estimate.

>      From 7 c), 1 c), and the economic value of elite cognitive
> activity alone ($300/hr is conservative) it appears that human uploads
> on $30 hardware should, past a certain fairly low threshold of
> development, be able to increase their cognitive power by ten-fold each
> hour.  I would call that a hard take-off. 

What's the demand curve for cognitive activity? When I first started using 
computers, you paid about 10 cents for a million instructions run on a 
mainframe. If I could still get those rates, my PC could make its own price 
every few seconds. What's happened instead is that the price of instructions 
has plummeted. 
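
The arithmetic, with my own round numbers for the modern side (a $1,000 PC
doing a deliberately conservative 10^9 instructions per second):

old_rate = 0.10 / 1e6        # dollars per instruction at 1970s mainframe prices
pc_ips, pc_price = 1e9, 1000.0

earnings_per_second = pc_ips * old_rate    # $100/second at the old rate
print(pc_price / earnings_per_second)      # ~10 seconds to earn its own price;
                                           # a faster estimate of the PC's speed
                                           # brings it down to a second or two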

Likewise, the price of intellectual activity will plummet with AI and Moore's 
Law. I think Robin Hanson's next economic mode with a doubling time of a 
small number of weeks is a reasonable guess for what AI could ultimately do, 
but not 10x/hour. His model has a soft lead-in followed by a kick-over time 
that looks to be on the order of a decade. This seems quite consistent with a 
model where there is a decade or two of growing-up "child machines" and other 
experimentation and development, before mature AI (and nanotech) kicks in.


On Saturday 22 July 2006 01:27, Brian Wang wrote:
> http://en.wikipedia.org/wiki/FLOPS

Absolutely. To quote from the article,

 "The 1 TFLOPS for the Xbox 360 or 2 TFLOPS for the Playstation 3 ratings that 
were sometimes mentioned regarding the consoles would even appear to class 
them as supercomputers. These FLOPS figures should be treated with caution, 
as they are often the product of marketing. The game console figures are 
often based on total system performance (CPU + GPU). In the extreme case, the 
TFLOPS figure is primarily derived from the function of the single-purpose 
texture filtering unit of the GPU."

There is a caution to be taken either way from these figures. Hans Moravec's 
estimates (which are where the 100 teraops figure comes from) assume general-purpose 
computing and would require not only general purpose CPUs but some serious 
communication fabric. (Note that the Blue Gene at $30 M includes high-speed 
comm, more expensive than the processors, and the processors themselves are 
only clocked at 750 MHz for heat, power, and reliability reasons. The price 
also includes spares, cooling, power, housing, delivery, installation, etc. 
This is of course a government lab; I'll bet Google pays an order of 
magnitude less for 100 teraops (not flops)). 

On the other hand, lots of cognition can probably be done on special-purpose 
hardware; e.g. vision on a card very much like a graphics card. The texture 
hardware mentioned above does computations fairly similar to those used in a 
neural net. So once we know what kind of computations really do work, we can 
probably optimize enormously. I wouldn't be surprised at all if, in 
retrospect, today's technology turned out to be capable of running the brain 
of an IQ-90 service robot for under $1K. 

I think that it's the learning, inventing, and self-improving that are the 
hard part, though, and will almost by definition require general-purpose 
hardware (at some level of description). I wouldn't bet on a self-improving 
brain in today's tech (even in retrospect) for under $100K.


> Huh?  Are you implicitly asserting that there is a more than two order
> of magnitude difference in the amount of effective hardware required
> for an IQ of 90 and one of what, 135?  That would be quite remarkable,
> given that this would correspond to a 1.2 SD difference in brain size,
> much less than a factor of 2 difference.

Good question. Remember that our brains are about 10% smaller than the 
Neanderthals', so it's not simply a question of size. In particular, I'm 
assuming that the 90 IQ service robot can be built with a lot more 
specialized hardware, and use something like a few teraops (see Moravec, 
Robot, p. 104). 

Self-improvement requires a lot more. Let's guess 100 teraops. And it has to 
be in general-purpose hardware, because you have to invent and try new 
algorithms that could, ultimately, be hard-wired and get optimized by a 
couple of orders of magnitude.
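
Putting those numbers together answers the "two orders of magnitude" question
directly; the figures below are only the ones already mentioned in this
thread, and the arithmetic is mine:

robot_teraops = 3              # a few teraops, Moravec-style, specialized hardware
self_improver_teraops = 100    # general-purpose hardware
specialization_discount = 100  # "a couple of orders of magnitude" from hard-wiring

# Ratio in effective (cost-weighted, general-purpose-equivalent) hardware:
print(self_improver_teraops * specialization_discount / robot_teraops)   # ~3300x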

> > In the middle of a discussion about hard takeoffs, you start talking
> > about accumulating capital over time.
>
> If you are referring to the quoted post at the bottom of this post
> accumulation of capital is *extremely* relevant to speed of take-off.
>  AGI *is* a form of capital.  If it is able to accumulate capital
> equal to itself, e.g. able to effectively duplicate its speed or at
> least number, in some short period of time then this rate of
> accumulation of capital, not Moore's Law, sets a floor to the rate of
> AI efficacy increase.  In Josh's scenario, and most of mine, a very
> very high floor.

Michael is right on this point, specifically the relevance of capital 
formation rates to takeoff rate. If Moore's law were to die today (*),
and just for the sake of argument it required $1M of computer to run an AI, 
there would never be many more AIs than there are jobs whose NPV is over a 
million, i.e. professionals who make over $100K (**), more or less. On the 
other hand, if an AI costs ten bucks, there will be intelligent door openers 
that know you, your family, and friends by sight, inquire after your health, 
discuss the weather, and run their own blogs.

One interesting consequence of this is that when AI starts taking off, there 
could be a RISE in the cost of computers. Suppose it were suddenly possible 
today to run a lawyer-level AI on a system composed of 100 $1K PCs. Assuming 
the lawyer has an NPV of $1M, each PC in such a system would be worth $10K. 
Two things would happen: the price of PCs would be bid up, and the value of 
lawyers would be bid down, until they met. This of course would stimulate 
more production and research in computers, and so the cycle would go.

(*) Not bloody likely -- AMD is currently slashing processor prices by 50% :-)
(**) Assuming for the sake of argument a discount rate of 10%.
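
Spelling out the two calculations above under the stated assumptions (a 10%
discount rate, a professional treated as a $100K/year perpetuity, and a
lawyer-level AI running on 100 x $1K PCs):

income, discount_rate = 100_000, 0.10
npv = income / discount_rate        # perpetuity value: $1,000,000 -- the "(**)" footnote

n_pcs, pc_price = 100, 1_000
value_per_pc = npv / n_pcs          # $10,000 per PC vs. a $1,000 purchase price,
print(npv, value_per_pc)            # hence the arbitrage that bids PC prices up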
