Yet another prediction of the Death Of Moore's Law. But also interesting
on a number of other levels, not least expansion of some of the thought
threads that led to _Halting State_.

Udhay

http://www.antipope.org/charlie/blog-static/2009/05/login_2009_keynote_gaming_in_t.html

LOGIN 2009 keynote: gaming in the world of 2030

I've just given one of the keynote speeches at the LOGIN 2009 conference
here in Seattle. Here's more or less what I said ... Imagine you're
sitting among a well-fed audience of MMO developers and gaming startup
managers (no, nobody video'd the talk):

Good morning. I'm Charlie Stross; I write science fiction, and for some
reason people think that this means I can predict the future. If only I
could: the English national lottery had a record roll-over last week,
and if I could predict the future I guess I'd have flown here on my new
bizjet rather than economy on Air France.

So that's just a gentle reminder to take what I'm going to say with a
pinch of salt.

For the past few years I've been trying to write science fiction about
the near future, and in particular about the future of information
technology. I've got a degree in computer science from 1990, which makes
me a bit like an aerospace engineer from the class of '37, but I'm not
going to let that stop me.

The near future is a particularly dangerous time to write about, if
you're an SF writer: if you get it wrong, people will mock you
mercilessly when you get there. Prophecy is a lot easier when you're
dealing with spans of time long enough that you'll be comfortably dead
before people start saying "hey, wait a minute ..."

So: what do we know about the next thirty years?

Quite a lot, as it turns out — at least, in terms of the future of
gaming. Matters like the outcome of next year's Super Bowl, or the
upcoming election in Germany, are opaque: they're highly sensitive to a
slew of inputs that we can't easily quantify. But gaming is highly
dependent on three things: technological progress, social change, and you.

Let's look at the near-future of the building blocks of computing
hardware first.

On a purely technological level, we've got a pretty clear road-map of
the next five years. You know all about road maps; the development cycle
of a new MMO is something like 5 years, and it may spend another half
decade as a cash cow thereafter. The next five years is a nice
comfortable time scale to look at, so I'm going to mostly ignore it.

In the next five years we can expect semiconductor development to
proceed much as it has in the previous five years: there's at least one
more generation of miniaturization to go in chip fabrication, and that's
going to feed our expectations of diminishing power consumption and
increasing performance for a few years. There may well be signs of a
next-generation console war. And so on. This isn't news.

One factor that's going to come into play is the increasing cost of
semiconductor fab lines. As the resolution of a lithography process gets
finer, the cost of setting up a fab line increases — and it's not a
linear relationship. A 22nm line is going to cost a lot more than a 32nm
line, or a 45nm one. It's the dark shadow of Moore's Law: the cost per
transistor on a chip may be falling exponentially, but the fabs that
spit them out are growing pricier by a similar ratio.
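
To put rough numbers on that scissors effect, here's a toy
compound-growth sketch in Python. The rates are illustrative assumptions
(cost per transistor halving every two years, the fab bill doubling
every four, the latter often quoted as Rock's Law), not industry data:

    def compound(value, doubling_years, years):
        """Value after `years` years, doubling every `doubling_years` years."""
        return value * 2 ** (years / doubling_years)

    # Assumed starting points, purely for illustration.
    transistor_cost = 1.0   # arbitrary unit cost per transistor today
    fab_cost = 3e9          # assumed cost of a leading-edge fab today, dollars

    for years in (0, 5, 10, 15):
        t = transistor_cost / compound(1.0, 2.0, years)  # halves every ~2 years
        f = compound(fab_cost, 4.0, years)               # doubles every ~4 years
        print(f"+{years:2d} years: cost/transistor x{t:.3f}, fab cost ${f / 1e9:.0f}bn")

Run it and the squeeze is obvious: chips keep getting cheaper per
transistor, but the ante for staying in the fabrication game keeps
doubling.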

Something like this happened, historically, in the development of the
aerospace industry. Over the past thirty years, we've grown used to
thinking of the civil aerospace industry as a mature and predictable
field, dominated by two huge multinationals and protected by prohibitive
costs of entry. But it wasn't always so.

Back in the nineteen-teens, it cost very little to get in on the game
and start building aircraft; when a timber magnate called Bill went
plane-crazy he and one of his buddies took a furniture shop, bought a
couple of off-the-shelf engines, and built some birds to sell to the US
navy. But today, it takes the company he founded close to a decade and
ten billion dollars to roll out an incremental improvement to an
existing product — to go from the Boeing 747-100 to the 747-400.

It turns out that the power-to-weight ratio of a modern high-bypass
turbofan engine is vastly higher than that of an early four-stroke
piston engine, modern construction materials are an order of magnitude
stronger, and we're just a hell of a lot better at aerodynamics and
design and knowing how to put the components together to make a working
airliner.

However, the civil airliner business hit an odd brick wall in the late
1960s. The barrier was a combination of increasing costs due to
mushrooming complexity, and the fact that aerodynamic drag goes up
nonlinearly once you try to go supersonic. Concorde and the Tupolev Tu-144
— both supersonic airliners — turned out to be dead ends, uneconomical
and too expensive to turn into mass consumer vehicles. And today, our
airliners are actually slower than they were thirty years ago.

In the medium term (by which I mean 5-15 years) we're going to reach the
end of the exponential curve of increasing processing power that Gordon
Moore noticed back in the mid-1960s. Atoms are a few tenths of a
nanometre across; it's hard to see how we can miniaturize our
integrated circuits much below the 10nm scale. And at that point, there's
going to be a big shake-up in the semiconductor business. In particular,
Intel, AMD and the usual players won't be able to compete on the basis
of increasing circuit density any more; just as the megahertz wars ended
around 2005 due to heat dissipation, the megaflop wars will end some
time between 2015 and 2020 due to the limits of miniaturization.

There's still going to be room for progress in other directions. It's
possible to stack circuits vertically by depositing more layers on each
die; but this brings in new challenges — heat dissipation and
interconnection between layers, if nothing worse. There's room for
linear scaling here, but not for the exponential improvements we've come
to expect. Stacking a hundred layered chips atop each other isn't going
to buy us the kind of improvement we got between the 8080 and the Core
i7 — not even close.

This is going to force some interesting economies of scale. Over the
past couple of decades we've seen an initially wide-open playing field
for processors diminish as bit players were squeezed out: we had SPARC
and PA-RISC and IBM's Power architecture and SGI's MIPS and ARM and the
68000 series and, and, and. But today we're nearly down to two
architectures in the consumer space: Intel on the PCs and Macs — which
are basically just a PC with a different user interface, these days —
and ARM on handhelds. Actually, ARM is about 95% of everything, consumer
and embedded both — as long as you remember that the vast majority of
consumer-owned computers are phones or embedded gizmos. The other
architectures hang on in niches in the server and embedded space but get
no love or attention outside them.

I expect to see a similar trend towards convergence of GPUs, too. It's
expensive to develop them, and graphics processors aren't made of sparkly
unicorn turds; it's semiconductors all the way down, constrained by the
same physics as every other component — memory, CPU, whatever. So I expect we'll
see a market in the next decade where we're down to a couple of
processor architectures and a handful of GPU families — and everything
is extremely boring. New components will be either the result of heroic
efforts towards optimization, or built-in obsolescence, or both.

I don't want to predict what we end up with in 2020 in terms of raw
processing power; I'm chicken, and besides, I'm not a semiconductor
designer. I'd be surprised if we didn't get an order of
magnitude more performance out of our CPUs between now and then — maybe
two — and an order of magnitude lower power consumption. But I don't expect
to see the performance improvements of the 1990s or early 2000s ever
again. The steep part of the sigmoid growth curve is already behind us.

Now that I've depressed you, let's look away from the hardware for a minute.

After processor performance (and by extension, memory density), the next
factor we need to look at is bandwidth. Here, the physical limits are
imposed by the electromagnetic spectrum. I don't think we're likely to
get much more than a terabit per second of bandwidth out of any channel,
be it wireless or a fibre-optic cable, because once you get into soft
X-rays your network card becomes indistinguishable from a death ray. But
between fixed points we can bundle lots of fibres, and use ultrawideband
for the last ten or a hundred metres from the access point to the user.
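
For the sceptics, here's the back-of-the-envelope version of that
ceiling as a Python sketch using the Shannon capacity formula. The
working assumptions (about ten percent of the carrier frequency usable
as channel bandwidth, a 20dB signal-to-noise ratio) are mine, purely
for illustration:

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        """Shannon limit: C = B * log2(1 + SNR)."""
        return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

    # Assume roughly 10% of the carrier frequency is usable as bandwidth.
    for label, carrier_hz in [("5 GHz (wifi band)", 5e9),
                              ("300 GHz (sub-millimetre)", 3e11),
                              ("30 THz (far infrared)", 3e13),
                              ("3 PHz (ultraviolet)", 3e15)]:
        capacity = shannon_capacity_bps(0.1 * carrier_hz, 20.0)
        print(f"{label:>25}: ~{capacity / 1e12:.3f} Tb/s ceiling")

Capacity scales roughly linearly with carrier frequency under those
assumptions, so each further order of magnitude of throughput needs a
carrier an order of magnitude higher; that is the march up through the
infrared and ultraviolet towards the death-ray end of the spectrum.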

So: let's consider the consequences of ubiquitous terabit per second
wireless data.

The quiet game-changing process underneath the radar is going to be the
collision between the development of new user interfaces and the
build-out of wireless technologies. Ubiquitous UMTS and follow-on
developments of WCDMA are giving phones download speeds of 7.2Mbps as
standard. WiMAX and the embryonic 4G standards promise 50-100Mbps on
the horizon. Wifi everywhere.

We're still driving up the steep shoulder of the growth curve of mobile
bandwidth; we're nowhere near that terabit-per-second plateau at the
top. Wireless LANs are now ubiquitous, and typical speeds are heading
towards 70Mbps this year and 200Mbps in the next couple of years. On the WWAN
front, the mobile phone operators have already been forced to give up
their walled gardens of proprietary services and start to compete purely
on supply of raw bandwidth: not willingly, but the threat of wifi has
them running scared. Their original vision of making money by selling
access to proprietary content — TV over mobile phone — has failed; plan
B is the ubiquitous 3G dongle or wireless-broadband-enabled laptop.

Telephony itself is turning weird this decade. If your phone is an
always-on data terminal with 100mbps coming into it, why would you want
to make voice calls rather than use Skype or some other VoIP client?
Computers are converging with television, and also with telephones. Or
rather, both TV and phones are shrinking to become niche applications of
computers (and the latter, telephony, is already a core function of the
mobile computers we call mobile phones), and computers in turn are
becoming useful to most of us primarily as networked devices.

The iPhone has garnered a lot of attention. I've got one: how about
you? As futurist, SF writer and design guru Bruce Sterling observed, the
iPhone is a Swiss army knife of gadgets — it's eating other devices
alive. It's eaten my digital camera, phone, MP3 player, personal video
player, web browser, ebook reader, street map, and light saber. But the
iPhone is only the beginning.

Add in picoprojectors, universal location and orientation services, and
you get the prerequisites for an explosion in augmented reality
technologies.

The class of gadgets that the iPhone leads — I want you to imagine the
gadget class that is the PC today, in relation to the original Macintosh
128K back in 1984 — is something we don't really have a name for yet.
Calling it a "smart phone" seems somehow inadequate. For one thing,
we're used to our mobile phones being switched on, or off (at least, in
standby mode). This gadget is never off — it is in constant
communication with the internet. It knows where it is, and it knows
which way up it is (it's orientation sensitive). It can see things you
point it at, and it can show you pictures. (Oh, and it does the
smartphone thing as well, when you want it to.)

Let me give you a handle on this device, the gadget, circa 2020, which
has replaced our mobile phones. It's handheld, but about as powerful as
a fully loaded workstation today. At its heart is a multicore CPU
delivering probably about the same performance as a quad-core Nehalem,
but on under one percent of the power. It'll have several gigabytes of
RAM and somewhere between 256GB and 2TB of Flash SSD storage. It'll be
coupled to a very smart radio chipset: probably a true software-directed
radio stack, where encoding and decoding is basically done in real time
by a very fast digital signal processor, and it can switch radio
protocols entirely in software. It'll be a GPS and digital terrestrial
radio receiver and digital TV receiver as well as doing 802.whatever and
whatever 4G standard emerges as victor in the upcoming war for WWAN
pre-eminence.
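
To show what "switching protocols entirely in software" actually means,
here's a minimal Python sketch (my own toy example, not anyone's real
radio stack) that modulates and demodulates narrowband FM at complex
baseband. A different protocol is just a different pair of functions
fed from the same sample stream:

    import numpy as np

    fs = 48_000  # sample rate in Hz

    def fm_modulate(message, deviation_hz=3_000):
        """Frequency-modulate a message onto a complex baseband carrier."""
        phase = 2 * np.pi * deviation_hz * np.cumsum(message) / fs
        return np.exp(1j * phase)

    def fm_demodulate(iq, deviation_hz=3_000):
        """Recover the message from the phase difference between samples."""
        dphi = np.angle(iq[1:] * np.conj(iq[:-1]))
        return dphi * fs / (2 * np.pi * deviation_hz)

    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t)          # one second of a 440 Hz tone
    recovered = fm_demodulate(fm_modulate(tone))
    print("max demodulation error:", np.max(np.abs(recovered - tone[1:])))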

One of the weaknesses of today's smartphones is that they're poor
input/output devices: tiny screens, useless numeric keypads or chiclet
QWERTY thumb-boards. The 2020 device will be somewhat better; in addition
to the ubiquitous multitouch screen, it'll have a couple of cameras,
accelerometers to tell it which way it's moving, and a picoprojector.

The picoprojector is really cool right now: it's the next solid-state
gizmo that your phone is about to swallow. Everyone from Texas
Instruments to Samsung is working on them. The enabling technologies
are: compact red, blue, and green solid-state lasers, and a
micro-electromechanical mirror system to scan them across a target —
such as a sheet of paper held a foot in front of your phone. Or a
tabletop. Picoprojectors will enable a smartphone to display a
laptop-screen-sized image on any convenient surface.

The other promising display technology is, of course, those hoary old
virtual reality goggles. They've come a long way since 1990;
picoprojectors in the frames, reflecting images into your eyes, and
cameras (also in the frames), along with UWB for hooking the thing up to
the smartphone gizmo, may finally make them a must-have peripheral: the
2020 equivalent of the bluetooth hands-free headset.

Now, an interesting point I'd like to make is that this isn't a mobile
phone any more; this device is more than the sum of its parts. Rather,
it's a platform for augmented reality applications.

Because it's equipped with an always-on high bandwidth connection and
sensors, the device will be able to send real-time video from its
cameras to cloud-hosted servers, along with orientation information and
its GPS location as metadata. The cloud apps can then map its location
into some equivalent information space — maybe a game, maybe a
geographically-tagged database — where it will be convolved with objects
in that information space, and the results dumped back to your screen.

For example: if you point your phone at a shop front tagged with an
equivalent location in the information space, you can squint at it
through the phone's screen and see ... whatever the cyberspace
equivalent of the shop is. If the person you're pointing it at is
another player in a live-action game you're in (that is: if their phone
is logged in at the same time, so the game server knows you're both in
proximity), you'll see their avatar. And so on.
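
To make that pipeline concrete, here's a rough Python sketch of the
frame metadata such a device might stream upstream, plus a naive
server-side check for which tagged objects fall inside the camera's
field of view. The names, fields, and coordinates below are inventions
for illustration, not anyone's actual API:

    import math
    from dataclasses import dataclass

    @dataclass
    class FrameMeta:
        lat: float          # GPS latitude, degrees
        lon: float          # GPS longitude, degrees
        heading_deg: float  # compass bearing the camera is pointing
        timestamp: float

    @dataclass
    class TaggedObject:
        name: str
        lat: float
        lon: float

    def bearing_deg(from_lat, from_lon, to_lat, to_lon):
        """Approximate bearing over a small, locally flat patch of ground."""
        dy = to_lat - from_lat
        dx = (to_lon - from_lon) * math.cos(math.radians(from_lat))
        return math.degrees(math.atan2(dx, dy)) % 360

    def objects_in_view(frame, objects, fov_deg=60):
        """Return the tagged objects inside the camera's field of view."""
        hits = []
        for obj in objects:
            delta = (bearing_deg(frame.lat, frame.lon, obj.lat, obj.lon)
                     - frame.heading_deg + 180) % 360 - 180
            if abs(delta) <= fov_deg / 2:
                hits.append(obj.name)
        return hits

    # Made-up coordinates, roughly central Seattle, for illustration only.
    shops = [TaggedObject("pub", 47.6085, -122.3401),
             TaggedObject("bookshop", 47.6100, -122.3380)]
    frame = FrameMeta(lat=47.6079, lon=-122.3400, heading_deg=0.0, timestamp=0.0)
    print(objects_in_view(frame, shops))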

Using these gizmos, we won't need to spend all our time pounding keys
and clicking mice inside our web browsers. Instead, we're going to end
up with the internet smearing itself all over the world around us,
visible at first in glimpses through enchanted windows, and then
possibly through glasses, or contact lenses, with embedded projection
displays.

There are many non-game applications for phones with better output, of
course. For starters, it'll address all our current personal computing
needs: stick a camera chip next to the picoprojector to do video motion
capture on the user's fingers, and you've got a virtual keyboard for
grappling with those thorny spreadsheet and presentation problems. But
then it'll do new stuff as well. For example, rather than just storing
your shopping list, this gadget will throw the list, and your meatspace
location, at the store's floor map and inventory database and guide you
on a handy path to each item on the list.
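
As a sketch of that last trick: give each item an aisle coordinate from
the store's inventory database (the numbers below are invented) and even
a greedy nearest-item walk, a few lines of Python, does the job. A real
planner would beat greedy, but the principle is the same:

    def plan_route(start, item_locations):
        """Greedy nearest-neighbour ordering over (x, y) aisle coordinates."""
        here, remaining, route = start, dict(item_locations), []
        while remaining:
            nearest = min(remaining,
                          key=lambda item: (remaining[item][0] - here[0]) ** 2
                                         + (remaining[item][1] - here[1]) ** 2)
            here = remaining.pop(nearest)
            route.append(nearest)
        return route

    shopping_list = {"milk": (12, 3), "coffee": (2, 8), "bread": (11, 7)}
    print(plan_route(start=(0, 0), item_locations=shopping_list))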

And then there's the other stuff. Storage is basically so cheap it's
nearly free. Why not record a constant compressed video stream of
everything you look at with those glasses? Tag it by location and
vocalization — do speech-to-text on your conversation — and by proximity
to other people. Let your smartphone remember things and jog your
memory: you'll be able to query it with things like, "who was that
person sitting at the other side of the table from me in the Pike
Brewery last Tuesday evening with the fancy jacket I commented on?" Or
maybe "what did Professor Jones say fifteen minutes into their Data
Structures lecture on Friday while I was asleep?" I don't know about
you, but I could really do with a prosthetic memory like that — and as
our populations age, as more people have to live with dementia, there'll
be huge demand for it. In Japan today, the life expectancy of a girl
baby is 102 years. Which sounds great, until you learn that in Japan
today, 20% of over-85s have Alzheimer's.
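
Here's a minimal sketch, entirely invented for illustration, of what
that prosthetic memory looks like as data: every captured moment gets a
timestamp, a place, a speech-to-text transcript, and a note of who was
nearby, and the queries above become simple filters over the log:

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Moment:
        when: datetime
        place: str
        transcript: str
        nearby: list = field(default_factory=list)

    lifelog = [
        Moment(datetime(2009, 5, 12, 21, 30), "Pike Brewery",
               "nice jacket, where did you get it?", ["unidentified contact"]),
        Moment(datetime(2009, 5, 15, 10, 15), "Data Structures lecture",
               "red-black trees rebalance themselves via rotations",
               ["Professor Jones"]),
    ]

    def recall(log, place=None, keyword=None):
        """Return moments matching an optional place and transcript keyword."""
        return [m for m in log
                if (place is None or place.lower() in m.place.lower())
                and (keyword is None or keyword.lower() in m.transcript.lower())]

    for moment in recall(lifelog, place="pike", keyword="jacket"):
        print(moment.when, moment.place, moment.nearby)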

Bouncing back to the present day, one of the weird side-effects of
dropping GPS into a communications terminal is that traditional paper
maps are rapidly becoming as obsolescent as log tables were in the age
of the pocket calculator. When we have these gizmos and add access to a
geolocation-tagged internet, not only are we going to know where we are
all the time, we're going to know where we want to be (which is subtly
different). And with RFID chips infiltrating everything, we're probably
also going to know where everything we need to find is. No more getting
lost: no more being unable to find things.

There are many other uses for the output devices we'll be using with
these gizmos, too. Consider the spectacles I'm wearing. They're made of
glass, and their design has fundamentally not changed much since the
fifteenth century — they're made of better materials and to much better
specifications, but they're still basically lenses. They refract light,
and their focus is fixed. This is kind of annoying; I'm beginning to
suffer from presbyopia and I need new lenses, but spectacle fashions
this year are just plain boring.

I've already mentioned using picoprojectors to provide a head-up display
via spectacles. I'd like you to imagine a pair of such video glasses —
but with an opaque screen, rather than an overlay. Between the camera on
the outside of each "lens" and the eye behind it, we can perform any
necessary image convolution or distortion needed to correct my visual
problems. We can also give our glasses digital zoom, wider viewing
angles, and low light sensitivity! Not to mention overlaying our
surroundings with a moving map display if we're driving. All great
stuff, except for the little problem of such glasses blocking eye
contact, which means they're not going to catch on in social
environments — except possibly among folks who habitually wear mirrorshades.
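
As for the image processing involved, it's nothing exotic: per-frame
convolution and cropping. Here's a deliberately naive Python/NumPy
sketch (a real pipeline would use a calibrated model of the wearer's
eyes and run on dedicated silicon) that sharpens a frame with a 3x3
kernel and fakes a digital zoom by cropping the centre:

    import numpy as np

    def convolve2d(image, kernel):
        """Naive 'valid' 2D convolution; fine for a demo, too slow for video."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

    sharpen = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=float)

    frame = np.random.rand(480, 640)    # stand-in for one greyscale camera frame
    corrected = convolve2d(frame, sharpen)
    zoomed = frame[120:360, 160:480]    # "digital zoom": crop the centre quarter
    print(corrected.shape, zoomed.shape)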

So let's put this all together, and take a look at where the tech side
is going in the next 25 years.

For starters, once you get more than a decade out (around 2020 or
thereabouts) things turn weird on the hardware front. We can expect to
get another generation of fab lines out of our current technology, but
it's not obvious that we'll see chip fabrication processes push down to
a resolution of less than 20nm. By 2030 it's almost inevitable that
Moore's Law (in its classic formulation) will have hit a brick wall, and
the semiconductor industry will go the way of the civil aerospace industry.

There'll be a lot of redundancies, and consolidation, and
commodification of the product lines. Today we don't buy airliners on
the basis of their ability to fly higher and faster; we buy them because
they're more economical to operate, depreciate less, or fill specialized
niches. Airliners today are slower than they were thirty years ago; but
they're also cheaper, safer, and more efficient.

In the same time frame, our wireless spectrum will max out. Our wireless
receivers are going to have to get smarter to make optimal use of that
bandwidth; it'll be software-directed radio all round, dynamically
switching between protocols depending on whether they need to maximize
transmission path or bit rate in the horribly noisy environment. But
we're going to hit the wireless buffers one way or the other in the same
period we hit the Moore's Law buffers.
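
By "smarter" I mean something like the toy policy below, invented here
as an illustration rather than taken from any real standard: measure the
signal-to-noise ratio, then pick the fastest waveform the link can
sustain, or the most robust one if range matters more than bit rate:

    # (profile name, minimum workable SNR in dB, relative bit rate)
    PROFILES = [
        ("dense QAM, short range", 25, 1.00),
        ("QPSK, medium range",     12, 0.30),
        ("narrowband, long range",  3, 0.02),
    ]

    def pick_profile(snr_db, prefer_range=False):
        """Choose a waveform profile from a measured SNR."""
        usable = [p for p in PROFILES if snr_db >= p[1]]
        if not usable:
            return PROFILES[-1][0]                   # fall back to the most robust
        if prefer_range:
            return usable[-1][0]                     # most robust profile that works
        return max(usable, key=lambda p: p[2])[0]    # fastest profile that works

    print(pick_profile(snr_db=18, prefer_range=False))  # favour bit rate
    print(pick_profile(snr_db=18, prefer_range=True))   # favour range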

There may, of course, be wildcard technologies that will save us.
Quantum computing (if anyone knows how to make it work). Massively
parallel processing (ditto). We may see more efficient operating systems
— Microsoft's Windows 7 seems set to roll back the bloat relative to
Vista, which was designed against a backdrop of the megahertz wars for
the 5GHz desktop processors that turned out not to be viable. On a
similar note, Linux derivatives like Android and Moblin, and that
BSD/Mach hybrid, OS X, are being pared down to do useful work on the
sort of low-end processors we can run off the kind of batteries that
don't require fire extinguishers and safety goggles. If we can work out
how to reduce the operating system overheads by an order of magnitude
without sacrificing their utility, that's going to have interesting
implications.

But ultimately, the microcomputer revolution is doomed. The end is nigh!

By 2030 we're going to be looking at a radically different world: one
with hard limits to available processing power and bandwidth. The hard
limits will be generous — there's room for one or two orders of
magnitude more processing power, and maybe five orders of magnitude more
bandwidth — but they'll be undeniable.

The next thing I'd like to look at is the human factor.

Let's start with the current day. Today, gamers are pretty evenly split
by gender — the days when it was possible to assume that there were many
more males than females are over — and the average age is north of
thirty and rising. I don't know anyone much over fifty who's a serious
gamer; if you didn't have consoles or personal computers in your world
by the time you hit thirty, you probably didn't catch the habit. This is
rather unlike the uptake pattern for film or TV, probably because those
are passive media — the consumer doesn't actually have to do anything
other than stare at a screen. The learning curve of even a console
controller is rather off-putting for folks who've become set in their
ways. I speak from experience: my first console was a Wii, and I don't
use it much. (PCs are more my thing.) At a guess, most gamers were born
after 1950 — the oldest today would have been in their mid-20s in the
mid-seventies, when things like the Atari 2600 roamed the Earth and the
Apple II was the dizzy pinnacle of home electronics — and the median age
demographic were born around 1975 and had an NES.

We talk about the casual/hardcore split, but that's a bit of a chimera.
We've always had hardcore gamers; it's just that before they had
consoles or PCs, they played with large lumps of dead tree. I lost a
good chunk of the 1970s and early 1980s to Dungeons and Dragons, and I'm
not afraid to admit it. You had to be hardcore to play in those days
because you had the steep learning curve associated with memorizing
several hundred pages of rule books. It's a somewhat different kind of
grind from levelling up to 80 in World of Warcraft, but similarly
tedious. These days, the age profile of tabletop RPGers is rising just
like that of computer-assisted gamers — and there are now casual gamers
there, too, using a class of games designed to be playable without
exotic feats of memorization.

So, let's look ahead to 2030.

We can confidently predict that by then, computer games will have been
around for nearly sixty years; anyone under eighty will have grown up
with them. The median age of players may well be the same as the median
age of the general population. And this will bring its own challenges to
game designers. Sixty year olds have different needs and interests from
twitchy-fingered adolescents. For one thing, their eyesight and hand-eye
coordination isn't what it used to be. For another, their socialization
is better, and they're a lot more experienced.

Oh, and they have lots more money.

If I was speccing out a business plan for a new MMO in 2025, I'd want to
make it appeal to these folks — call them codgergamers. They may be
initially attracted by cute intro movies, but jerky camera angles are
going to hurt their aging eyes. Their hand/eye coordination isn't what
it used to be. And like sixty-somethings in the current and other
cohorts they have a low tolerance for being expected to jump through
arbitrary hoops for no reward. When you can feel grandfather time
breathing down your neck, you tend to focus on the important stuff.

But the sixty-something gamers of 2030 are not the same as the
sixty-somethings you know today. They're you, only twenty years older.
By then, you'll have a forty year history of gaming; you won't take
kindly to being patronised, or given in-game tasks calibrated for
today's sixty-somethings. The codgergamers of 2030 will be comfortable
with the narrative flow of games. They're much more likely to be bored
by trite plotting and clichéd dialog than today's gamers. They're going
to need less twitchy user interfaces — ones compatible with aging
reflexes and presbyopic eyes — but better plot, character, and narrative
development. And they're going to be playing on these exotic gizmos
descended from the iPhone and its clones: gadgets that don't so much
provide access to the internet as smear the internet all over the
meatspace world around their owners.

If this sounds like a tall order, and if you're wondering why you might
want to go for the sixty-something hardcore gamer demographic, just
remember: you're aiming to grab the share of the empty-nester
recreational budget that currently goes in the direction of Winnebago
and friends. Once gas regularly starts to hit ten bucks a gallon (which
it did last year where I come from) they'll be looking to do different
things with their retirement — the games industry is perfectly
positioned to clean up.

And then there are the younger generation. Let's take a look at
generation Z:

The folks who are turning 28 in 2030 were born in 2002. 9/11 happened
before they were born. The first President of the United States they
remember is Barack Obama. The space shuttle stopped flying when they
were eight. Mobile phones, wifi, broadband internet, and computers with
gigabytes of memory have been around forever. They have probably never
seen a VHS video recorder or an LP record player (unless they hang out
in museums). Oh, and they're looking forward to seeing the first man on
the moon. (It's deja vu, all over again.)

I'm not going to even dare to guess at their economic conditions. They
might be good, or they might be terrible — insert your worst case
prognostications about global climate change, rising sea levels, peak
oil, and civil disorder here.

Moreover, I don't think I'm sticking my neck too far above the parapet
if I say that by 2030, I think the American market will be something of
a backwater in the world of online gaming. China is already a $4Bn/year
market; but that's as nothing compared to the 2030 picture. The Chinese
government is currently aiming to make an economic transition which, if
successful, will turn that country into a first world nation. Think of
Japan, only with ten times the population. And then there's India, also
experiencing stupefying growth, albeit from a poverty-stricken starting
point. Each of these markets is potentially larger than the United
States, European Union, and Japan, combined.


The world of 2030: what have I missed?

I said earlier that I'm not a very accurate prophet. Our hosts have only
given me an hour to stand up here and drone at you; that limits my scope
somewhat, but let me try and give a whistle-stop tour of what I've
missed out.


    * I am assuming that we are not all going to die of mutant swine
flu, or run out of energy, or collectively agree that computer games are
sinful and must be destroyed. This assumption — call it the "business as
usual" assumption — is a dubious one, but necessary if we're going to
contemplate the possibility of online games still existing in 2030.

    * I have short-sightedly ignored the possibility that we're going to
come up with a true human-equivalent artificial intelligence, or some
other enabling mechanism that constitutes a breakthrough on the software
or content creation side and lets us offload all the hard work. No
HAL-9000s here, in other words: no singularity (beyond which our current
baseline for predictions breaks down). Which means, in the absence of
such an AI, that the most interesting thing in the games of 2030 will
be, as they are today, the other human players.

    * I am assuming that nothing better comes along. This is the most
questionable assumption of all. Here in the world of human beings — call
it monkeyspace — we are all primates who respond well to certain types
of psychological stimulus. We're always dreaming up new ways to push our
in-built reward buttons, and new media to deliver the message.
Television came along within fifty years of cinema and grabbed a large
chunk of that particular field's lunch. Cinema had previously robbed
theatre's pocket. And so on. Today, MMO gaming is the new kid on the
block, growing ferociously and attracting media consumers from older
fields. I can't speculate on what might eat the computer games field's
lunch -- most likely it'll be some new kind of game that we don't have a
name for yet. But one thing's for sure: by 2030, MMOs will be seen as
being as cutting edge as 2D platform games are in 2009.


In fact, I'm making a bunch of really conservative assumptions that are
almost certainly laughable. For all I know, the kids of 2030 won't be
playing with computers any more — as such — rather they'll be playing
with their nanotechnology labs and biotech in a box startups, growing
pocket-sized dragons and triffids and suchlike. Nothing is going to look
quite the way we expect, and in a world where the computing and IT
revolution has run its course, some new and revolutionary technology
sector is probably going to replace it as the focus of public attention.

Nevertheless ...

Welcome to a world where the internet has turned inside-out; instead of
being something you visit inside a box with a coloured screen, it's
draped all over the landscape around you, invisible until you put on a
pair of glasses or pick up your always-on mobile phone. A phone which is
to today's iPhone as a modern laptop is to an original Apple II; a
device which always knows where you are, where your possessions are, and
without which you are — literally — lost and forgetful.

Welcome to a world where everyone is a gamer — casual or hardcore, it
makes little difference — and two entire generational cohorts have been
added to your market: one of them unencumbered by mortgage payments and
the headaches of raising a family.

This is your future; most of you in this audience today will be alive
and working when it gets here. Now is probably a bit early to start
planning your development project for 2025; but these trends are going
to show up in embryonic form well before then.

And if they don't? What do I know? I've got an aerospace engineering
degree from 1937 ....



-- 
((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))
