On 01/25/2011 09:42 PM, Luke Kenneth Casson Leighton wrote:
>  at 28nm it's going to be... irrelevant that the main RISC CPU is
> 74,000 transistors (MIPS 64-bit) because it'll be running at 2ghz, be
> running in a quad-core or even 16-core arrangement and... who gives a
> damn if an x86 gets even 100% more performance at those kinds of
> speeds!  especially when x86 does so by having to still be a thousand
> times more transistors and so uses vastly more power.
> 
>  ... or am i preaching to the converted, here? :)

No, you are not preaching to the converted.  Underestimating the ability
of an established instruction set to adapt and take advantage of
manufacturing improvements is something you'd think people would have
grown out of after 30 years.

I'm saying that network effects mean the system with the most users is
the one everybody wants to write code for, and the system with the most
software is the one everybody wants to use.  Costs are almost entirely a
question of unit volume; it's all start-up cost amortized over a
production run.  When you say "RISC will make chips cheap", you're
making arguments people made 30 years ago, and it simply didn't happen.
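The amortization point is just arithmetic, and a back-of-the-envelope sketch makes the shape of the curve obvious (the dollar figures below are invented for illustration, not real fab numbers):

```python
# Sketch of the unit-volume argument: per-unit cost is the marginal cost
# plus the one-time start-up cost spread over the production run.
# The $10M start-up and $5 marginal figures are hypothetical.

def unit_cost(fixed_startup, marginal, volume):
    """Amortize one-time costs (design, masks, tooling) over a run."""
    return marginal + fixed_startup / volume

for volume in (100_000, 1_000_000, 100_000_000):
    print(volume, round(unit_cost(10_000_000, 5, volume), 2))
# 100000 105.0
# 1000000 15.0
# 100000000 5.1
```

At high volume the start-up term vanishes and the per-unit price converges on the marginal cost, which is why unit volume, not instruction-set elegance, dominates chip pricing.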

You keep talking about Windows.  Who cares about that?  The mainframe
gave way to the minicomputer gave way to the microcomputer, which is
giving way to the smart phone as we speak.  (Sure, the microcomputer got
renamed "Personal Computer" by IBM's marketing department, apparently
foreseeing all the porn on the internet.  The switch to laptops was a
sustaining technology, not a disruptive one, and thus doesn't really
matter in this context.)

The emphasis on "cloud" is a giveaway: each previous technology got
kicked up into the "server" space as it stopped being what people
directly interfaced with to get their computing done.  Carrying around
decks of punched cards became archaic, then sitting at a minicomputer
TTY became archaic; having your own laptop isn't archaic yet, but once
everybody carries a smart phone that can do everything the laptop could,
it's only a matter of time.

A Nexus One has half a gig of RAM, a gigahertz CPU, up to 32 gigs of SD
card, and a couple of different types of internet access, which is
plenty powerful enough to be a self-hosting development environment with
the right software.  It also has a USB port that you can plug a
http://us.toshiba.com/computers/accessories/dynadock or similar into to
give you the full PC UI, except you carry it around in your pocket and
it's available to you all the time.  That's _today_, and the next
versions will be better and cheaper.  Why would you bother owning a 30
pound paperweight with a fan five years from now?  (People are
experimenting with UI stuff a la the iPad, but it's based on scaling
smart phone programs and usage patterns up rather than scaling PC
programs down.)

Linux is a good server OS, so the transition to "cloud" is going fine
for it.  Meanwhile, the real fight is between Apple's ARM variant and
Google's ARM variant to establish the new dominant end-user OS that
people will be using those servers through, and it has nothing to do
with RISC.  (ARM has multiple instruction sets with Thumb2 even before
you get to NEON with all that floating point and vector SIMD stuff, plus
the modern ones are SMP with all the cache coherency and IPC stuff that
implies.)

And yet you talk about 64-bit _MIPS_?  I agree that's irrelevant to the
new emerging standard that's going to get the unit volume to become cheap.

You're acting like you can confidently predict the future when you
clearly don't understand the _present_.  I think your level of certainty
probably contra-indicates accuracy in your predictions, dude.

Rob
_______________________________________________
Celinux-dev mailing list
[email protected]
http://tree.celinuxforum.org/mailman/listinfo/celinux-dev
