david.dev...@sse.com (David Devine) writes:
> Looking at processor and software costs in isolation doesn't tell the whole 
> story.
>
> Yes, software costs are a big chunk, but doesn't Microsoft charge like a
> Rhino for each Windows licence?
>   
> What would you attach your E5-2600 blade to and using what? fibre or
> ethernet? whose disk systems? tape for backup?
> how resilient is it? how many staff would it take to manage? 
>
> The elephant in the room is reliability.
>  
> Z/series and associated kit is solid and dependable (barring a few
> exceptions) having grown ergonomically over 50 years.
>
> How much down time do you get from windows or Unix farms? 
> Would you risk running your key billing platforms on flaky kit? If you
> can't send your bills out, you can't get your money in.

re:
http://www.garlic.com/~lynn/2013b.html#5 mainframe "selling" points
http://www.garlic.com/~lynn/2013b.html#6 mainframe "selling" points

one of the things recently mentioned is that the large brand vendors
(HP, DELL, IBM, etc) are no longer the major consumers of i86 server
chips ... the large cloud operators (both public and private) are
ordering chips directly and building out mega-datacenters with several
hundred thousand blade configurations (and millions of cores). There is
a lot of similarity between the millions-of-core supercomputers and the
millions-of-core cloud operations. Periodic press reports also say that
the cloud operators are building their own blades at 1/3rd the price of
brand name blades (bringing cost/BIP into the $1 range). The big cloud
operators have also done extensive work on choice of system components
to optimize the price/reliability, price/maintenance,
price/administration, etc issues.

With the enormous drop in processing costs ... the large cloud operators
have also turned their attention to all the other operating costs that
are now starting to dominate ... maintenance, power, cooling,
administration, etc. With hundreds of thousands of blades and millions
of cores ... they have done an enormous amount of work optimizing all
these other costs. The majority of the e5-2600 blades out there in the
large cloud operations (public & private) are running various flavors of
Linux (significantly reducing those costs), and the operators have
processes where a relatively few people are able to run a
mega-datacenter with millions of cores. With the on-demand
characteristic of many cloud operations ... they are behind the push for
almost zero power&cooling while idle, with the ability to be brought up
to full operation nearly instantaneously.

Guess is that any one of the numerous mega-datacenters around the world
has more processing power than the aggregate of all mainframes in the
world today.

For other drift ... real CKD disks haven't been manufactured for decades
... they are all done with simulation using the same disks that are used
by e5-2600 systems. Also, as mentioned upthread ... native FCS has
enormously better throughput than when FICON is layered on top of FCS
... aka peak z196 I/O benchmark with 104 FICON channels at 2.2M IOPS
compared to announcement of a *single* native FCS for e5-2600 capable of
over 1M IOPS.
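The per-channel arithmetic behind that comparison can be sketched out
directly from the figures quoted above (2.2M IOPS across 104 FICON
channels vs. a single native FCS at over 1M IOPS; treating "over 1M"
as a lower bound is my assumption):

```python
# Figures quoted in the post:
#   z196 peak I/O benchmark: 2.2M IOPS over 104 FICON channels
#   single native FCS for e5-2600: over 1M IOPS (used here as a lower bound)
ficon_total_iops = 2_200_000
ficon_channels = 104
native_fcs_iops = 1_000_000

# Average throughput of one FICON channel at the benchmark peak
iops_per_ficon_channel = ficon_total_iops / ficon_channels

# How many FICON channels' worth of work one native FCS link handles
ratio = native_fcs_iops / iops_per_ficon_channel

print(f"per-FICON-channel: ~{iops_per_ficon_channel:,.0f} IOPS")
print(f"one native FCS ~= {ratio:.0f} FICON channels")
```

i.e. each FICON channel averages roughly 21K IOPS at the benchmark
peak, so a single native FCS link does the work of roughly 47 FICON
channels (at minimum, given the "over 1M" figure).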

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
