cfmpub...@ns.sympatico.ca (Clark Morris) writes:
> Are there any benchmarks available comparing a z series against a
> blade configuration doing the same work and comparing the cost per
> benchmark unit?  Given the complexity of instruction sets for both
> Intel and the z series and the different nature of them, I agree with
> others that straight instruction speed comparisons may be meaningless.
> For example how much of the work for an i-o is done by the main
> processor on a blade versus on the z series?  What is the MP effect on
> the blades versus the z series?

re:
http://www.garlic.com/~lynn/2012l.html#28 X86 server
http://www.garlic.com/~lynn/2012l.html#29 X86 server

both refer to TPC benchmarks ... which are RDBMS transaction oriented
with heavy disk i/o (as mentioned, looking at number of
transactions/thruput, cost per transaction/thruput, and power
consumption per transaction/thruput)

Note that the e5-2600 blade is a two-socket chip multiprocessor ... eight
cores (or processors) per chip for a total of 16 processors (simulating
32 processors with hyperthreading) ... benchmarked at 527BIPS (compared
to 50BIPS for an 80-processor z196). e5-4600 blades are out there
(four-socket chip multiprocessor, aggregate 32 processors, simulating 64
processors with hyperthreading) ... haven't seen the dhrystone/bips
benchmarks ... but expecting over 1TIPS for e5-4600. This post mentions
that mainframe sales of $5B last year translate into 180 $28m
max. configured z196 ... which at 50BIPS represents an aggregate of 9TIPS.
http://www.garlic.com/~lynn/2012l.html#20 X86 server
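the arithmetic above can be sketched as a quick back-of-envelope
calculation ... all numbers are the ones quoted in the post (BIPS from
dhrystone-style comparisons, list prices), nothing official:

```python
# back-of-envelope sketch using only the figures quoted above
mainframe_sales = 5_000_000_000   # $5B annual mainframe sales
z196_price = 28_000_000           # $28M max-configured z196
z196_bips = 50                    # BIPS per max-configured z196
e5_2600_bips = 527                # BIPS per two-socket e5-2600 blade

z196_count = mainframe_sales // z196_price        # ~178 (rounded to 180 above)
aggregate_bips = z196_count * z196_bips           # ~8900 BIPS, i.e. ~9TIPS
blade_equivalent = aggregate_bips / e5_2600_bips  # blades for same aggregate

print(z196_count, aggregate_bips, blade_equivalent)
```

i.e. on these numbers, under 20 e5-2600 blades match the aggregate BIPS
of a year's worth of max-configured z196 sales.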

this has some e5-4600 benchmarks
http://www.intel.com/content/www/us/en/benchmarks/server/xeon-e5-4600.html

this has SPEC benchmarks for a 64-chip, 512-processor system (no hyperthreading)
http://www.sgi.com/company_info/newsroom/press_releases/2012/may/intel.html

Another part of the issue is that mainframe CKD disks haven't been
manufactured for decades ... CKD being an extra layer of simulation
(delay and overhead) on top of the same disks all the other platforms
are using.

these recent posts reference the appearance of Harrier (which morphs
into SSA) and fibre-channel (mainframe ficon is built on the fibre
channel standard) in the late 80s and early 90s, which were packetized
serial technologies (other variations on serial i/o technology would
come along later).
http://www.garlic.com/~lynn/2012i.html#47 IBM, Lawrence Livermore aim to meld supercomputing, industries
http://www.garlic.com/~lynn/2012i.html#54 IBM, Lawrence Livermore aim to meld supercomputing, industries
http://www.garlic.com/~lynn/2012i.html#95 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
http://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
http://www.garlic.com/~lynn/2012k.html#69 ESCON
http://www.garlic.com/~lynn/2012k.html#77 ESCON
http://www.garlic.com/~lynn/2012k.html#80 360/20, was 1132 printer history

a growing bottleneck for mainframe was the half-duplex channel
architecture requiring synchronized end-to-end operation on every
channel command and data transfer. the serialized i/o architecture that
started to appear in the late 80s had dedicated outbound and inbound
serial i/o data paths. The equivalent of a channel program was packaged
as data and transmitted down the outbound channel. This could be
followed by data being written (on the same outbound channel, along with
other packetized channel programs), while asynchronously any data being
read flowed on the (dedicated) inbound channel.
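the latency difference can be illustrated with a toy model (my own
sketch with made-up millisecond numbers, not any real channel
implementation): half-duplex pays an end-to-end round trip per command,
packetized full-duplex pays end-to-end latency roughly once per channel
program:

```python
# toy contrast of half-duplex synchronized i/o vs packetized serial i/o

# half-duplex: every channel command is a synchronized end-to-end round trip
def half_duplex_time(n_commands, round_trip_ms):
    return n_commands * round_trip_ms

# packetized: whole channel program goes down the outbound path as data;
# replies/read-data flow back asynchronously on the inbound path, so the
# end-to-end latency is paid roughly once, plus per-command processing
def packetized_time(n_commands, one_way_ms, per_command_ms):
    return 2 * one_way_ms + n_commands * per_command_ms

print(half_duplex_time(10, 2.0))      # 10-command program, 2ms round trips
print(packetized_time(10, 1.0, 0.1))  # same program, packetized
```

the gap widens with distance (one_way_ms), which is part of why the
synchronized half-duplex design became a growing bottleneck.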

Harrier was dual serial copper (out of Hursley from early 90s) ... that
packetized SCSI commands and operated at 80mbits/sec concurrent in both
directions. I mention here that I tried to evolve Harrier to
interoperate with fibre-channel ... old post discussing meeting in
Ellison's conference room early Jan1992
http://www.garlic.com/~lynn/95.html#13

However, instead it evolved into proprietary SSA operating at
160mbits/sec concurrent in both directions. fibre channel wiki
http://en.wikipedia.org/wiki/Fibre_Channel

from above:

When Fibre Channel started to compete for the mass storage market its
primary competitor was IBM's proprietary Serial Storage Architecture
(SSA) interface. Eventually the market chose Fibre Channel over SSA,
depriving IBM of control over the next generation of mid- to high-end
storage technology.

... snip ...

There was lots of discussion on the fibre channel standards mailing
list about pok/mainframe channel engineers trying to do unnatural acts
layering ficon on top of the base fibre channel standard.

The current fibre channel standard web site:
http://www.fibrechannel.org/

fibre channel standards wiki page ... lists earliest standard from 1994
http://en.wikipedia.org/wiki/List_of_Fibre_Channel_standards

I had been working off&on with LLNL on a number of things ... fibre
channel standard work comes from the 1988 time-frame, with LLNL looking
to standardize a serial technology they were using that had a
non-blocking switch for interconnect. This old email mentions doing a
benchmark on 4341 for LLNL ... LLNL was looking at getting 70 machines
based on the benchmark.
http://www.garlic.com/~lynn/2006y.html#email790220

customers looking at buying clusters of 4341 were a threat to POK
mainframes (somewhat the current situation of blade clusters and POK
mainframes) ... clusters of 4341 with greater aggregate processing and
thruput were much cheaper and had a much smaller footprint than
equivalent POK mainframes. folklore is that at one point, the head of
POK managed to have allocation of a critical 4341 manufacturing
component cut in half.
misc. old 43xx posts
http://www.garlic.com/~lynn/lhwemail.html#43xx

A big part of the myth around high mainframe i/o thruput comes from the
big increase in the number of channels needed for 3090. The issue was
that from 3330 to 3380 disks, the transfer rate increased by a factor of
ten ... but the change from the 3830 disk controller to the 3880 disk
controller radically increased the channel busy time for operations
(other than pure data transfer). The problem was that they went from the
3830's fast horizontal microprocessor to the 3880's slow vertical
microprocessor. When the 3090 engineers found out how bad the 3880
channel busy was going to be, they realized they would have to
drastically increase the number of channels ... in order to offset the
drastically increased channel busy and to achieve the desired throughput
(which increased the number of TCMs, which increased 3090 manufacturing
costs; there were jokes that 3090 was going to charge the 3880 group for
the cost increase in 3090 manufacturing).
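the channel-count arithmetic is straightforward queueing: each operation
holds a channel for its busy time, so required channels scale linearly
with per-operation channel busy. a sketch with made-up millisecond
figures (not actual 3830/3880 timings):

```python
import math

# illustrative model: channels needed to sustain a target i/o rate when
# each operation holds a channel for busy_ms_per_op milliseconds
def channels_needed(target_ops_per_sec, busy_ms_per_op):
    # total channel-seconds consumed per second = ops * busy_ms / 1000,
    # rounded up to whole channels
    return math.ceil(target_ops_per_sec * busy_ms_per_op / 1000.0)

# hypothetical: same 2000 ops/sec target, tripled channel busy per op
print(channels_needed(2000, 3))   # fast (3830-like) controller -> 6
print(channels_needed(2000, 9))   # slow (3880-like) controller -> 18
```

i.e. tripling the per-operation channel busy triples the channels (and
the TCMs behind them) needed for the same throughput.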

Note that the technology used in server blades can be significantly more
robust than what might be found in a desktop for a couple hundred
dollars (there is periodic equating of the i/o capability of server
blades with what is typically found on entry desktop machines).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN