st...@trainersfriend.com (Steve Comstock) writes:
> Well, all the below is interesting history (as usual), but I'm
> interested in what you believe is true today regarding your
> statement above. ("I've claimed ..." : do you still claim that?
> What, in your opinion, is John Gilmore's stand: true or false?)

re:
http://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off 
Another Light - Univ. of Tennessee

let's say z eliminated all the single-CCW-at-a-time channel program
execution latency/overhead and shipped the complete channel program down
to the controller/device (the justification for the large number of
channels was handling the inefficiency of the controller/device always
having to go back to processor memory) ... even then, FICON still adds
an extra layer & inefficiency on top of the underlying fibre-channel
... and CKD emulation adds an extra emulation layer & inefficiency on
top of industry standard disks.
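
a rough illustration of the latency difference (a sketch with purely
hypothetical numbers, just to show the shape of the problem): per-CCW
execution pays the channel<->controller round trip for every CCW in the
program, while shipping the whole program pays it roughly once.

    # back-of-envelope sketch; ROUND_TRIP_US and CCWS are assumed numbers
    ROUND_TRIP_US = 10.0   # assumed channel<->controller round-trip latency
    CCWS = 8               # assumed CCWs in a chained channel program

    per_ccw_latency = CCWS * ROUND_TRIP_US  # handshake back to memory per CCW
    shipped_latency = ROUND_TRIP_US         # one transfer of the whole program

    print("per-CCW execution: %.0fus protocol latency" % per_ccw_latency)
    print("shipped program  : %.0fus protocol latency" % shipped_latency)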

There are lots of intel & risc benchmarks on how well they do ... but I
haven't been able to find the same benchmark numbers for z operation
... so I don't have a measure of how badly these legacy issues degrade z
operation. i86 & risc need fibre-channel links purely for the aggregate
data transfer requirements ... large additional numbers aren't needed
for the vagaries of legacy channel program operation.

pretty much anytime there is talk about a large number of channels
... and not about aggregate data transfer throughput ... it is related
to mainframe legacy channel program problems. i86 & risc can
theoretically get nearly 100% utilization concurrently on both the
inbound and outbound fibre-channel links:
http://en.wikipedia.org/wiki/Fibre_Channel

with 16GFC, i86/risc should be capable of nearly 3.2Gbyte/sec
full-duplex. Additional links become purely a thruput issue, as opposed
to needing extra paths to handle legacy channel program chatter and
inefficiency. since at least the 3090, the large number of channels has
really been a feature compensating for a "bug": the enormous thruput
penalty paid by legacy mainframe channel operation (compared to shipping
the channel program out to the controller/device for remote,
asynchronous execution).
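
quick arithmetic behind the 3.2Gbyte/sec figure above (using the nominal
16GFC payload rate of roughly 1600 MByte/sec per direction):

    # 16GFC moves ~1600 MByte/sec of payload each way; full-duplex doubles it
    per_direction_mb = 1600
    full_duplex_gb = 2 * per_direction_mb / 1000.0
    print("16GFC full-duplex: ~%.1f GByte/sec" % full_duplex_gb)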

lots of the mainframe/non-mainframe issues repeat 60s-80s stuff. Lots
of the mainframe characteristics from the 60s that were positive
trade-offs have turned negative ... and lots of the non-mainframe stuff
from the 80s has been completely changed. i86/risc have been much more
agile and adaptable to the introduction of new technology and paradigms.
Sometimes comparisons are made between mainframes & desktops ... even
though there are significant throughput differences between desktops and
servers.

there are lots & lots of published benchmark numbers for both desktops
and servers. The problem is that there aren't equivalent numbers for
mainframes ... so differences have to be inferred from known structural
differences.

driving factors in i86 have been the big numeric-intensive cluster
supercomputers, which require super-efficient, high-bandwidth network
throughput between cluster members (as well as processor power); similar
but different, the large cloud mega-datacenters, which require efficient
network bandwidth and efficient disk operation (somewhat more
price-sensitive on processor power, network thruput and disk
efficiency); and the large RDBMS vendors, which require efficient disk
operation.

This FICON Express zHPF page seems to show z starting to come closer to
what is expected of native fibre-channel i86/risc:
http://www-03.ibm.com/systems/z/hardware/connectivity/ficon_performance.html

also, the PCIe interface was originally a native development for i86.

This hs23 (e5-2600) blade abstract ... doesn't quite have the equivalent
kind of fibre channel numbers ... but mentions lots of ibm
"high-performance" references ... including that replacing spinning
disks with SSD can get 100 times more I/O operations per second ...
http://www.redbooks.ibm.com/abstracts/tips0843.html
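
rough arithmetic behind a 100x IOPS figure (hypothetical but typical
numbers): a spinning disk is limited by seek plus rotational delay to a
few hundred random operations/sec, while an SSD has no mechanical delay.

    # hypothetical/typical numbers, just to show where ~100x can come from
    hdd_iops = 200      # assumed 15K RPM spinning disk, random I/O
    ssd_iops = 20000    # assumed enterprise SSD, random I/O
    print("improvement: ~%dx" % (ssd_iops / hdd_iops))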
part of any I/O limitation of the hs23 may be how much can be crammed
into the HS23 blade formfactor (although it allows up to four expansion
blades):
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=ca&&htmlfid=897/ENUS112-044

The z FICON numbers seem to be hardware-function-level results
substituting for the large number of system-level benchmarks that are
readily available for all the other platforms. The controller vendors do
talk about multi-gbyte/sec system thruput and many hundreds of thousands
of IOPS ... apparently expecting that high-performance servers would
obviously be able to handle 100% media thruput data rates.
http://www.garlic.com/~lynn/2012l.html#100 Blades versus z was Re: Turn Off 
Another Light - Univ. of Tennessee

this claims 275,000 physical IOPS for the e5-2600 (with some form of benchmark):
http://kevinclosson.wordpress.com/2012/06/10/simple-slob-init-ora-parameter-file-for-read-iops-testing/

-- 
virtualization experience starting Jan1968, online at home since Mar1970
