john.mck...@healthmarkets.com (McKown, John) writes:
> I don't know the I/O capacity of the newest PC "fibre" I/O, but did
> find a Web site which says 4 GiB/Sec. But I'm relatively sure that
> there are fewer fibre HBAs in most servers than there are FICON
> channels, the nearest z equivalent (I think), on a z. And I also
> wonder if such devices or PC servers have "multipath" capability,
> similar to the z's. I am totally certain (watch somebody prove me
> wrong) that it is impossible to have "shared DASD" on a PC like we are
> used to.

re:
http://www.garlic.com/~lynn/2010h.html#51 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#56 25 reasons why hardware is still hot at IBM
http://www.garlic.com/~lynn/2010h.html#62 25 reasons why hardware is still hot at IBM

way back when ... one of the austin engineers took some fiber technology
that had been knocking around POK for a long time, made it full-duplex,
increased media thruput by about 10%, and used significantly cheaper
drivers. The original eventually shipped from POK as (half-duplex) ESCON
... which had enormously lower aggregate thruput than the RS6000 SLA
(more than just the difference between 200mbits/sec versus 220mbits/sec
media transfer rate).
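
(purely back-of-envelope on why full-duplex matters so much for
aggregate thruput ... the media rates are from above, but the
half-duplex turnaround penalty below is just an assumed illustrative
figure, not a measurement ... a little python sketch:)

# half-duplex vs full-duplex aggregate thruput, back-of-envelope
# media rates from the post; the half-duplex efficiency factor is an
# assumed illustrative number, not a measurement
ESCON_MBIT = 200                 # half-duplex media rate
SLA_MBIT = 220                   # full-duplex media rate
HALF_DUPLEX_EFFICIENCY = 0.6     # assumed loss to direction turnaround/handshaking

escon_aggregate = ESCON_MBIT * HALF_DUPLEX_EFFICIENCY   # one direction at a time
sla_aggregate = SLA_MBIT * 2                            # both directions concurrently

print("ESCON aggregate ~%d mbit/sec" % escon_aggregate)   # ~120
print("SLA aggregate   ~%d mbit/sec" % sla_aggregate)     # 440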

We had been working with NCAR on HYPERChannel NAS/SAN supercomputer
access to the ibm dasd farm ... we then participated in various
standards committees with regard to HIPPI, HIPPI switches and IPI3 disk
arrays ... including what was called "3rd party transfer" in the HIPPI
switch to simulate the NCAR NAS/SAN operation.

Then the austin engineer wanted to enhance SLA to 800mbits and we
convinced him to instead work on the fiber-channel standard (1gbit/sec
full-duplex).

We also worked with the Hursley engineers on 9333 ...  basically
asynchronous, full-duplex, packetized SCSI commands over 80mbit serial
copper ... this evolved into SSA (running at 160mbit serial copper,
full-duplex ... able to simultaneously write and read 160mbit/sec for
320mbit/sec aggregate).

The (even late 80s, early 90s) FCS standards stuff included a basic
64-way non-blocking, full-duplex cascading switch ... i.e. being able to
cascade multiple 64-way switches for larger than 64-way connectivity. A
"port" on the switch could be a processor, a disk controller, or some
other sort of device. Old post mentioning (jan92) FCS, SSA, and large
cluster scaleup (aka 128 processors interconnected to a large disk farm):
http://www.garlic.com/~lynn/95.html#13
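
(for a sense of what cascading buys in pure connectivity terms ... a
minimal sketch assuming a two-level cascade with a single uplink per
edge switch; a real fabric would need more uplinks to stay anywhere
near non-blocking, this is just the port arithmetic:)

# connectivity arithmetic for cascading 64-port switches (two levels)
# assumes one uplink per edge switch -- purely illustrative, not a
# non-blocking fabric design
PORTS = 64
edge_switches = PORTS                 # core switch: one port per edge switch
device_ports_per_edge = PORTS - 1     # each edge switch: 1 uplink, rest for devices

total_device_ports = edge_switches * device_ports_per_edge
print("two-level cascade: up to %d device ports" % total_device_ports)   # 4032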

old email mentioning the cluster scaleup work
http://www.garlic.com/~lynn/lhwemail.html#medusa

other past posts mentioning ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

In that time period ... there was a bunch of stuff on the FCS standards
mailing list about lots of churn and fury from the POK channel
engineers trying to layer the complexity of half-duplex channel
operation on top of the underlying FCS full-duplex, asynchronous
standards activity.

As mentioned in the 95 post and the cluster scaleup email ...  at the
end of Jan ... the cluster scaleup work was transferred (announced for
the numerical intensive market) and we were told we couldn't work on
anything with more than four processors.

Trivial SCSI scaleup at the time (late 80s & early 90s) ... not much
scaleup and not very high thruput ... was being able to have four scsi
adapter cards and four scsi controllers all on the same scsi bus
(i.e. 8 positions).
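
(illustrative only ... the bus rate below is an assumed SCSI-2 style
10 MB/sec figure, not something from the original discussion; the point
is just that the shared bus divides, it doesn't multiply:)

# four adapters and four controllers on one shared scsi bus
# bus rate is an assumed figure for illustration
BUS_MB_SEC = 10.0    # assumed shared-bus transfer rate
ADAPTERS = 4

per_adapter = BUS_MB_SEC / ADAPTERS
print("each adapter sees at most ~%.1f MB/sec" % per_adapter)    # 2.5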

One of the issues for rs6000 in this time-frame was that the workstation
group had been told that they had to use PS2 microchannel adapter cards
(i.e. rs6000 moved to microchannel bus) rather than doing their own. The
problem was that the PS2 microchannel adapter cards had been designed
for the desktop, terminal emulation market ... with very high command
processing overhead and low adapter thruput.

The joke was that if rs6000 had to use all the PS2 microchannel adapter
cards (helping their corporate brethren) ... then the rs6000 would run
as slow as a PS2.  It wasn't just the scsi adapter cards and the
display adapter cards ... but also things like the 16mbit T/R card.

The PS2 microchannel 16mbit T/R card had been designed for the terminal
emulation market, with possibly 300+ PS2s sharing the same 16mbits. It
had lower per-card thruput than the PC/RT ISA 4mbit T/R card (that
austin had designed for the precursor to the rs6000). past posts
mentioning terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation
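
(back-of-envelope for that design point ... the station count is from
above, the per-station number just falls out of the division:)

# 16mbit token-ring shared by a few hundred terminal-emulation PS2s
RING_MBIT = 16.0
STATIONS = 300     # "possibly 300+" from above

per_station_kbit = RING_MBIT * 1000 / STATIONS
print("~%.0f kbit/sec per station" % per_station_kbit)   # ~53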

a couple recent IBM references:

DB2 announces technology that trumps Oracle RAC and Exadata
http://freedb2.com/2009/10/10/for-databases-size-does-matter
IBM pureScale Technology Redefines Transaction Processing Economics. New DB2 Feature Sets the Bar for System Performance on More than 100 IBM Power Systems
http://www-03.ibm.com/press/us/en/pressrelease/28593.wss
IBM responds to Oracle's Exadata with new systems
http://www.computerworld.com/s/article/9174967/IBM_responds_to_Oracle_s_Exadata_with_new_systems

it isn't limited to power ... it also extends to high-end PC servers:

IBM goes elephant with Nehalem-EX iron; Massive memory for racks and blades
http://www.theregister.co.uk/2010/04/01/ibm_xeon_7500_servers/

from above:

With so much of its money and profits coming from big Power and
mainframe servers, you can bet that IBM is not exactly enthusiastic
about the advent of the eight-core "Nehalem-EX" Xeon 7500 processors
from Intel and their ability to link up to eight sockets together in a
single system image. But IBM can't let other server makers own this
space either, so it had to make some tough choices.

... snip ...

Past posts referencing "release no software before its time":
http://www.garlic.com/~lynn/2009p.html#43 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2009p.html#46 From The Annals of Release No Software Before Its Time
http://www.garlic.com/~lynn/2009p.html#49 big iron mainframe vs. x86 servers
http://www.garlic.com/~lynn/2009p.html#54 big iron mainframe vs. x86 servers
http://www.garlic.com/~lynn/2009p.html#55 MasPar compiler and simulator
http://www.garlic.com/~lynn/2009p.html#57 MasPar compiler and simulator
http://www.garlic.com/~lynn/2009p.html#85 Anyone going to Supercomputers '09 in Portland?
http://www.garlic.com/~lynn/2009q.html#19 Mainframe running 1,500 Linux servers?
http://www.garlic.com/~lynn/2009q.html#21 Is Cloud Computing Old Hat?
http://www.garlic.com/~lynn/2009q.html#42 The 50th Anniversary of the Legendary IBM 1401
http://www.garlic.com/~lynn/2009q.html#68 Now is time for banks to replace core system according to Accenture
http://www.garlic.com/~lynn/2009r.html#4 70 Years of ATM Innovation
http://www.garlic.com/~lynn/2009r.html#9 The 50th Anniversary of the Legendary IBM 1401
http://www.garlic.com/~lynn/2009r.html#33 While watching Biography about Bill Gates on CNBC last Night
http://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#4 Handling multicore CPUs; what the competition is thinking
http://www.garlic.com/~lynn/2010g.html#25 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#27 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#28 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#32 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#33 SQL Server replacement
http://www.garlic.com/~lynn/2010g.html#35 Intel Nehalem-EX Aims for the Mainframe
http://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#48 Handling multicore CPUs; what the competition is thinking
http://www.garlic.com/~lynn/2010g.html#77 IBM responds to Oracle's Exadata with new systems
http://www.garlic.com/~lynn/2010h.html#10 Far and near pointers on the 80286 and later
http://www.garlic.com/~lynn/2010h.html#12 OS/400 and z/OS
http://www.garlic.com/~lynn/2010h.html#19 How many mainframes are there?
http://www.garlic.com/~lynn/2010h.html#24 How many mainframes are there?

-- 
42yrs virtualization experience (since Jan68), online at home since Mar1970
