>>>
Remember "Tandem Non Stop?" Tandem's big selling-point was reliability
(fault tolerance) through redundancy. As best I can recall I went to a
presentation in the early 80s, when we had Amdahl 43xxs and IBM's
mainframes were 3080 series. I think we had some dodgy disk drives, but
otherwise reliability was fine.
<<<

When I was working at SIAC we had a bunch of Tandem systems, and one
VP figured that he could replace the minimally configured UNIVAC 1100/80
with a Tandem cluster.  This was back in the early 1980s, BTW, so the
Tandems were 16-bit CPUs that were NOT x86, Motorola or whatnot.  (I
remember looking at the instruction set and coming away confused;  even
the Xerox Sigma-9's instruction set seemed to make more sense.)

So this VP, in his bid to undermine the 1100/80 mainframe, ran a
performance test, using sortation as the metric.  I forget how many
records I was told it covered, but I do recall that it wasn't all that
small a job.

For the Tandem setup he had an otherwise idle cluster of 20 CPUs.  Note that,
at the time, the Tandem systems were all loosely coupled.

For the UNIVAC end he had the sort job dropped into our regular work
stream (and our half of the 1100/80 was under-configured for the things
we were doing), so the system was already busy with a bunch of "demand"
(time-sharing) users.

I recall being told that the loaded-down /80 smoked the Tandems by over
20:1 in performance--  because, however much the Tandems could
subdivide the sorting (assuming they did), the merge phase *requires*
single-thread performance.

Now I realize that "how" the sort application was coded probably had a
fair amount of impact on the disparity of results.
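
To make that single-thread bottleneck concrete, here's a minimal sketch
in C--  purely my own illustration, NOT the actual SIAC benchmark code,
and the thread count and record count are made up.  Phase 1 subdivides
nicely across processors, but phase 2 is one thread grinding through
one long stream of comparisons no matter how many CPUs produced the
sorted runs.

  /* Minimal sketch: parallel "sort" phase, single-threaded merge phase.
   * Illustration only -- not the SIAC benchmark; NCPUS/NREC are made up.
   * Build with:  cc -O2 sortsketch.c -lpthread                          */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define NCPUS 4               /* stand-in for the 20 Tandem CPUs */
  #define NREC  100000

  struct slice { int *base; size_t n; };

  static int cmp_int(const void *a, const void *b) {
      int x = *(const int *)a, y = *(const int *)b;
      return (x > y) - (x < y);
  }

  /* Phase 1: this part parallelizes across loosely coupled CPUs. */
  static void *sort_slice(void *arg) {
      struct slice *s = arg;
      qsort(s->base, s->n, sizeof *s->base, cmp_int);
      return NULL;
  }

  int main(void) {
      int *rec = malloc(NREC * sizeof *rec);
      int *out = malloc(NREC * sizeof *out);
      for (size_t i = 0; i < NREC; i++) rec[i] = rand();

      pthread_t tid[NCPUS];
      struct slice sl[NCPUS];
      size_t per = NREC / NCPUS;
      for (int i = 0; i < NCPUS; i++) {
          sl[i].base = rec + i * per;
          sl[i].n = (i == NCPUS - 1) ? NREC - i * per : per;
          pthread_create(&tid[i], NULL, sort_slice, &sl[i]);
      }
      for (int i = 0; i < NCPUS; i++) pthread_join(tid[i], NULL);

      /* Phase 2: the merge -- a single stream of comparisons on ONE
       * thread, regardless of how many CPUs produced the sorted runs. */
      size_t idx[NCPUS] = {0};
      for (size_t o = 0; o < NREC; o++) {
          int best = -1;
          for (int i = 0; i < NCPUS; i++)
              if (idx[i] < sl[i].n &&
                  (best < 0 ||
                   sl[i].base[idx[i]] < sl[best].base[idx[best]]))
                  best = i;
          out[o] = sl[best].base[idx[best]++];
      }

      printf("first=%d last=%d\n", out[0], out[NREC - 1]);
      free(rec);
      free(out);
      return 0;
  }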

Moving on to another remark elsewhere in this thread...

I believe the advantage of the zSeries CP versus the Intel CPU is that
the CP "isn't king"--  the CP is subject to reliability monitoring and
may be turned off should it start failing.  So the failure model isn't
the same as with, say, an Intel or AMD CPU;  when a CP is getting flaky
it gets noticed immediately.  (Note that this is based on what I read
in Appendix A of the "Linux for S/390" Redbook.)  Reliability is
apparently enforced by making failure immediately detectable.

I do *not* know when "hot servicing" was introduced in the mainframe
world because, up until the mid-1980s, my mainframe experience was
limited to "off-brand" systems (Xerox and Sperry-UNIVAC, for instance)
and, since then, I've been talking in a falsetto (as a Unix "greppie").

As for the Xerox Sigma-9 (and the -5 and -7 models), the I/O model was
VERY close to the S/370's.  From my perusal of documentation on the
UNIVAC 1100/80 (and /60), the same I/O model ("cuu" addressing, SIOs,
etc.) seemed to be popular.  (I had some fun writing assembly programs
that would boot from the card reader and then do some channel
programming and entertaining I/O.  Toggling stuff into an idle Xerox
Sigma-9 is a trip, too.)
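
For anyone who never saw that I/O model up close, here's a rough sketch
in C of the kind of thing a channel program deals with.  This is purely
my own illustration of the classic S/370-style scheme as I remember it
(simplified, not bit-exact, and not taken from any Sigma-9 or UNIVAC
manual);  the buffer address is a made-up example.

  /* Rough illustration of S/370-style channel I/O concepts:  a "cuu"
   * device address that SIO would target, and one Channel Command Word
   * (CCW) describing a single transfer.  Field layout is simplified
   * for readability, not a bit-exact format-0 CCW.                     */
  #include <stdint.h>
  #include <stdio.h>

  struct ccw {
      uint8_t  cmd;        /* command code, e.g. a "read" op            */
      uint32_t data_addr;  /* 24-bit storage address of the I/O buffer  */
      uint8_t  flags;      /* chaining bits (command/data chain, SLI)   */
      uint16_t count;      /* byte count to transfer                    */
  };

  int main(void) {
      /* "cuu" addressing: channel + unit, e.g. device 00C, the
       * traditional card reader address on S/360-era systems.          */
      uint16_t cuu = 0x00C;

      /* A one-CCW "program": read one 80-column card image.            */
      struct ccw read_card = { 0x02, 0x008000, 0x00, 80 };

      printf("SIO %03X: cmd=%02X addr=%06X flags=%02X count=%u\n",
             (unsigned)cuu, (unsigned)read_card.cmd,
             (unsigned)read_card.data_addr, (unsigned)read_card.flags,
             (unsigned)read_card.count);
      return 0;
  }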

--------------------
John R. Campbell, Speaker to Machines (GNUrd)      (813) 356-5322 (t/l 697)
Adsumo ergo raptus sum
MacOS X: Because making Unix user-friendly was easier than debugging
Windows.
Red Hat Certified Engineer (#803004680310286)
IBM Certified: IBM AIX 4.3 System Administration, System Support
