Phil Payne wrote:
>> I'm confused. And I've been following this whole discussion from
>> the beginning. Can one of you, or any of you, reiterate?
>
> I've been here since 1969, and I still get confused.

        Yup.  Mind you, I like to *sow* some confusion, given how
        often I seem to reap it without originally planting it.

> The ultimate issue is the assumption that there is - somewhere -
> a "magic metric".  Something that will let us divide what (e.g.)
> a zSeries does by what some iteration of what Intel does and
> derive a factor letting us compare price/performance.

        Ah, the Holy Grail.

        What metric?  Is it behind that little bunny?

> T'ain't so.  And the problem is that it is incredibly easy for
> the purveyors of such low-end and (apparently) cheap boxes to
> postulate these challenges, and it's a multi-million dollar
> issue (literally) for someone like IBM to demonstrate the
> superiority of (e.g.) zSeries in a really serious environment
> involving dozens or hundreds of images on a single system.

        Actually, I think Appendix "A" of the original "Linux for S/390"
        redbook had a good comparison of the various trade-offs and
        priorities between a mainframe like the S/390 and a desktop.

        I was once surprised to find a definition of a mainframe that
        I'd written as a lark being used as a quotable reference.  I
        originally said that a mainframe takes the following into
        account:

          1)    Maximum single-thread performance.
          2)    Maximum I/O Connectivity.
          3)    Maximum I/O Throughput.

        SANs impact 2 and 3 above but they take no prisoners in their
        effort to provide connectivity (capacity expansion) and speed
        (delivery).

        The first point is due to many of the workloads being single
        threaded (the "merge" phase of a Sort, no matter how much you
        subdivide the earlier phases, can't be multithreaded without
        some serious Heisenbugs forming).
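
        To picture the single-thread point: here's a minimal Python
        sketch (my own illustration, nothing from the S/390 world) of
        that merge phase.  However many sorted runs the earlier phases
        built in parallel, element N of the merged output can only be
        chosen after element N-1 has been taken, so the merge is a
        serial chain of dependent comparisons.

          import heapq

          def merge_runs(runs):
              """Merge pre-sorted runs into one ordered stream."""
              # Seed a heap with the head of each non-empty run.
              heads = [(run[0], i, 0) for i, run in enumerate(runs) if run]
              heapq.heapify(heads)
              while heads:
                  # Popping the smallest head is inherently sequential:
                  # element N must be emitted before N+1 can be chosen.
                  value, i, j = heapq.heappop(heads)
                  yield value
                  if j + 1 < len(runs[i]):
                      heapq.heappush(heads, (runs[i][j + 1], i, j + 1))

          # Three runs sorted in parallel, merged serially:
          print(list(merge_runs([[1, 4, 9], [2, 3, 8], [5, 6, 7]])))
          # [1, 2, 3, 4, 5, 6, 7, 8, 9]

        (Parallel merge algorithms do exist, but they buy concurrency
        at the price of the simple determinism above, which is where
        the Heisenbugs creep in.)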

        I missed one point in my original comment:

          4)    Maximum Reliability.

        Which is one place where the S/390 itself has few competitors.
        Based on "Appendix A" the throughput of an s/390 is limited by
        the desire to ensure accurate and reliable results, so there's
        a performance hit (though I doubt it's all that much of an
        impact) in the desperate quest to make sure that all the results
        are CORRECT.
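
        A toy way to picture that trade-off (purely illustrative; this
        is NOT how the hardware actually does it): do the work twice
        and compare, paying throughput for confidence in the answer.

          def checked(fn, *args):
              """Run a deterministic fn twice and demand agreement
              (a software caricature of checked/lockstep execution)."""
              first = fn(*args)
              second = fn(*args)
              if first != second:
                  raise RuntimeError("results disagree; fail over or retry")
              return first

          print(checked(sum, [1, 2, 3]))   # 6, verified at twice the cost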

> "Add another 500 users."
>
> "No change."
>
> "Add another 500."
>
> "OK.  We got something.  4% response time degradation."

        Quite unlike some of the "stories" about TSS/360, though that
        was over 30 years ago.  I suspect a *lot* of what was learned
        from VM/CMS was folded into TSO.

        Note that I was most familiar with the off-brand stuff until
        some 6 years ago, having done a lot of time with Xerox Sigma-9s
        and UNIVAC 1100s (I even did microcode for an array processor
        used in the energy industry).

        Speaking of response time, one fellow confided to me that it
        was best measured by "obscenities between returns".

--
 John R. Campbell             Speaker to Machines                 [EMAIL PROTECTED]
  "As a SysAdmin, yes, I CAN read your e-mail, but I DON'T get that bored!"-me
   Disclaimer:  All opinions expressed above are those of John Campbell and
                do not reflect the opinions of his employer(s) or lackeys.
                Anyone who says differently is itching for a fight!
