Re: context switching

2003-04-06 Thread Alan Cox
On Sul, 2003-04-06 at 16:17, Barton Robinson wrote:
 When somebody tells me something is very fast - I always
 wonder at the value. Like having a car that will easily
 do 150MPH on roads supposedly limited to 65mph... It's
 fun, it's fast, it's expensive, but not all that useful.

The PC is cheap, fast and fun, and there are as yet no
speed limits (although crashes due to equipment failures
are far too common ;))

Alan


Re: context switching

2003-04-06 Thread John R. Campbell
Phil Payne wrote:
 I'm confused. And I've been following this whole discussion from
 the beginning. Can one of you, or any of you, reiterate?

 I've been here since 1969, and I still get confused.

Yup.  Mind you, I like to *sow* some confusion, given how
often I seem to reap it without originally planting it.

 The ultimate issue is the assumption that there is - somewhere -
 a magic metric.  Something that will let us divide what (e.g.)
 a zSeries does with what some iteration of what Intel does and
 derive a factor letting us compare price/performance.

Ah, the Holy Grail.

What metric?  Is it behind that little bunny?

 T'ain't so.  And the problem is that it is incredibly easy for
 the purveyors of such low-end and (apparently) cheap boxes to
 postulate these challenges, and it's a multi-million dollar
 issue (literally) for someone like IBM to demonstrate the
 superiority of (e.g.) zSeries in a really serious environment
 involving dozens or hundreds of images on a single system.

Actually, I think Appendix A of the original Linux for S/390
redbook had a good comparison of the various trade-offs and
priorities between a mainframe like the S/390 and a desktop.

I was once surprised to find a definition of a mainframe that
I had written as a lark being quoted elsewhere.  I originally
said that a mainframe takes the following into account:

  1) Maximum single-thread performance.
  2) Maximum I/O Connectivity.
  3) Maximum I/O Throughput.

SANs impact 2 and 3 above but they take no prisoners in their
effort to provide connectivity (capacity expansion) and speed
(delivery).

The first point is due to many of the workloads being single
threaded (the merge phase of a sort, no matter how much
you can subdivide it, can't be multithreaded without some
serious Heisenbugs forming).
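
To make that serial dependency concrete, here is a minimal k-way
merge sketch (my own illustration, nothing from the redbook): each
pick of the next output element depends on the cursor state left
behind by the previous pick, so the emit loop cannot simply be split
across threads.

/* Minimal sketch of the merge phase of an external sort.  Each
 * iteration must compare the current heads of all runs and emit the
 * global minimum before the next pick can happen, so the loop
 * carries a strict serial dependency. */
#include <stdio.h>
#include <limits.h>

#define K 3        /* number of sorted runs */
#define RUNLEN 4   /* elements per run */

int main(void)
{
    int runs[K][RUNLEN] = {
        { 1, 5,  9, 13 },
        { 2, 6, 10, 14 },
        { 3, 7, 11, 15 },
    };
    int pos[K] = { 0 };  /* read cursor into each run */

    for (int emitted = 0; emitted < K * RUNLEN; emitted++) {
        int best = -1, bestval = INT_MAX;
        /* Scanning the run heads could be parallelised; the emit
         * order cannot: pick N+1 depends on the cursors after
         * pick N. */
        for (int r = 0; r < K; r++)
            if (pos[r] < RUNLEN && runs[r][pos[r]] < bestval) {
                best = r;
                bestval = runs[r][pos[r]];
            }
        pos[best]++;
        printf("%d ", bestval);
    }
    putchar('\n');
    return 0;
}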

I missed one point in my original comment:

  4) Maximum Reliability.

Which is one place where the S/390 itself has few competitors.
Based on Appendix A, the throughput of an S/390 is limited by
the desire to ensure accurate and reliable results, so there's
a performance hit (though I doubt it's all that much of an
impact) in the desperate quest to make sure that all the results
are CORRECT.

 Add another 500 users.

 No change.

 Add another 500.

 OK.  We got something.  4% response time degradation.

Unlike some of the stories about TSS/360, though that was over 30
years ago.  I suspect a *lot* of what was learned from VM/CMS was
added to TSO.

Note that I was most familiar with the off-brand stuff until
some 6 years ago, having done a lot of time with Xerox Sigma-9s
and UNIVAC 1100s (I even did microcode for an array processor
used in the energy industry).

Speaking of response time, one fellow confided to me that it
was best measured by obscenities between returns.

--
 John R. Campbell Speaker to Machines [EMAIL PROTECTED]
  As a SysAdmin, yes, I CAN read your e-mail, but I DON'T get that bored!-me
   Disclaimer:  All opinions expressed above are those of John Campbell and
do not reflect the opinions of his employer(s) or lackeys.
Anyone who says differently is itching for a fight!


Re: context switching

2003-04-06 Thread Paul Raulerson
Well, how about this for confusing, then.

If you can run Linux/390 on a PC using an emulator, and the emulator
cranks out about 12 MIPS on a 1 GHz machine, then it is pretty fair
to say that the PC is doing something around 12 MIPS, right?  (Well,
maybe not quite that simple.  We run several sets of baselines that
measure processor, I/O, and mixed processing loads, and we take that
into account.)

Now, the way I do it is to use a handy-dandy little program we have
here that measures performance in terms of the things we normally do
in our processing: read and write files, process data with several
more or less complex formulae, retrieve records in various ways from
data files, and lastly, do some kind of interactive work.  The
interactive part comes last because it is the hardest to measure,
of course.
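
A stripped-down sketch of that sort of mixed baseline (entirely
hypothetical; the actual program is not shown here) might time each
class of work separately, something like:

/* Hypothetical mixed-workload baseline, loosely in the spirit of the
 * program described above; none of this is the actual code. */
#include <stdio.h>
#include <math.h>
#include <time.h>

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    /* 1. Compute: a few "more or less complex formulae". */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double acc = 0.0;
    for (long i = 1; i <= 5000000L; i++)
        acc += sqrt((double)i) * sin((double)i);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("compute:  %.3fs (acc=%g)\n", elapsed(t0, t1), acc);

    /* 2. File I/O: write, then re-read, a scratch file of fixed
     *    80-byte records. */
    enum { RECS = 100000, RECSZ = 80 };
    char rec[RECSZ] = { 0 };
    clock_gettime(CLOCK_MONOTONIC, &t0);
    FILE *f = fopen("scratch.dat", "w+b");
    if (!f) { perror("scratch.dat"); return 1; }
    for (int i = 0; i < RECS; i++)
        fwrite(rec, RECSZ, 1, f);
    rewind(f);
    while (fread(rec, RECSZ, 1, f) == 1)
        ;
    fclose(f);
    remove("scratch.dat");
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("file i/o: %.3fs\n", elapsed(t0, t1));
    return 0;
}

The real thing would add the record-retrieval and interactive legs,
but timing each class of work separately is the essential idea.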

We run the program under emulation, then recompile it and run it
under Intel Linux.  That gives us a rough baseline measure.  We can
then extrapolate by factoring in the MIPS we expect to be available
to the process.

While the measure is still pretty rough, it does give us general
guidelines for expected performance.  If it performs x well on the
emulated system, then we can expect it to perform x*y on a
non-emulated system, with differences in DASD and the like taken
into account.
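
In other words (my notation, not anything from the post), the
extrapolation is a straight scaling by the MIPS ratio:

/* Hypothetical helper; the variable names are mine, the post only
 * describes the idea.  Scale a score measured under emulation by
 * the ratio of expected native MIPS to emulated MIPS. */
double extrapolate(double emulated_score,
                   double native_mips, double emulated_mips)
{
    return emulated_score * (native_mips / emulated_mips);
}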

Of course, one nasty factor to take into account is that MIPS are
devilishly hard to determine on an Intel processor.  I think they
did that on purpose.  But anyway, a 1.3 GHz PIII (or a comparable
chip in the same range) delivers just under 1000 MIPS of performance
in our boxes.

1000 Intel MIPS == 12 Mainframe MIPS is the very rough measure.
Actually, all the other factors combined work out (In our case!
Your mileage will vary! :)  to 1000 Intel MIPS = 18.3 Mainframe
MIPS.

Roughly speaking, in terms of the processing we do, that means we
would need an Intel box running at around 13.5 GHz to equal a
192 MIPS IFL engine.  Obviously, other factors come into play that
tend to either soften or exaggerate this rough ratio, but we find it
pretty darn close in terms of pure processor work.
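
That figure is easy to check from the numbers above (a
back-of-the-envelope, nothing more):

/* Back-of-the-envelope check of the ~13.5 GHz figure, using only
 * the numbers given above. */
#include <stdio.h>

int main(void)
{
    double intel_mips_per_ghz = 1000.0 / 1.3; /* 1.3 GHz PIII ~ 1000 MIPS */
    double mf_per_1000_intel  = 18.3;  /* 1000 Intel MIPS ~ 18.3 mainframe MIPS */
    double target_mf_mips     = 192.0; /* the IFL engine */

    double intel_mips_needed = target_mf_mips / mf_per_1000_intel * 1000.0;
    printf("%.1f GHz\n", intel_mips_needed / intel_mips_per_ghz); /* ~13.6 */
    return 0;
}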

Note, I am not saying that a 13.5 GHz Intel box can replace a
mainframe.  I am merely saying that, with our processing mixture, we
find that ratio to be more or less true for comparison.
