

re:
http://www.garlic.com/~lynn/2009p.html#12 Secret Service plans IT reboot

there used to be a joke about TSO users not realizing how deplorable
performance was because they never saw the difference between running
with & w/o MVS (actually, in large part, CKD & multi-track search).

CKD & multi-track search, introduced with the original 360, was a
trade-off for the scarce resources of the period (trading relatively
abundant I/O capacity against scarce real storage) ... by the mid-70s,
the relative amounts of resources had nearly inverted (i.e. which
resources were the scarcest), starting to make multi-track search
exactly the wrong thing to do.

there was a large national retail operation with a consolidated
datacenter (a large number of systems in loosely-coupled configuration)
... which started to run into severe throughput problems during peak
periods. This went on for a while, with lots of experts being brought in
over a period of time, until they eventually got around to calling me
in.

I was brought into a classroom with a large number of long class tables
... covered with high stacks of paper performance reports from all the
systems. I started to leaf through all the pages (for shared disk
activity, I had to aggregate drive activity from the different
systems/reports in my head) while they went through an overall summary
of the symptoms.

After about 20-25 minutes ... I started to notice a somewhat anomalous
circumstance ... about the only correlation between "good" thruput and
"nearly no thruput" was that a specific pack had aggregate i/o counts
between 6 and 7 per second during high-load/low-thruput periods (which
would hardly seem to be a thruput limitation).

After a little more investigation ... it turned out the pack contained
the shared application library for the whole complex ... still more
investigation showed that the PDS had a three-cylinder PDS directory.

Back of the envelope calculation was that the avg. depth of search was
a cylinder and a half per PDS member lookup ... that would be two
multi-track search I/Os taking an elapsed time of nearly 1/2 second. The
assumption then was that the two PDS directory lookup I/Os would be
followed by a single I/O for the PDS member load. That accounts for an
aggregate of six I/Os per second saturating the drive ... basically
limiting the whole national loosely-coupled infrastructure to an
aggregate of two application (PDS) program library loads per second.
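
the arithmetic, sketched out (python-ish) ... assuming a 3330-class
drive, 3600rpm (~16.7ms/revolution) and 19 tracks/cylinder -- the drive
model and per-i/o figures here are illustrative assumptions, not taken
from the actual reports:

  # back-of-envelope, assuming a 3330-class drive (illustrative figures)
  MS_PER_REV     = 60000.0 / 3600      # ~16.7ms per revolution at 3600rpm
  TRACKS_PER_CYL = 19

  # avg. search depth of a cylinder and a half of PDS directory
  # = one full-cylinder multi-track search + one half-cylinder search
  search_ms      = (TRACKS_PER_CYL + TRACKS_PER_CYL / 2.0) * MS_PER_REV
  member_load_ms = 30.0                # assumed seek+read for the member itself

  per_load_ms   = search_ms + member_load_ms
  loads_per_sec = 1000.0 / per_load_ms
  ios_per_sec   = loads_per_sec * 3    # 2 directory search i/os + 1 member load

  print("directory search  %.0fms" % search_ms)     # ~475ms ... "nearly 1/2 second"
  print("member loads/sec  %.1f" % loads_per_sec)   # ~2/second
  print("aggregate i/os    %.1f" % ios_per_sec)     # ~6 ... the counts in the reports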

Each full-cylinder multi-track search represented enormous busy elapsed
time for the processor channel (locking out any other activity on the
same channel).  The full-cylinder multi-track searches also locked up
the (shared) controller, string and drive ... locking out all systems
from accessing anything else associated with those resources.

The eventual result was reconfiguring everything to try and come as
close as possible to eliminating the long multi-track searches
(drastically reducing the PDS directory size) ... and replicating the
shared application library on non-shared drives for each system.

PDS directory (& vtoc) multi-track searches alleviated the need for
real storage to contain the directory information (at enormous cost in
I/O resources). By the mid-70s, real storage was becoming plentiful
enough that it was practical to keep high-usage (vtoc &) PDS directory
information cached in system storage (allowing fast lookup of the
in-storage index) ... so program loads could happen at "normal" disk
activity thruput speeds (say 30-50/second) ... instead of at 2/second
(limited by the enormous PDS directory multi-track search penalty).
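
a minimal sketch of the in-storage directory idea (python-ish ... the
names and structure are purely illustrative, not how any actual system
component was implemented):

  # trade real storage for i/o: scan the PDS directory once into
  # storage, then member lookup is a storage reference instead of a
  # multi-track search
  directory_cache = {}                   # member name -> disk location ("TTR")

  def prime_directory(read_directory_blocks):
      # one-time pass over the directory blocks
      for name, location in read_directory_blocks():
          directory_cache[name] = location

  def load_member(name, read_member_at):
      # in-storage lookup (no directory i/o at all), then a single i/o
      # for the member itself ... i.e. "normal" disk thruput
      # (30-50/second) instead of 2/second
      location = directory_cache[name]
      return read_member_at(location)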

This resource trade-off also showed up with RDBMS ... the original
relational/sql was done on a vm system in bldg. 28 ... system/r
... misc. past posts: http://www.garlic.com/~lynn/subtopic.html#systemr

In the 70s, there was somewhat of a rivalry between the IMS group in
STL and the system/r group in bldg. 28 on the main plant site. The IMS
group claimed better trade-offs because record pointers were exposed as
part of the data ... and it was possible to go directly to a specific
piece of data. This was contrasted with the RDBMS implementation, which
had an implicit index ... which could take 4-5 disk i/os to eventually
find the location of the desired data record. The implicit index also
tended to double the physical disk space required (vis-a-vis the same
data in IMS). The system/r group countered that the exposed record
pointers created significant administrative and maintenance overhead
... especially for adding data (nearly eliminated by the implicit
indexes).

The resource trade-off argument changed with the combination of
enormous disk size increases and a drastic fall in cost/mbyte (muting
the issue of doubling disk space for the indexes). At the same time
there was a significant increase in available system real storage
... making it practical to cache a large portion of the (implicit) RDBMS
indexes (drastically reducing the separate physical disk i/os needed to
find a data record).
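
a toy i/o-count comparison of the two approaches (python-ish,
illustrative numbers only ... the 4-level index and the caching
behavior are assumptions for the sketch, not measurements):

  INDEX_LEVELS = 4                       # assumed depth of an (implicit) rdbms index

  def ims_style_fetch(record_pointer):
      # exposed record pointer ... go straight to the data record
      return 1                           # one physical disk i/o

  def rdbms_fetch(cached_levels=0):
      # walk the implicit index top-down; levels already cached in real
      # storage cost no physical i/o, the rest cost one i/o each, plus
      # one i/o for the data record itself
      index_ios = max(INDEX_LEVELS - cached_levels, 0)
      return index_ios + 1

  print(ims_style_fetch(None))           # 1
  print(rdbms_fetch(cached_levels=0))    # 5 ... the 4-5 i/o case
  print(rdbms_fetch(INDEX_LEVELS))       # 1 ... index fully cached in real storage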

-- 
40+yrs virtualization experience (since Jan68), online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
