The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (IBM Mainframe Discussion List) writes:
> I made a mistake.  A track not in the cache would take on the order of
> 20 milliseconds, so that would equate to 20 days instead of one day.  A
> track already cached would result in an access time of one millisecond.
> If the 4K block can be found in a buffer somewhere in virtual storage
> inside the processor, it might take from 100 to 1000 instructions to
> find and access that data, which would equate to 100 to 1000 seconds,
> or roughly one to 17 minutes.  And that assumes that the page
> containing the 4K block of data can be accessed without a page fault
> resulting in a page-in operation (another I/O), in which case we are
> back to several days to do the I/O.
> 
> By the way, it takes at least 5000 instructions in z/OS to start and
> finish one I/O operation, so you can add about two hours of overhead to
> perform the I/O that lasts for 20 days.
>  
> You really want to avoid doing an I/O if at all possible.
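
as an aside, the arithmetic behind the scaled-time analogy above works
out if you assume a processor doing something like 100 MIPS (a number
not given in the quote, purely an assumption for illustration) and then
pretend each instruction takes one second of "human" time ... a minimal
sketch in python:

# scaled-time analogy: pretend 1 instruction == 1 "second", then express
# real latencies on that stretched scale.  MIPS is an assumed figure.
MIPS = 100e6                      # assumed instructions per second

def scaled_seconds(real_seconds):
    # 1 instruction maps to 1 scaled second, so the scale factor is MIPS
    return real_seconds * MIPS

def pretty(seconds):
    if seconds >= 86400: return "%.1f days" % (seconds / 86400)
    if seconds >= 3600:  return "%.1f hours" % (seconds / 3600)
    if seconds >= 60:    return "%.1f minutes" % (seconds / 60)
    return "%.0f seconds" % seconds

for label, latency in (("track not in cache (20 ms)", 20e-3),
                       ("track already cached (1 ms)", 1e-3),
                       ("5000-instruction i/o pathlength", 5000 / MIPS)):
    print(label, "->", pretty(scaled_seconds(latency)))

with MIPS = 100e6 this prints roughly 23 days, 1.2 days and 1.4 hours
... in the same ballpark as the "20 days" and "about two hours" figures
in the quote.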

reply to comment about RPS-miss (in the vmesa-l flavor of this thread)
http://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?

i had been making comments over a period of yrs that disk relative
system thruput had declined by an order of magnitude (i.e. disks were
getting faster, but processors were getting much faster, faster).  this
eventually led somebody in the disk division (gpd) to assign the gpd
performance group to refute the statements. after several weeks they
came back and effectively said that i had somewhat understated the disk
relative system thruput degradation ... when RPS-miss was taken into
account.

they then put a somewhat more positive spin on it and turned it
into share 63 presentation b874 ... some past references:
http://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

one of the issues is whether the 5k instruction pathlength is the
roundtrip starting from EXCP (including channel program translation
overhead) or just the roundtrip after the request has been passed to
the i/o supervisor?

for comparison numbers ... i had gotten cp67 total "roundtrip" for a
page fault down to approx. 500 instructions ... this included page fault
handling, the page replacement algorithm, a prorated fraction of the
page i/o write pathlength (which includes everything to start/finish the
i/o), the total page i/o read pathlength (including the full i/o
supervisor), and two task switches thru the dispatcher (one to switch to
somebody else while waiting on the page fault to finish and another to
switch back after the page i/o read finishes). getting it down to 500
instructions involved touching almost every piece of code involved in
all of the operations.
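
a minimal sketch of the kind of per-fault accounting involved ... note
that all of the component instruction counts below are hypothetical
placeholders; only the ~500 total is from the above ... the point is
that page writes happen less often than page reads, so only a prorated
fraction of the write pathlength gets charged to each fault:

# per-fault pathlength accounting sketch (python); component numbers are
# purely illustrative, not the actual cp67 breakdown
def per_fault_pathlength(fault_handling, replacement, read_path,
                         write_path, writes_per_read, task_switch):
    return (fault_handling
            + replacement
            + read_path                        # full page read pathlength
            + write_path * writes_per_read     # prorated page write pathlength
            + 2 * task_switch)                 # switch away and switch back

print(per_fault_pathlength(fault_handling=100, replacement=50, read_path=150,
                           write_path=200, writes_per_read=0.5,
                           task_switch=50))    # -> 500.0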

I believe the "5000" instruction number was one of the reasons that
3090 extended store was accessed with a synchronous instruction (since
the asynchronous overhead and all the related gorp in mvs was so large).

earlier, there had been some number of "electronic" 2305 paging devices
deployed at internal datacenters ... referred to as the "1655" model
(from an outside vendor). these effectively had low latency but were
limited to channel transfer speed and still cost whatever the
asynchronous processing overhead was.

the 3090 extended store was done because of physical packaging issues
... but later when physical packaging was no longer an issue ... there
were periodic discussions about configuring portions of regular memory
as simulated extended store ... to compensate for various shortcomings
in page replacement algorithms.

with regard to the cp67 "500" instruction number vis-a-vis MVS ... i
would periodically take some heat regarding MVS having much more robust
error recovery as part of the 5000 number (even tho the 500 number was
doing significantly more). so later, when i was getting to play in bldgs
14 & 15 (dasd engineering lab and dasd product test lab), i had the
opportunity to rewrite the vm370 i/o supervisor. the labs in bldgs 14 & 15
were running processor "stand-alone" testing for the dasd/controller
"testcells" (one at a time). They had tried doing this under MVS but had
experienced 15min MTBF (system crashing and/or hanging with just a
single testcell). I undertook to completely rewrite the i/o supervisor
to make it absolutely bullet proof and to allow concurrent testcell
operation in an operating system environment. lots of past posts
mentioning getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk

some old postings about comparisons of the degradation of disk relative
system thruput. the claim was that doing a similar type of cms workload
... in going from cp67 on a 360/67 with 80 users to vm370 on a 3081 ... it
should have shown an increase to several thousand online users
... instead of an increase to only 300 or so online users. the increase
in online users roughly tracks the change in disk system thruput ... as
opposed to the difference in processor thruput (a rough back-of-the-envelope
on those numbers follows the list of postings below)
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
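
a rough back-of-the-envelope on the 80-user vs 300-user numbers above
(the MIPS figures are assumptions, not from the postings; call the
360/67 roughly 0.5 MIPS and the 3081 roughly 14 MIPS):

# if online users had scaled with processor thruput, 80 users should have
# become a few thousand; the observed ~300 implies disk system thruput
# scaled far less, i.e. roughly an order of magnitude relative decline
mips_36067, mips_3081 = 0.5, 14.0        # assumed processor thruput
users_then, users_now = 80, 300          # from the postings

cpu_ratio = mips_3081 / mips_36067            # ~28x processor increase
predicted_users = users_then * cpu_ratio      # ~2240, "several thousand"
observed_ratio = users_now / users_then       # ~3.75x actual increase
print(predicted_users, cpu_ratio / observed_ratio)   # relative decline ~7.5x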

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
