bi...@mainstar.com (Bill Fairchild) writes:
> I remember now about Amdahl's CSIM.  Thanks for the lengthy post on it.
>
> Cache and NVS sizes were indeed vanishingly small in the 1980s
> compared to today's models.  I remember attending a SHARE session,
> ca. 1989, in which an IBM cache control unit person from Tucson said
> that IBM had modeled vast amounts of traced user I/O requests and
> decided that 4M, or at most 8M, of NVS was all that anyone would ever
> need to support DASD fast writes.  This reminds of me T. J. Watson's
> prediction in 1943 that "there is a world market for maybe five
> computers."  lol

re:
http://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset

I had gotten into some disputes with Tucson over some of their cache
conclusions. The first two 3880 cache controllers were ironwood
(3880-11) and sherif (3880-13) ... both had 8mbyte controller caches
... ironwood was a 4k "page" cache, and sherif was a full-track cache.

(hardware) fast-write allowed system logic to continue as soon as the
record was in the controller cache ... but before the arm had been
moved and the data actually deposited on disk. for
no-single-point-of-failure ... this required that the electronic
storage be replicated and able to survive power failure (marketing
would tend to claim that whatever was shipping was what was actually
needed). in some sense, it is a temporary staging area to compensate
for disk arm delay (and possibly allows re-arranging the order of
writes to optimize disk arm motion).
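As a rough illustration (a hypothetical sketch, not the actual 3880 logic), the staging behavior described above can be modeled as: a write "completes" once the record lands in replicated non-volatile storage, and a later background destage flushes pending records in cylinder order to reduce arm motion. All names here are invented for the sketch.

```python
# Hypothetical model of (hardware) fast-write staging -- NOT actual
# controller microcode. A write is acknowledged as soon as the record
# is in both NVS copies; destage to disk happens later, in cylinder
# order, approximating an arm-motion-friendly schedule.

class FastWriteController:
    def __init__(self):
        # two copies of pending records, emulating replicated NVS
        # (no single point of failure, survives power loss)
        self.nvs_primary = {}
        self.nvs_mirror = {}
        self.disk = {}  # stand-in for the actual disk surface

    def write(self, cylinder, record, data):
        # record staged in both NVS copies; caller may proceed
        # immediately, before any arm motion has occurred
        self.nvs_primary[(cylinder, record)] = data
        self.nvs_mirror[(cylinder, record)] = data
        return "write complete"  # signaled before destage

    def destage(self):
        # flush pending fast-writes in (cylinder, record) order
        for key in sorted(self.nvs_primary):
            self.disk[key] = self.nvs_primary[key]
        self.nvs_primary.clear()
        self.nvs_mirror.clear()

ctl = FastWriteController()
ctl.write(30, 1, b"rec-a")
ctl.write(10, 2, b"rec-b")
assert (10, 2) not in ctl.disk  # acknowledged, but not yet on disk
ctl.destage()
assert ctl.disk[(10, 2)] == b"rec-b"
```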

fast-write logic shows up in 1980s DBMS implementations (not
necessarily mainframe) where the DBMS directly manages a cache of
records ... and a transaction is considered committed as soon as the
transaction image log record has been written ... even though the
actual data record hasn't yet been written to its disk "home" location.
The aggregate amount of (outstanding) "fast-write" records would tend
to be related to how fast the system was executing transactions. I ran
into some issues with this when attempting to extend it to a cluster
environment ... frequently used past reference (jan92 meeting in
ellison's office)
http://www.garlic.com/~lynn/95.html#13
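The DBMS fast-write pattern above can be sketched as follows (a minimal illustration, assuming a toy in-memory model; the names `commit`, `checkpoint`, etc. are invented for the sketch): the transaction is acknowledged once the log record is forced, while the changed data record may still sit only in the DBMS record cache.

```python
# Toy model of the DBMS fast-write pattern: commit is acknowledged on
# log-record write; the data record reaches its "home" location later.

log = []           # stand-in for the transaction image log (on disk)
record_cache = {}  # DBMS-managed cache of records
home_disk = {}     # "home" locations on disk

def commit(txn_id, key, value):
    record_cache[key] = value         # update in cache only
    log.append((txn_id, key, value))  # force the log record
    return "committed"                # commit acknowledged here

def checkpoint():
    # later, write outstanding fast-write records to home locations;
    # the number outstanding grows with the transaction rate
    home_disk.update(record_cache)

commit(1, "acct", 100)
assert "acct" not in home_disk  # committed, but not yet at home location
checkpoint()
assert home_disk["acct"] == 100
```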

part of the (non-mainframe) effort was getting the DBMS vendors to
move their vax/cluster implementations over to ha/cmp. at the time,
when a record had to be moved from one cluster member to another, their
vax/cluster implementation would first force any "fast-write" records
to their home disk location ... before the other cluster member read
them off disk. This ignored the fast interconnect technologies that
would allow direct cache-to-cache transfers (of fast-write records). It
turns out that getting them off the force-to-disk scenario involved
some tricky issues with correctly merging transaction commits from
multiple (cluster) logs during a recovery (say after a total power
outage). Early on there was apprehension about deploying direct
cache-to-cache transfer (of potentially fast-write records) ... because
of the complexities of log merging during recovery. misc. ha/cmp posts
(direct cache-to-cache transfers, w/o first forcing to disk, were part
of cluster scaleup in the dbms environment):
http://www.garlic.com/~lynn/subtopic.html#hacmp

This is somewhat independent of cache size issues and (re-use) hit
ratios (i.e. once in cache, what is the probability that the same
record will be requested again). The early 3880-13 (full-track) cache
documentation claimed 90% hit rates. Their model was a sequential read
of a track formatted with ten records. The first record read from a
track would bring in the whole track ... and then the next nine
sequential record reads would be found in the cache. I raised the issue
that if the application switched to full-track sequential reads, the
cache hit ratio would drop to zero percent.
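The arithmetic behind both claims is simple enough to work out directly (ten records per track is the figure from the documentation's own model):

```python
# Worked version of the 3880-13 hit-ratio model: a track formatted
# with ten records, read sequentially one record at a time, versus
# the same data read with full-track reads.

records_per_track = 10

# record-at-a-time: the first read misses (and stages the whole
# track), the next nine reads hit in cache
hits = records_per_track - 1
hit_ratio_record_reads = hits / records_per_track
assert hit_ratio_record_reads == 0.9  # the claimed 90%

# full-track reads: each read is for a new track, never in cache
tracks_read = 5
hit_ratio_track_reads = 0 / tracks_read
assert hit_ratio_track_reads == 0.0   # drops to zero percent
```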

The 3880-11 was being pitched as a paging device ... to somewhat
compensate for the lack of a 2305 followon. I had done page migration
and some work on dup/no-dup algorithms in the 70s. Relatively large
system storage with roughly the same amount of paging cache could
result in a zero percent hit rate. The issue is that if the sizes of
aggregate cache and system storage were comparable ... then every page
that was in the cache would also be in system storage (and therefore
would never be requested) ... only pages that weren't in system storage
would be requested (but those weren't likely to be in cache ... because
the cache was full of duplicates of what was in system storage). In
that situation, I created a dynamic "no-duplication" switch ... heavily
loaded 2305s would deallocate any record read into system storage.

So when the 3880-11 was announced, a typical system configuration was a
3081 with 32mbytes of real storage. Adding four 3880-11s to the
configuration would give a total of only 32mbytes of cache. There would
easily be the situation that every page in cache would also be in 3081
memory ... and therefore would never be used again. The only pages read
into the 3081 would be pages that had a very low probability of also
being in cache (zero percent hit rate). I proposed a "no-dup" strategy
for the 3880-11, similar to what I had done for 2305s in the 70s. The
3880-11 had a special read CCW ... if the record was in cache, it would
read it from cache and purge it from cache ... and if it wasn't in the
cache, it would do a direct cache-bypass read from disk. The result was
that the only way a page could get into cache was on a write
(presumably when it was being replaced in system storage).
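The no-dup read behavior can be sketched in a few lines (a hypothetical illustration of the policy, not the actual CCW semantics; the function names are invented): a hit returns the record and purges it, a miss bypasses the cache, and only writes populate the cache.

```python
# Toy model of the "no-dup" special read: hit = read-and-purge,
# miss = cache-bypass read from disk; writes are the only path
# that populates the cache.

cache = {}
disk = {"p1": b"page-1", "p2": b"page-2"}

def nodup_read(key):
    if key in cache:
        return cache.pop(key)  # read-and-purge on hit
    return disk[key]           # cache-bypass read on miss

def write(key, value):
    # presumably a page being replaced in system storage;
    # the only operation that puts a record into the cache
    disk[key] = value
    cache[key] = value

write("p3", b"page-3")
assert nodup_read("p3") == b"page-3"  # hit ...
assert "p3" not in cache              # ... and purged
assert nodup_read("p1") == b"page-1"  # miss, bypasses cache
assert "p1" not in cache              # miss did not populate cache
```

The design point: since every cached page would otherwise also be resident in system storage (and so never re-requested), purging on read and populating only on write keeps the cache holding exactly the pages system storage has given up.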

The 3880-11 & 3880-13 were later upgraded from 8mbyte to 32mbyte caches
as the 3880-21 & 3880-23 (if total aggregate controller cache is much
larger than system memory, it mitigates the need for a no-dup
strategy).

misc. past posts mentioning dup/no-dup paging:
http://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
http://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2001l.html#55 mainframe question
http://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
http://www.garlic.com/~lynn/2002f.html#20 Blade architectures
http://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small 
clusters
http://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer 
cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller 
chache
http://www.garlic.com/~lynn/2007e.html#60 FBA rant
http://www.garlic.com/~lynn/2007l.html#61 John W. Backus, 82, Fortran 
developer, dies
http://www.garlic.com/~lynn/2008f.html#19 
Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
http://www.garlic.com/~lynn/2008h.html#84 Microsoft versus Digital Equipment 
Corporation
http://www.garlic.com/~lynn/2008k.html#80 How to calculate effective page fault 
service time?
http://www.garlic.com/~lynn/2010.html#47 locate mode, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010g.html#73 Interesting presentation

-- 
42yrs virtualization experience (since Jan68), online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
