Re: How to analyze a volume's access by dataset

2010-05-06 Thread Miklos Szigetvari

On 5/1/2010 12:00 AM, John Norgauer wrote:

Is there any software available that will show the access by dataset (or by
CCHR) for a given volume?



John Norgauer
Senior Systems Programmer
Mainframe Technical Support Services
University of California Davis Medical Center
2315 Stockton Blvd
ASB 1300
Sacramento, Ca 95817
916-734-0536

  SYSTEMS PROGRAMMING..  Guilty, until proven innocent !! JN  2004

Hardware eventually breaks - Software eventually works  anon


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


   

Hi

Found a 15-year-old C++ program with an asm READVTOC routine that 
processes GTF seek reports.

If you need it, I can send it off-list.



Re: How to analyze a volume's access by dataset

2010-05-05 Thread Bill Fairchild
I remember now about Amdahl's CSIM.  Thanks for the lengthy post on it.

Cache and NVS sizes were indeed vanishingly small in the 1980s compared to 
today's models.  I remember attending a SHARE session, ca. 1989, in which an 
IBM cache control unit person from Tucson said that IBM had modeled vast 
amounts of traced user I/O requests and decided that 4M, or at most 8M, of NVS 
was all that anyone would ever need to support DASD fast writes.  This reminds 
me of T. J. Watson's prediction in 1943 that there is a world market for maybe 
five computers.  lol

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Larry Chenevert
Sent: Tuesday, May 04, 2010 7:45 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: How to analyze a volume's access by dataset

For those who were not there at the time -- in the 80's, when cache and 
fast-write were first introduced, caches were tiny compared to current 
technology, and NVS sizes were even smaller -- much smaller.  Memory for 
cache and NVS was quite expensive.  Caching and fast-write were, for a short 
time, specified on a dataset by dataset basis.

Many internal marketing tools have corners cut in development (well I guess 
some companies even cut corners on their products!) and have very rough user 
interfaces, but not CSIM, which had all the attributes, look and feel of a 
flagship product.  It was not a product, but was a tool for internal people 
to use -- although it was probably left with some customers.

I suppose this tool could have been used to model the performance of 
different cache algorithms but I doubt it was ever used in that mode.

Larry Chenevert



Re: How to analyze a volume's access by dataset

2010-05-05 Thread Anne Lynn Wheeler
bi...@mainstar.com (Bill Fairchild) writes:
 I remember now about Amdahl's CSIM.  Thanks for the lengthy post on it.

 Cache and NVS sizes were indeed vanishingly small in the 1980s
 compared to today's models.  I remember attending a SHARE session,
 ca. 1989, in which an IBM cache control unit person from Tucson said
 that IBM had modeled vast amounts of traced user I/O requests and
 decided that 4M, or at most 8M, of NVS was all that anyone would ever
 need to support DASD fast writes.  This reminds me of T. J. Watson's
 prediction in 1943 that there is a world market for maybe five
 computers.  lol

re:
http://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by 
dataset

I had gotten into some disputes with Tucson over some of their cache
conclusions. The first two 3880 cache controllers were ironwood
(3880-11) and sherif (3880-13) ... they were both 8mbyte cache
controller caches ... ironwood was 4k page cache, and sherif was
full-track cache.

(hardware) fast-write allowed system logic to continue as soon as the record
was in the controller cache ... but before the arm had been moved and the
data actually deposited on disk. for no-single-point-of-failure ... this
required that the electronic storage be replicated and could survive
power failure (marketing would tend to claim that whatever was shipping
was what was actually needed).  in some sense, it is a temporary staging
area to compensate for disk arm delay (and possibly being able to
optimally re-arrange the order of writes tailored to disk arm motion).

fast-write logic shows up in 1980s DBMS implementations (not necessarily
mainframe) where the DBMS is directly managing a cache of records ...  and
a transaction is considered committed as soon as the transaction image log
record has been written ... but the actual data record hasn't yet been
written to its disk home location. The aggregate amount of (outstanding)
fast-write records would tend to be related to how fast the system was
executing transactions.  I ran into some issues with this attempting to
extend it to a cluster environment ... frequently used past reference (jan92
meeting in ellison's office)
http://www.garlic.com/~lynn/95.html#13
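
the commit-at-log-write idea above can be sketched in a few lines of python
(names and structure invented purely for illustration ... not any particular
DBMS): a transaction is durable once its log record is written, even though
the data page is still only dirty in cache, and recovery replays the log over
the disk image.

```python
class MiniDB:
    """toy write-ahead-logging sketch; not any particular DBMS."""
    def __init__(self):
        self.log = []      # stands in for the force-written transaction log
        self.cache = {}    # dirty pages -- the outstanding "fast-write" records
        self.disk = {}     # home locations on disk

    def commit(self, txn_id, key, value):
        self.log.append((txn_id, key, value))  # log record written: commit point
        self.cache[key] = value                # page stays dirty in the cache

    def checkpoint(self):
        self.disk.update(self.cache)           # lazily push pages to home disk
        self.cache.clear()

    def recover(self):
        """after a crash the cache is lost; replaying the log over the disk
        image rebuilds all committed state."""
        state = dict(self.disk)
        for _, key, value in self.log:
            state[key] = value
        return state

db = MiniDB()
db.commit(1, "acct:A", 50)
db.commit(2, "acct:B", 75)
# crash here: db.cache contents are lost, but both commits survive recovery
print(db.recover())   # {'acct:A': 50, 'acct:B': 75}
```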

some number of the (non-mainframe) implementations involved getting the DBMS
vendors to move their vax/cluster implementation over to ha/cmp. at the
time, when a record had to be moved from one cluster member to another,
their vax/cluster implementation was to first force any fast-write
records to their home disk location ... before the other cluster member
read it off disk. This ignored the fast interconnect technologies that
would allow direct cache-to-cache (of fast-write records) transfers. It
turns out that to get them off the first-write-to-disk scenario ... there
were some tricky issues with correctly merging transaction commits from
multiple (cluster) logs during a recovery (say after total power
outage). Early on there was apprehension about deploying direct
cache-to-cache transfer (of potentially fast-write records) ... because
of the complexities with log merging during recovery. misc. ha/cmp posts
(direct cache-to-cache transfers, w/o first forcing to disk, was part of
cluster scaleup in dbms environment):
http://www.garlic.com/~lynn/subtopic.html#hacmp

This is somewhat independent of cache size issues and (re-use) hit
ratios (i.e. once in cache, what is the probability that the same record
would again be requested). The early 3880-13 (full-track) cache
documentation claimed 90% hit rates. Their model was sequential read of
track formatted with ten records. The first record read from a track
would bring in the whole track ... and then the next nine sequential
record reads would be found in the cache. I raised the issue that if the
application switched to full-track sequential reads, the hit rate would
drop to zero percent.
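
the two hit-rate claims come down to trivial arithmetic (python sketch; the
ten-records-per-track geometry is the documentation's own model, not a
measured workload):

```python
# toy arithmetic for the 3880-13 full-track cache claim (illustrative only)

def hit_ratio_record_reads(records_per_track):
    """sequential single-record reads: the first read of each track misses
    (staging the whole track), the remaining records on that track hit."""
    return (records_per_track - 1) / records_per_track

def hit_ratio_fulltrack_reads():
    """if the application instead issues one full-track read per track,
    every read touches a track not yet staged: every read misses."""
    return 0.0

print(hit_ratio_record_reads(10))   # 0.9 -- the quoted "90% hit rate"
print(hit_ratio_fulltrack_reads())  # 0.0 -- the objection raised above
```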

The 3880-11 was being pitched as a paging device ... to somewhat
compensate for the lack of a 2305 follow-on. I had done page migration and
some work on dup/no-dup algorithms in the 70s. Relatively large system
storage with a comparable amount of paging cache could result in a zero
percent hit rate. The issue is that if the page is brought into the system
... and the sizes of aggregate cache and system storage were comparable
... then every page that was in the cache would also be in system
storage (and therefore would never be requested) ... only pages that
weren't in system storage would be requested (but they then weren't
likely to be in cache ... because the cache was full of duplicates of what
was in system storage). In that situation, I created a dynamic
no-duplication switch ... heavily loaded 2305s would deallocate any
record read into system storage.
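
a toy LRU model (python; all sizes invented for the sketch) of the duplicate
effect described above: with cache and system storage the same size, the dup
policy leaves the cache holding pages that are also resident, so faults
essentially never hit the cache; a no-dup policy keeps only paged-out pages
in the cache.

```python
import random
from collections import OrderedDict

def paging_hit_rate(no_dup, mem_pages=64, cache_pages=64,
                    working_set=256, refs=50000, seed=42):
    """fraction of page faults satisfied from the paging cache."""
    random.seed(seed)
    memory, cache = OrderedDict(), OrderedDict()  # both managed LRU
    faults = cache_hits = 0
    for _ in range(refs):
        page = random.randrange(working_set)
        if page in memory:                  # resident: no paging i/o at all
            memory.move_to_end(page)
            continue
        faults += 1
        if page in cache:                   # fault satisfied from the cache
            cache_hits += 1
            if no_dup:
                del cache[page]             # deallocate: avoid the duplicate
            else:
                cache.move_to_end(page)
        elif not no_dup:                    # dup: staging fills the cache with
            cache[page] = True              # pages that are also in memory
            if len(cache) > cache_pages:
                cache.popitem(last=False)
        memory[page] = True                 # page is brought into the system
        if len(memory) > mem_pages:
            evicted, _ = memory.popitem(last=False)
            if no_dup:                      # no-dup: cache holds paged-out pages
                cache[evicted] = True
                if len(cache) > cache_pages:
                    cache.popitem(last=False)
    return cache_hits / faults

print(paging_hit_rate(no_dup=False))   # near zero: cache full of duplicates
print(paging_hit_rate(no_dup=True))    # substantially higher
```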

So when the 3880-11 was announced, a typical system configuration was a 3081
with 32mbytes of real storage. Adding four 3880-11s to the configuration
would give only a total of 32mbytes of cache. There would easily be the
situation that every page in cache would also be in 3081 memory ... and
therefore 

Re: How to analyze a volume's access by dataset

2010-05-05 Thread Larry Chenevert
I could probably dig out my old speeds and feeds documents (actually 
little laminated cards we gave out to customers and prospects) that listed 
the NVS size options of, as I recall, 4M, 8M, 12M, and 16M and some other 
stuff.  They are probably in the attic.  I may have time to do this in the 
coming weeks before it gets too hot to forage in the attic.  Maybe I will 
post a scan of one or more of them.


T. J. Watson . . . yeah -- maybe five computers...   I, for one, am glad he 
underestimated!


Larry Chenevert
- Original Message - 
From: Bill Fairchild bi...@mainstar.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Wednesday, May 05, 2010 9:21 AM
Subject: Re: How to analyze a volume's access by dataset





Re: How to analyze a volume's access by dataset

2010-05-04 Thread John Ticic
Cast your mind back to GTFPARS. This IBM FDP would build seek 
histograms
using IEHLIST VTOC Listings as input to map the extents of the 
datasets on
the volume.


Ah Ron, the good old days. One of my tasks at NAB was using GTFPARS 
to map out the SYSRES access pattern and build allocation JCL to 
optimize data set placement around the VTOC. 

Regards

John



Re: How to analyze a volume's access by dataset

2010-05-04 Thread Ron Hawkins
John,

I remember, and Ian Lee wouldn't believe me when I wanted to change the
ordering and VTOC location after we put SYSRES on the 3880-23. Now we do
this with volume placement in array groups, instead of datasets on volumes.

Ron



 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 John Ticic
 Sent: Tuesday, May 04, 2010 12:33 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] How to analyze a volume's access by dataset
 


Re: How to analyze a volume's access by dataset

2010-05-04 Thread Neil Duffee
SMF type 42  S42DSN/S42DSVOL?  I have partial SAS layouts for dispersal
if anyone wants them.

--  signature = 6 lines follows --
Neil Duffee, Joe SysProg, U d'Ottawa, Ottawa, Ont, Canada
telephone:1 613 562 5800 x4585 fax:1 613 562 5161
mailto:NDuffee of uOttawa.ca http://aix1.uottawa.ca/~nduffee
How *do* you plan for something like that? Guardian Bob, Reboot
For every action, there is an equal and opposite criticism.
Systems Programming: Guilty, until proven innocent John Norgauer 2004
 
 -Original Message-
 From: IBM Mainframe Discussion List 
 [mailto:ibm-m...@bama.ua.edu] On Behalf Of John Norgauer
 Sent: Friday, April 30, 2010 3:01 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: How to analyze a volume's access by dataset
 
 Is there any software available that will show the access by 
 dataset(or by CCHR)  for a given volume?



Re: How to analyze a volume's access by dataset

2010-05-04 Thread Larry Chenevert

Amdahl had an internal tool that did this.

With the introduction of caching control units in the mid '80's, Amdahl 
developed a marketing tool acronym'd (application of the Fairchild verbing 
rule) CSIM -- Cache Simulator.  The intended use of CSIM was to size caches 
in 6880 and 6100 control units and later NVS in 6100's based on GTF CCW 
trace data.  If I recall correctly, a CSIM utility took a snapshot of the 
subject volume(s) VTOC during tracing and this is what CSIM used to convert 
seek addresses to dataset names.  CSIM allowed the user to rerun the CCW 
trace data through it with different cache configurations and CSIM 
approximated the results.


The marketing people usually delivered a CSIM install tape to a site and 
worked with a customer person to do the monitoring and simulation.


One of the CSIM reports closely resembled an RMF Direct Access Device 
Activity Report -- with the exception that rather than producing one line 
per DASD volume, this report contained one line *per dataset*.  This allowed 
the user to drill down from the RMF DASD Activity report to the dataset 
level.


As an SE and later a consultant, I most often used CSIM to identify *hot 
datasets* where RMF was not able to go deeper than the volume level.


For those who were not there at the time -- in the 80's, when cache and 
fast-write were first introduced, caches were tiny compared to current 
technology, and NVS sizes were even smaller -- much smaller.  Memory for 
cache and NVS was quite expensive.  Caching and fast-write were, for a short 
time, specified on a dataset by dataset basis.


Many internal marketing tools have corners cut in development (well I guess 
some companies even cut corners on their products!) and have very rough user 
interfaces, but not CSIM, which had all the attributes, look and feel of a 
flagship product.  It was not a product, but was a tool for internal people 
to use -- although it was probably left with some customers.


I suppose this tool could have been used to model the performance of 
different cache algorithms but I doubt it was ever used in that mode.


I distinctly remember the name of the CSIM developer and I am hesitant to 
post his name here because I never met him in person (only exchanged a few 
emails) and don't know his current status.  As far as I can tell he has 
never posted here -- sorry Bill.  The CSIM developer was once active in CMG 
and may still be.


Larry Chenevert

- Original Message - 
From: Bill Fairchild bi...@mainstar.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Monday, May 03, 2010 12:27 PM
Subject: Re: How to analyze a volume's access by dataset



Re: How to analyze a volume's access by dataset

2010-05-04 Thread Anne Lynn Wheeler
larrychenev...@verizon.net (Larry Chenevert) writes:
 Amdahl had an internal tool that did this.

there were two different simulators done in late 70s ...  one used
standard i/o vm370 trace and modeled activity. it was modified to take a
full 3330 configuration and output the configuration for migration to
3350 ... with load-balancing across 3350 drives.

The other was DMKCOL (internal mods to vm) which did super high
performance cchr capture and drove it various thru cache design and
replacement algorithms. There was also work on abstracting the
information in real time so that it could be run as part of normal
production operation for providing input into dynamic disk allocation.
While the initial work was under vm370 ... it was used for capturing
information from production cms-intensive operations as well as guest
operating systems (under vm370) ... the methodology could be added to
other operating systems.

Early cache simulation results were looking at optimal placement of a fixed
amount of electronic storage cache ... i.e. the trade-off between disk-level
cache, controller-level cache, channel-level cache, (303x channel)
director-level cache or system-level cache. One of the results was that a
single system-level cache was more efficient than dividing the available
electronic memory into multiple smaller caches. This result was
purely from the standpoint of cache hit ratios and aggregate amount of
fixed electronic storage. The limitation at the time (late 70s) was no
way to have system-level managed addressability for large amounts of
cache ... and no easy way to have an independent processor managing the
information. Even tho it showed that multiple 8mbyte 3880 controller
caches were less efficient (in terms of cache hit ratios) than a single
large system cache ... there was no easy way of packaging and shipping
the system cache (although it might have contributed to justifying
expanded store in 3090).

I had done a lot of work for DMKCOL ... and it was somewhat satisfying
that the different cache-level simulation showed that a single global
cache had a higher hit ratio than equivalent electronic storage
partitioned into different 3880 controllers. This corresponded to the
work I had done as an undergraduate in the 60s showing that
global replacement was more efficient than local/partitioned
replacement.
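
the global-versus-partitioned result can be illustrated with a small LRU
simulation (python; the skewed trace and cache sizes are made up for the
sketch): one global cache beats the same total capacity split into
per-address-range caches, because the hot range thrashes its small partition
while the cold partitions sit idle.

```python
import random
from collections import OrderedDict

def lru_hit_ratio(trace, capacity):
    """hit ratio of one global LRU cache with `capacity` slots."""
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        else:
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)
    return hits / len(trace)

def partitioned_hit_ratio(trace, capacity, parts, span):
    """same total capacity split into `parts` LRU caches, each owning a
    contiguous range of `span` keys (like per-controller caches)."""
    caches = [OrderedDict() for _ in range(parts)]
    per, hits = capacity // parts, 0
    for key in trace:
        c = caches[key // span]
        if key in c:
            hits += 1
            c.move_to_end(key)
        else:
            c[key] = True
            if len(c) > per:
                c.popitem(last=False)
    return hits / len(trace)

random.seed(7)
N, PARTS, SLOTS = 1000, 4, 100
# skewed reference pattern: low-numbered "hot" records dominate
trace = random.choices(range(N), weights=[1 / (i + 1) for i in range(N)], k=30000)
print(lru_hit_ratio(trace, SLOTS))                             # one global cache
print(partitioned_hit_ratio(trace, SLOTS, PARTS, N // PARTS))  # 4 x 25 slots
```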

Slightly later I got pulled into academic dispute over global versus
local ... there was some amount of concerted opposition to granting a
stanford PHD on global replacement. At acm sigops '81 meeting, I was
asked to provide supporting evidence on global replacement from my 60s
undergraduate days. Presumably some sort of internal corporate politics
resulted in my not being allowed to respond until oct82 (sounds better
than assuming that they were taking sides in the academic dispute)
http://www.garlic.com/~lynn/2006w.html#email821019
in this post:
http://www.garlic.com/~lynn/2006w.html#46

a couple past posts mentioning DMKCOL work
http://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After 
Multi-Core?
http://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After 
Multi-Core?

misc. past posts referencing global replacement work
http://www.garlic.com/~lynn/subtopic.html#wsclock

Longer term DMKCOL collecting (several months) ... after identifying
relatively short term use patterns (used for things like cache design)
... started turning up other kinds of longer period patterns ... certain
collections of accesses done on periodic basis.

some of this shows up in backup/archive containers ... collections
treated as a single unit ... I had done the original CMSBACK that then
morphed into the workstation datasave facility, then ADSM, and is now TSM
... some old email
http://www.garlic.com/~lynn/lhwemail.html#cmsback

misc. past backup/archive posts
http://www.garlic.com/~lynn/submain.html#backup

-- 
42yrs virtualization experience (since Jan68), online at home since Mar1970



Re: How to analyze a volume's access by dataset

2010-05-04 Thread Ed Finnell
 
In a message dated 5/4/2010 7:55:20 P.M. Central Daylight Time,
larrychenev...@verizon.net writes:

post his name here because I never met him in person (only exchanged a few
emails) and don't know his current status.  As far as I can tell he has

IIRC it came with the 9880 cache controllers too. Helped size and decide when 
to reorg.  Think they only kept 255 extents in CACHE.  The guy I remember was 
Steve Terwilliger.
 




Re: How to analyze a volume's access by dataset

2010-05-03 Thread Bill Fairchild
GTF traces the seek address (CCHR, et al.) stored in the IOSB but the 
post-processing to determine which of the data sets on the volume has each 
given traced CCHR within its allocated extents is non-trivial, and I don't know 
of anyone who has written such code, unless I did it and have forgotten about 
it.  FastDASD, marketed by CA the last I heard, does all that by sampling, not 
tracing, the DASD addresses used by most access methods, then reads the VTOC to 
determine in which data set each I/O occurred.  I used to be involved with 
FastDASD, and may even have added support for GTF input, but if so, it was too 
long ago for my mercury delay line memory cells to remember.
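
For what it's worth, the post-processing is conceptually straightforward once
the extents are in hand -- here is a sketch (Python; the extent table, the
geometry constant, and the dataset names are all hypothetical) that maps a
traced seek address to a dataset via binary search over extents sorted by
starting track:

```python
from bisect import bisect_right

TRKS_PER_CYL = 15  # 3380/3390-style geometry; an assumption for this sketch

def abs_track(cc, hh, tracks_per_cyl=TRKS_PER_CYL):
    """Flatten a CCHH seek address to an absolute track number."""
    return cc * tracks_per_cyl + hh

def build_index(extents):
    """Sort (dsname, first_track, last_track) extents for binary search."""
    exts = sorted(extents, key=lambda e: e[1])
    return [e[1] for e in exts], exts

def dataset_for_seek(cc, hh, index):
    starts, exts = index
    t = abs_track(cc, hh)
    i = bisect_right(starts, t) - 1
    if i >= 0 and exts[i][1] <= t <= exts[i][2]:
        return exts[i][0]
    return None  # VTOC, free space, or unallocated track

# hypothetical extent table, as it might be built from an IEHLIST VTOC listing
extents = [("SYS1.LINKLIB", 0, 449), ("PROD.PAYROLL.DATA", 450, 1049)]
idx = build_index(extents)
print(dataset_for_seek(31, 7, idx))   # track 472 -> PROD.PAYROLL.DATA
print(dataset_for_seek(100, 0, idx))  # track 1500 -> None
```

Real code would, of course, read the extents from a VTOC listing and handle 
multi-extent datasets; the lookup itself stays the same.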

TMON/MVS, on the other hand, gets the CCHRs by intercepting I/Os rather than 
sampling z/OS control blocks.  Its analysis is thus far more accurate, but the 
amount of data analyzed is much smaller than is possible with FastDASD.

I'm not sure how unabridged the SMF records are; i.e., probably not all I/Os 
are accounted for, especially those done by the operating system and/or exotic 
subsystems using low-level access methods.  Both TMON/MVS and FastDASD are 
capable of catching such SMF-avoiding I/Os as long as they are done by 
components that put the I/O's starting DASD seek address in the IOSB, but that 
is not required of authorized programs that do I/O.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Longnecker, Dennis
Sent: Friday, April 30, 2010 6:20 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: How to analyze a volume's access by dataset



Re: How to analyze a volume's access by dataset

2010-05-03 Thread Ron Hawkins
Bill,

Cast your mind back to GTFPARS. This IBM FDP would build seek histograms
using IEHLIST VTOC Listings as input to map the extents of the datasets on
the volume.

CA-ASTEX does some very good IO level analysis. As with FASTDASD, CA-ASTEX
intercepts every IO. I think that this replaced FASTDASD after CA bought
Legent.

It used to have a pretty good LRU cache modeler built in. I wonder if it
still works and supports 512GB or more of cache?

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 Bill Fairchild
 Sent: Monday, May 03, 2010 8:27 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] How to analyze a volume's access by dataset
 


Re: How to analyze a volume's access by dataset

2010-05-03 Thread Bill Fairchild
I never worked with GTFPARS, and only now vaguely remember it, thanks to your 
mentioning it.

After CA bought UCCEL in 1987 and redundanted me [1], I lost all contact with 
FastDASD's goings-on.  So I don't know about its functional replacement by 
Astex which occurred with CA's subsequent acquisition of Legent.

"It used to have a pretty good LRU cache modeler built in."  Assuming that "it" 
means FastDASD, then I don't remember the upper limit on the supported cache 
size.  Supporting that function was one of my all-time favorite projects.  And 
thanks for the honorable mention.

Bill Fairchild

[Most nouns and adjectives can easily be verbed, as in "The operator onlined 
the volume."]

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Ron Hawkins
Sent: Monday, May 03, 2010 11:21 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: How to analyze a volume's access by dataset



Re: How to analyze a volume's access by dataset

2010-04-30 Thread Longnecker, Dennis
TMON/MVS gives us that information for a real-time snapshot.  You can either 
start a summary or detail I/O trace and look at the trace once it is completed.  
I imagine a GTF I/O trace would also be able to get you that information, but 
since we have TMON, we use that.

Dennis

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
John Norgauer
Sent: Friday, April 30, 2010 3:01 PM
To: IBM-MAIN@bama.ua.edu
Subject: How to analyze a volume's access by dataset

Is there any software available that will show the access by dataset(or by 
CCHR)  for a given volume?





Re: How to analyze a volume's access by dataset

2010-04-30 Thread Ron Hawkins
John,

Look at the SMF Type 42 subtype 6 record.

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 John Norgauer
 Sent: Friday, April 30, 2010 3:01 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: [IBM-MAIN] How to analyze a volume's access by dataset
 
 Is there any software available that will show the access by dataset(or by
 CCHR)  for a given volume?
 
 
 