On Mar 1, 2007, at 7:47 AM, Veilleux, Jon L wrote:
We are running into issues with the volume of SMF records that we need
to process. We create a GDG every day for that day's records.
Occasionally we run into contention issues during the day, but mostly it
hits us at the end of the day, when we read the day's records from the
GDG and split the output onto various other tapes for specific functions
(accounting, performance, etc.). While that job is reading the many
tapes created during the day, the GDG base is locked, so our dump jobs
are forced to wait, which eventually causes lost SMF data.
Has anyone else run into this problem? How are other shops handling
high SMF volumes?
We have 4 SMF datasets on each LPAR. Two are 3000 cyl and two are
1500 cyl.
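[For context, a dump of one of those SMF datasets into the daily GDG
would typically be an IFASMFDP job along these lines; the dataset names,
space figures, and record-type range here are illustrative, not the
poster's actual setup:]

```jcl
//* Sketch only - SYS1.MAN1 and SMF.DAILY.RECORDS are hypothetical names
//DUMP     EXEC PGM=IFASMFDP
//INDD1    DD   DSN=SYS1.MAN1,DISP=SHR
//OUTDD1   DD   DSN=SMF.DAILY.RECORDS(+1),
//              DISP=(NEW,CATLG,DELETE),
//              UNIT=SYSDA,SPACE=(CYL,(300,100),RLSE)
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  INDD(INDD1,OPTIONS(DUMP))
  OUTDD(OUTDD1,TYPE(000:255))
/*
```

[Each such dump allocates a new (+1) generation, which is why the GDG
base enqueue held by the end-of-day split job blocks them.]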
-----------------------SNIP------------------------------------------
We too have run into this issue and came up with this. It is not a
cure-all (or close), but it seemed to work well for us.
We set up a GDG for each system, i.e. smf.raw.&sysid.smf.gxxxxvxx, and
let each system create generations all day long (this was tape
originally; then it went to DASD). Then at about 10 PM a job kicked off
on each system to create an "intermediate" daily file, deleting the old
GDGs on successful completion. Then a "final" SMF tape was created,
taking all of the intermediate tapes in.
You are right about the contention issue, and at times it did need
babysitting, but that is what production control people are for.
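[A rough JCL sketch of the per-system scheme described above; every
dataset name, the LIMIT value, and the step layout are assumptions for
illustration, not Ed's actual jobs:]

```jcl
//* 1. One GDG base per system, defined once via IDCAMS
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  DEFINE GDG (NAME(SMF.RAW.SYSA) LIMIT(255) SCRATCH)
/*
//* 2. Nightly per-system job: read all of the day's generations
//*    (a GDG base reference concatenates every generation) and
//*    write the "intermediate" daily file
//MERGE    EXEC PGM=IFASMFDP
//INDD1    DD   DSN=SMF.RAW.SYSA,DISP=SHR
//OUTDD1   DD   DSN=SMF.DAILY.SYSA,DISP=(NEW,CATLG,DELETE),
//              UNIT=TAPE
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  INDD(INDD1,OPTIONS(DUMP))
  OUTDD(OUTDD1,TYPE(000:255))
/*
//* 3. Housekeeping: delete the day's generations only if all
//*    prior steps ended with RC=0
//CLEANUP  EXEC PGM=IDCAMS,COND=(0,NE)
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  DELETE SMF.RAW.SYSA.*
/*
```

[The "final" tape job would then concatenate the per-system
intermediates (SMF.DAILY.SYSA, SMF.DAILY.SYSB, ...) the same way. The
key point is that each system enqueues only on its own GDG base, so the
daytime dump jobs no longer collide with the end-of-day consolidation.]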
Ed
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html