Re: [EXTERNAL] Collecting SMF data with Logstreams

2021-02-10 Thread Colin Paice
An advantage of log streams is that you can route, say, type 1 records to one
log stream and type 2 records to another.  If type 1 produces too much data,
type 2 is not affected.
If you log to data sets, you have to process the data set to extract the type 1
and type 2 records.  If type 1 produces so many records that the SMF buffer
fills up, you can lose type 2 records.
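For example, such a split could be set up in SMFPRMxx along these lines (a
sketch only; the log stream names and record types below are invented for
illustration):

```
RECORDING(LOGSTREAM)                  /* write SMF to log streams       */
LSNAME(IFASMF.PERF,TYPE(70:79))       /* e.g. RMF records to one stream */
LSNAME(IFASMF.CICS,TYPE(110))         /* e.g. CICS records to another   */
DEFAULTLSNAME(IFASMF.DFLT)            /* everything else                */
```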
Colin

On Wed, 10 Feb 2021 at 15:03, Ron Burg  wrote:

> Thank you Gadi and Jim,
>
> For us, using a DASD-only logstream is the preferred way, and I have already
> set it up on a test system and played with it a little.
> I set up a RETPD of 10 days. This works fine, and I understand that this is
> the preferred approach rather than a period of years; I just wanted to check
> the possibility of a longer period on a logstream, as this extends the
> benefits (for example, maybe I can migrate the offload datasets of the
> logstream itself to tape).
>
> The other thing that bothers me more is that IFASMFDL gives me a return code
> of 04 even for critical errors.
> For example, I tried to dump (not archive) data from the logstream to a
> sequential dataset, but I specified a wrong log stream name, which resulted
> in the following messages:
> IFA815I INSTALLATION ERROR FOR LOGSTREAM IFASMF.BLABLA.DFLT
> LOGSTREAM IS UNAVAILABLE CAUSED BY RC=08-080B
> IFA825I LOG STREAM NAME HAS NOT BEEN DEFINED IN LOGR POLICY.
>
> Although I can see RC=08-080B in the message, the job itself ends with RC 4.
> How can I catch those types of errors? The only solution I have thought of is
> running another step of REXX to parse the IFASMFDL output, search for common
> error messages, and set a return code accordingly.
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: [EXTERNAL] Collecting SMF data with Logstreams

2021-02-10 Thread Ron Burg
Thank you Gadi and Jim,

For us, using a DASD-only logstream is the preferred way, and I have already
set it up on a test system and played with it a little.
I set up a RETPD of 10 days. This works fine, and I understand that this is the
preferred approach rather than a period of years; I just wanted to check the
possibility of a longer period on a logstream, as this extends the benefits
(for example, maybe I can migrate the offload datasets of the logstream itself
to tape).
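(For reference, a DASD-only SMF log stream of this sort is defined through the
IXCMIAPU utility roughly as follows; the names and sizes below are invented,
and RETPD/AUTODELETE are what govern how long offloaded data is kept:)

```
DATA TYPE(LOGR) REPORT(YES)
DEFINE LOGSTREAM NAME(IFASMF.DFLT)
       DASDONLY(YES)
       MAXBUFSIZE(32760)
       STG_SIZE(5000)
       LS_SIZE(20000)
       RETPD(10)
       AUTODELETE(YES)
       HLQ(IXGLOGR)
```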

The other thing that bothers me more is that IFASMFDL gives me a return code of
04 even for critical errors.
For example, I tried to dump (not archive) data from the logstream to a
sequential dataset, but I specified a wrong log stream name, which resulted in
the following messages:
IFA815I INSTALLATION ERROR FOR LOGSTREAM IFASMF.BLABLA.DFLT
LOGSTREAM IS UNAVAILABLE CAUSED BY RC=08-080B
IFA825I LOG STREAM NAME HAS NOT BEEN DEFINED IN LOGR POLICY.

Although I can see RC=08-080B in the message, the job itself ends with RC 4.
How can I catch those types of errors? The only solution I have thought of is
running another step of REXX to parse the IFASMFDL output, search for common
error messages, and set a return code accordingly.
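A follow-on check of that sort could be sketched like this (shown here in
shell rather than REXX, purely for illustration; the function name and sample
file are invented, and the message IDs checked are just the two from the job
output above):

```shell
# Sketch of a follow-on step: scan captured IFASMFDL output for messages
# that signal a critical error even though the step itself ended with RC=4.
# IFA815I/IFA825I come from the job output quoted earlier; extend the list
# as you meet other cases.

check_ifasmfdl() {
    # $1 = file holding the captured SYSPRINT.
    # Returns 8 if a critical message is present, 0 otherwise.
    if grep -E 'IFA815I|IFA825I' "$1" >/dev/null 2>&1; then
        return 8
    fi
    return 0
}

# Demonstrate against a small sample of the output quoted in this thread.
cat > sample.sysprint <<'EOF'
IFA815I INSTALLATION ERROR FOR LOGSTREAM IFASMF.BLABLA.DFLT
LOGSTREAM IS UNAVAILABLE CAUSED BY RC=08-080B
EOF

if check_ifasmfdl sample.sysprint; then
    echo "SYSPRINT clean"
else
    echo "critical IFASMFDL message found"
fi
```

In a real job this would run as an extra step against a copy of the SYSPRINT,
with its completion code used to fail the job.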



Re: [EXTERNAL] Collecting SMF data with Logstreams

2021-02-10 Thread Horne, Jim
Ron,

You definitely want to go to logstreams.  We went to CF logstreams 10 years ago 
and have never regretted it.  And that is the first question you need to
answer: coupling facility, which is sysplex-wide, or DASD-only?  That will
affect all your other decisions.

The next thing you need to do is see how much data you dump per day and in
which record types.  That will help you determine which logstreams you want to
use.  Sadly, the picture IBM paints of keeping data in the logstream is not
practical for most shops.  Collecting over 100 gigabytes of data per day is not
unusual; that does not lend itself to practical long-term storage.  You also
need to determine how much disk space you are willing to give to offload
datasets.  While this is less of a concern for most shops than it was 10 years
ago, it is still something you want to factor in.  When we did our original
calculations, we were comfortable keeping far less than a single day on disk.
We periodically archive (archive, not dump) to tape, and consolidate at daily
and weekly levels.  You should allow room for more offload datasets than you
expect, to handle the times the archive process breaks.

On your question of return code 4 on the logstream dump, our archive jobs 
accept 4 as okay because of the message you cite.  The archive process will 
fail, not merely RC=4, if it doesn't work.  Again, test this for yourself; it 
will make you feel better and give you more confidence in your process.

Best of luck!

Jim Horne
-Original Message-
I'm trying to figure out the best practice for collecting SMF data with
logstreams instead of MAN datasets. There are a few points I seem to be
missing, and I hope to get some advice from those who have used it for a while.
I can see the benefits in storing the SMF data on a logstream only; for
example, I can use IFASMFDL to set a range of dates and not have to worry about
the location of each week/month in a different GDG generation.
Also, the ability of the logger to use the timestamp to read only the relevant
period can save time and CPU when offloading data, in comparison to a
sequential read.
Those benefits, however, are diminished if I use the logstream only as
short-term storage before migrating the data to the regular GDG datasets.
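A date-range dump of the sort described above might look like this (a sketch
only; the data set and log stream names are invented):

```
//SMFDUMP  EXEC PGM=IFASMFDL
//SYSPRINT DD SYSOUT=*
//OUTDD1   DD DSN=MY.SMF.WEEKLY,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  LSNAME(IFASMF.DFLT,OPTIONS(DUMP))
  OUTDD(OUTDD1,TYPE(0:255))
  DATE(2021001,2021007)
/*
```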

Here are some open questions I have:
1. The ideal situation for me would be to have all SMF data on the logstream,
for example for a period of two years, but there is a limit of 168 offload
datasets before DSEXTENTs come into use, and each dataset can be up to about
2GB in size. This makes me wonder whether I have misunderstood the use of a
logstream as long-term storage of SMF data.
Does it make sense to hold SMF data for a few years on a logstream?

2. Using the IFASMFDL utility is very tricky, as I can get RC 4 both for
"RELATIVEDATE RANGE EXTENDS INTO FUTURE" and for a case where I ran the job on
a different system (DASD-only logstream) and it failed to find the logstream.
Worse than that, if I try to dump data from a few offload datasets (a week, for
example) and one of the "middle" datasets is missing, I can get a return code
of 0 while the data returned is only what the logger was able to find.
How can I be sure that the data dumped by IFASMFDL is complete?

