This sounds suspiciously like something I saw with SDSF a long time ago. I like 
to think that things would have improved by now, but who knows? This is how 
you could tell:

If you don't have the original job still on spool, run a job to create a test 
SYSOUT dataset. Use something that writes records that don't contain lots of 
blanks, and the more records the better - several million would be good. 
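
Something like this would do to generate the test data. It is only a sketch: 
the job card, the held SYSOUT class X and the REPEAT count are placeholders, 
and DFSORT's OUTFIL REPEAT is just one convenient way to multiply a seed 
record - any utility that writes a few million non-blank lines is fine. Keep 
an eye on your shop's JES output line limits when writing that many lines.

//TESTDATA JOB (ACCT),'GEN TEST SYSOUT',CLASS=A,MSGCLASS=X
//* Copy one non-blank seed record and repeat it a few million times
//* to a held SYSOUT class (X here is only a placeholder) so it stays
//* on spool for SDSF to read
//GEN      EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTOUT  DD SYSOUT=X
//SYSIN    DD *
  OPTION COPY
  OUTFIL REPEAT=5000000
//SORTIN   DD *
AAAA TEST RECORD 0123456789 NON-BLANK FILLER ABCDEFGHIJKLMNOPQRSTUVWXYZ
/*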

Create a job to run SDSF in batch, select your test SYSOUT, and capture it 
to a DASD dataset (or even to DD DUMMY) using the SDSF PRINT command, 
but set it up so it doesn't capture the whole test dataset - use something 
like "PRINT 1 500000".

Run your SDSF batch job a number of times, varying the number of records 
printed by changing the second number of the PRINT range. Try it for, say, 
500000, 1000000, 1500000, etc., and see how the resource usage, particularly 
CPU time, increases as it processes more records.
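
For the later runs only the PRINT range in ISFIN changes; everything else 
stays the same (if you capture to a real dataset rather than DD DUMMY, 
delete it between runs or the NEW allocation will fail). For example, the 
ISFIN for the second run might be:

PREFIX TESTDATA
OWNER *
ST
FIND TESTDATA
++S
PRINT FILE PRTDD
PRINT 1 1000000
PRINT CLOSE

Compare the step CPU time from the step-end messages in each run's job log; 
if SDSF is behaving, doubling the range should roughly double the CPU.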

If you find that CPU time increases disproportionately with the number of 
records processed, you probably have the same problem I found, where it 
became unusable for very large datasets. If it is still like that, you could try 
opening a PMR. If they tell you that you are the only one ever to have this 
problem, suggest that they search the PMR archive for about 1998, looking for 
the words ISFDSRC and PARROT.

