I can't believe how stupid my last comment (and test) was. I looked at the wrong line of his output and completely missed that the next DS in line was 25 MILLION lines.
I don't have any 25M-line jobs to test with, but I went to our test runs of SyzSpool and looked at the stats for various sizes of output. I don't see how any of them would multiply out to the numbers that he is seeing. The 120,000 EXCPs is possible at about 400 recs/block in the datasets, but that seems odd (or else the records are pretty big). Maybe you're writing them out to a dataset that is blocked badly, or the JES dataset is really screwed up.

If you select your output in SDSF and then MAX DOWN to the bottom, how long does that take? If that isn't roughly half of the times you provided initially, then the problem isn't with the input dataset (or SDSF) itself, but with the output DS you are writing it into. Granted, if it's the input DS that is the problem, this test won't tell us much; it really only isolates the output dataset as the issue.

It would be interesting to know the record size of the input and the DCB of the output.

Sorry for the stupidity of the last post.

Brian
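P.S. Just to illustrate what I mean by blocking (dataset name, space, and sizes here are purely made up), a print-type output dataset blocked near half-track on a 3390 would look something like:

//OUTDD    DD  DSN=YOUR.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(CYL,(500,50),RLSE),
//             DCB=(RECFM=FBA,LRECL=133,BLKSIZE=27930)

BLKSIZE=27930 is 210 records per block at LRECL=133; coding BLKSIZE=0 and letting the system determine the block size gets you much the same thing. If the real output is effectively unblocked (BLKSIZE close to LRECL), the EXCP count goes up by roughly a couple of orders of magnitude, which is the kind of blow-up being described.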