Just to clarify: you will not run out of space on an active job. You can, however, run into an S722 abend, which is "lines exceeded" in JES. It means the JES2 configuration has a maximum limit for lines written to SPOOL, and the task exceeded that limit. This can happen with a job running for weeks at a time, even with SPIN being used.
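For reference, the limits behind an S722 usually come from the JES2 output-estimate settings and/or a per-DD cap. A hedged sketch below (the ESTLNCT statement and the OUTLIM DD parameter are real; the numeric values are made-up examples, so check your own JES2 init deck and installation exits before relying on them):

```
/* JES2 initialization statement (e.g., in the JES2 init deck):     */
/* ESTLNCT sets the estimated output line count (NUM, in thousands) */
/* and how often JES2 re-checks it (INT). OPT=1 cancels a job that  */
/* exceeds the estimate, which surfaces as the S722 abend.          */
ESTLNCT NUM=5000,INT=100000,OPT=1

//* Per-DD cap in JCL: OUTLIM limits the logical records written to
//* this SYSOUT data set; exceeding it normally ends the job S722
//* unless an installation exit overrides that.
//SYSPRINT DD SYSOUT=*,OUTLIM=1000000
```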
Space is related to DASD datasets. Spool is handled very differently from data going to a dataset.

You indicated you wanted the output to spin after so many lines are written. This can be done with the SPIN parameter.

Note: If this is only for a //xxx DD SYSOUT= statement, then the following type of DD statement will work:

  //DD5 DD SYSOUT=A,SPIN=(UNALLOC,5K)

In this example, the system splits the data set into 5000-record segments and makes the SYSOUT data set available for printing every 5000 records. Whatever remains in the data set at the end of the step is available for printing at the end of the step.

If you want the entire JOB to spin, start the STC up with the SPIN parameter (depending on your level of z/OS and JES2):

  SPIN= {NO                  }
        {UNALLOC             }
        {(UNALLOC,'hh:mm')   }
        {(UNALLOC,'+hh:mm')  }
        {(UNALLOC,nnn[K|M])  }
        {(UNALLOC,NOCMND)    }
        {(UNALLOC,CMNDONLY)  }

(UNALLOC,nnn[K|M]): JES2 only. Indicates that the data set is to be spun when it has the specified number of lines, where nnn is lines. A minimum of 500 lines must be specified. Specify the optional character K for thousands of lines or M for millions of lines.

You will still need another process (JES2DISK, for example) to move the data from JES2 spool to a dataset for later viewing.

Hope this helps

Lizette

> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of venkat kulkarni
> Sent: Thursday, May 04, 2017 5:58 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: job output into dataset
>
> Hello,
>
> Thanks for the reply. As you mentioned, you have a program which can be used
> to extract data from various jobs. There are a couple of points I would like
> to make:
>
> 1) Our requirement is to avoid space issues by cutting the records from
> continuously running address spaces and putting them in a dataset.
> 2) This process should run once a day, and for whatever address space we
> specify in this process, it should cut records from that address space and
> keep appending them to the datasets we specify.
>
> 3) For every address space, we will have a separate dataset to be used or
> reviewed later.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN