Scott,

Before I answer your questions:

Why use dataspaces at all? You need some caution when using them (commas!) and they are limited in size (although the limit is fairly high: 2 GB). New code is better off using large memory objects (IARV64).

>> And the job ends, does this mean the dataspace created is deleted?

When the job that created the dataspace ends (regardless of how privileged it was), the dataspace is deleted. There are tricks to avoid that, but having a started task create it is much better.

>> Then a STC would read the dataspace to retrieve the stored data.

When the dataspace is created with the right SCOPE it can be reached by others. To access it, they need the STOKEN for an ALESERV ADD and can then use the returned ALET.
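To make the SCOPE / STOKEN / ALET part concrete, here is a minimal HLASM sketch (mine, not from your code); all names and sizes (DSPNAME, DSPSTOK, DSPALET, DSPORG, DSPBLKS) are assumptions, and authorization (SCOPE=ALL wants supervisor state or a system key) and error checking are left out:

* Owner side (e.g. the started task):
         DSPSERV CREATE,NAME=DSPNAME,STOKEN=DSPSTOK,                  X
               BLOCKS=DSPBLKS,ORIGIN=DSPORG,SCOPE=ALL
*        ... publish DSPSTOK (and DSPORG) so others can find them ...
*
* Accessor side (another address space, after picking up the STOKEN):
         ALESERV ADD,STOKEN=DSPSTOK,ALET=DSPALET
         LAM   0,15,ARZEROS          clear all access registers
         LAM   9,9,DSPALET           AR9 = ALET of the dataspace
         L     9,DSPORG              R9 = origin (first byte) of the d-space
         SAC   512                   switch to access-register mode
         MVC   0(8,9),MYDATA         store 8 bytes into the dataspace
         SAC   0                     back to primary mode
*
DSPNAME  DC    CL8'MYDSPACE'         dataspace name (my choice)
DSPSTOK  DS    CL8                   STOKEN returned by DSPSERV
DSPALET  DS    F                     ALET returned by ALESERV
DSPORG   DS    F                     dataspace origin from DSPSERV
DSPBLKS  DC    F'256'                size in 4K blocks (1 MB here)
ARZEROS  DC    16F'0'                zero ALETs for the other ARs
MYDATA   DC    CL8'PAYLOAD'

Publishing the STOKEN could be done with name/token services (IEANTCR on the owner side, IEANTRT on the accessor side) or any other agreed-upon place; the wait/signal part of the started task is ordinary ECB and POST work, so I left it out.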
>> What I want is the dataspace created and the job to end ....

... and that is the point at which its life ends.

>> ...and another process populate the dataspace with data.

The same applies as for the reader of the dataspace. Something that will work is to have a started task do the create. After creating the dataspace it publishes the STOKEN. Then it goes to sleep and waits for a signal to either
- do some work ("... retrieve the stored data") or
- end its life.
Others can use the published STOKEN to place work in the dataspace ("... another process populate the dataspace with data") and then signal the waiting started task.

>> Presently we use another mechanism, basically a subpool to store data, but I want to get away from that design and feel the dataspace is better suited for the data and the amount of data.

You get pretty well isolated (and extendable) storage with dataspaces as well as with IARV64. With either one, running over the defined boundaries will get you some kind of program exception (0002, 0004, 0010, 0011).

--
Martin

Pi_cap_CPU - all you ever need around MWLC/SCRT/CMT in z/VSE
more at http://www.picapcpu.de