I might add that this is not how I would do this. But I'm known to be
weird. For small amounts of data, I tend to set up the UNIX syslog daemon
with an appropriate "facility" and "priority" and send the data to it to
take care of. But I already have this set up on my z/OS work system, so it
is NBD for me to add something to it. I did this mainly to track the z/OS
FTP server, which logs to the UNIX syslog daemon. It's easier to look at
than the SMF records.
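As a sketch of what I mean (Python here just as a stand-in for whatever
language the task is written in; the facility is whatever your syslogd
configuration routes to a file):

```python
# Hand a small record to the local syslog daemon and let *it* worry about
# files, rotation, and hardening. LOG_LOCAL0 is an illustrative facility;
# pick one that your syslogd config maps where you want the data to land.
import syslog

syslog.openlog("mytask", syslog.LOG_PID, syslog.LOG_LOCAL0)
syslog.syslog(syslog.LOG_INFO, "sample-record bytes=24")
syslog.closelog()
print("record handed to syslogd")
```

The nice part is that the writing task never owns a data set at all, so
there is nothing for a reporting program to contend with.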

On Sat, Jun 18, 2022 at 12:35 PM John McKown <john.archie.mck...@gmail.com>
wrote:

> On Sat, Jun 18, 2022 at 11:52 AM Charles Mills <charl...@mcn.org> wrote:
>
>> Hope some of you can help out a dinosaur. <g>
>>
>
> One dino to another, I'll try.
>
>
>
>>
>> I am designing a z/OS application (for in-house use, not an ISV product).
>> It
>> will consist of a started task that runs continuously plus one or more
>> small
>> reporting programs, one of them to be run daily shortly after midnight.
>> The
>> started task will record a very small amount of data (about 24 bytes or
>> so)
>> every fifteen minutes, 96 times a day. That data will be input to the
>> reporting program(s).
>>
>> Being a dinosaur, I thought in terms of recording that information in a
>> traditional MVS dataset. There are some problems with that: basically if I
>> do allocate, open, write, close, de-allocate then I think I am going to
>> have
>> 24-byte blocks on disk*, which is going to give me really poor track
>> utilization; and if I don't, then the report program is not going to be
>> able
>> to read the dataset due to ENQ contention (absent some sort of special
>> "close it for a little while around midnight" logic) and also any abnormal
>> termination of the started task (such as an IPL) will cause a loss of data
>> -- not a critical issue, but less than desirable.
>>
>> *Yeah, emulated blocks on emulated tracks. There are of course no real
>> tracks anymore. But the emulation is pretty realistic, right down to the
>> poor track utilization!
>>
>> I thought about various approaches such as accumulating the records in
>> memory and then writing all 96 of them in a single blast right after
>> midnight. That would probably work out and solve the ENQ problem (but not
>> the IPL problem). It would unfortunately preclude any ad hoc reporting in
>> the middle of the day. And I would still end up with fairly small 2K
>> blocks.
>>
>> This morning I thought "why not a UNIX file?" I can of course look up any
>> specific questions in the relevant manuals, but I am just unfamiliar with
>> the big picture. Am I correct that (1) there is of course no "physical
>> block
>> size/track utilization" issue with UNIX files; (2) that shortly after I
>> write a record it will be fixed in place and would survive an IPL or other
>> abnormal termination of the writing task; and (3) most importantly, the
>> report program can read the file while the writing task has it open? Are
>> those premises correct? (By "shortly after" I mean that I could live with
>> a
>> delay of a few minutes or so; this is not a banking application where
>> two-phase commit is critical.)
>>
>
> I sometimes do this. UNIX files are not written directly to disk by an
> application. The application actually uses a subsystem type interface to
> hand the buffer off to the UNIX file system colony address space, much like
> DB2 or JES. To "harden" the data to disk, look at the UNIX fsync
> function; for the REXX version, see
> https://www.ibm.com/docs/en/zos/2.1.0?topic=descriptions-fsync
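>
> As a rough illustration of that write-then-fsync pattern (Python rather
> than REXX, and the path is made up), fsync is what forces the record out
> of the buffers and onto disk:

```python
# Append one record and harden it with fsync; the path is invented for
# illustration. 0o644 = owner read+write, group read, other read.
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "my-data.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"2022-06-18T12:00 sample-record\n")
os.fsync(fd)   # force the buffered data to disk so it survives a crash/IPL
os.close(fd)
print(open(path).read().strip())
```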
>
>
>
>>
>> I picture writing the started task in Rexx, so I would have to write to a
>> DD
>> name allocated to the UNIX file (either dynamically or with JCL), not with
>> "native" C fopen(), fwrite(), etc. Does that change any of the answers?
>>
>
> You do not need a DD. If you want, you can use the "SYSCALL" commands to
> do all the UNIX stuff.
> ref: https://www.ibm.com/docs/en/zos/2.1.0?topic=services-syscall-commands
>
> short example:
>
> /* REXX logger */
> CALL SYSCALLS 'ON' /* set up the SYSCALL environment and its predefined
>    variables (O_CREAT, ESC_N, and friends) */
> ADDRESS SYSCALL
> today=DATE('S') /* Date in YYYYMMDD */
> file="/some/directory/my-data."today".log"
> /* Open the file: create it if necessary, add new records to the end,
>    force sync to disk on every write, write only. Mode "644" is owner
>    read+write, group read, other read. */
> "open (file)" O_CREAT+O_APPEND+O_SYNC+O_WRONLY 644
> fd = RETVAL /* "fd" is the file descriptor -- an integer used to do
>    things to the file */
> do forever /* or whatever loop / timer logic you need */
>    /* do whatever and get data into variable "data_to_write" */
>    data_to_write = data_to_write||ESC_N /* put a "new line" at the end of
>       the line for a text file */
>    "write" fd "data_to_write" LENGTH(data_to_write)
>    "fsync" fd /* flush to disk -- the O_SYNC should do this, but I'm
>       paranoid */
> end
> /* close the file when the loop, somehow, ends */
> "close" fd
>
> Note that UNIX files do not have any sort of automatic ENQ on them. If you
> want to, you can easily just do an ISPF BROWSE on it. You can even do an
> ISPF EDIT on it. But if you change and save it, "strange" things might
> happen. Remember that the actual I/O buffer is in a UNIX filesystem
> address space, so applications reading and writing the file are actually
> getting data from that buffer. If you were to save edited data (or run
> some program which updates the file), the data would, in effect, be merged
> or overlain in strange ways in the I/O buffer before it is written to
> disk.
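>
> The no-ENQ behavior is easy to demonstrate (Python as a stand-in, paths
> invented): one process can read the file while another still has it open
> for append, and the reader simply sees whatever has been written so far:

```python
# "Started task" holds the file open for append while a "reporting
# program" opens and reads it -- no serialization, no contention.
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.log")
writer = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(writer, b"record-1\n")
os.fsync(writer)

# Reader opens the file while the writer still has it open.
with open(path) as reader:
    first_read = reader.read()
print(first_read.strip())

os.write(writer, b"record-2\n")   # writer keeps appending afterwards
os.close(writer)
```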
>
>
>
>
>
>
>>
>> Anyone see any gotcha's with the UNIX file approach that I seem not to
>> have
>> thought of?
>>
>> Thanks!
>>
>> Charles
>>
>> ----------------------------------------------------------------------
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>>
>
