On Tue, 10 Jan 2017 17:23:18 -0600, Paul Gilmartin
(0000000433f07816-dmarc-requ...@listserv.ua.edu) wrote about "Re: "task
level" TIOT & XTIOT? A crazy thought?" (in
<1554048328241551.wa.paulgboulderaim....@listserv.ua.edu>):

> On Tue, 10 Jan 2017 20:18:04 +0000, David W Noon wrote:
[snip]
>> So, saying "no code changes" is rather arbitrary.
>>
> New function requires new support.  Changes would be needed at the
> system layer such as ATTACH and OPEN; not at the application layer
> such as Charles's example, FTP.

But we are really discussing COBOL and its current limitation regarding
dynamic DDNAME usage. Other programming languages don't have this
limitation.
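
To make that concrete, here is roughly what "no limitation" looks like
in z/OS C, where fopen() accepts a "DD:" prefix so the DDNAME can be a
plain run-time parameter. This is only a sketch; the ddname handling
and the record loop are placeholders:

    #include <stdio.h>

    /* Sketch only: open whatever DDNAME the caller supplies, using the
       z/OS C library's "DD:" prefix on fopen(). */
    int process_dd(const char *ddname)
    {
        char spec[12];                    /* "DD:" + 8-char ddname + NUL */
        FILE *fp;
        char rec[256];

        snprintf(spec, sizeof spec, "DD:%s", ddname);
        fp = fopen(spec, "r");            /* attributes come from the DD */
        if (fp == NULL)
            return -1;

        while (fgets(rec, sizeof rec, fp) != NULL) {
            /* ... process one record ... */
        }
        fclose(fp);
        return 0;
    }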

> It's dreadfully wasteful for each application program that needs the facility
> to duplicate the code to process the alternate DDNAME list.

What list? I am suggesting that any DDNAME that cannot be specified at
design time be made parametric. That is all that COBOL needs to handle
dynamic DDNAMEs.

Frank Swarbrick's message would indicate that this is in the 2014
ISO/ANSI standard for COBOL but has not yet been implemented by IBM.

>> How do you know what my habits are?
>>
> OK, then, not habits.  You make your biases disturbingly clear.
> I suppose the recent U.S. political campaign established a new
> standard of deportment.

I don't understand this. What has the U.S. political campaign to do with
me? Should I have put a smiley after what I thought was an obviously
rhetorical question?

>> When writing shell scripts, environment variables are inescapable as
>> every shell variable is in the shell's environment. If all you write are
>> shell scripts then everything looks like an environment variable.
>>
> There's a distinction at least in terminology.  "Exported" variables are
> called "environment variables"; others are simply called "shell variables".
> The former are available in programs forked by the shell; the latter
> are not.

But all are visible within the script.

Visibility is not the issue here. The issue is that any given
environment variable has only one instance within an address space, so
a COBOL (or similar, traditional) program cannot use it to carry a
per-task value.
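
A POSIX-flavoured sketch of that problem (the variable name and ddname
values are invented): two tasks that try to pass their own DDNAME
through the same environment variable simply overwrite one another,
because the subroutine has only one copy of the variable to look at:

    #include <stdio.h>
    #include <stdlib.h>

    /* The shared subroutine: it can only see whatever the variable
       holds right now, and there is one copy for the whole space. */
    static void shared_sub(void)
    {
        const char *dd = getenv("WORKDD");
        printf("subroutine sees WORKDD=%s\n", dd ? dd : "(unset)");
    }

    int main(void)
    {
        setenv("WORKDD", "TASKADD", 1);   /* "task A" parks its ddname */
        setenv("WORKDD", "TASKBDD", 1);   /* "task B" clobbers it      */
        shared_sub();            /* sees TASKBDD; TASKADD is lost      */
        return 0;
    }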

>> The PARM field is more akin to the argv[] strings fed into a C/C++ or
>> Java program. At least COBOL allows for this to be handled in the
>> LINKAGE SECTION of the DATA DIVISION. However, COBOL has not
>> traditionally handled environment variables and has not really needed them.
>>
> You don't "really need" anything other than a Universal Turing Machine.
> Higher level facilities make our jobs easier.

Well, that's COBOL out the window. ... :-)

>> ... Worse still, the environment
>> variables are shared by all tasks in the address space, so we are back
>> with the problem of dynamically providing a name to identify an
>> allocated dataset. The upshot is that an environment variable of a given
>> name (i.e. hard coded in the COBOL source) will be the same for all
>> tasks in the address space, just like the DDNAME in the TIOT/SIOT.
>>
> No.  spawn() with _BPX_SHAREAS=YES (an environment variable) can
> support multiple processes, each with its own environment variables
> and its own TCB in a single address space.  This is z/OS-peculiar;
> not customary over all UNIXes.
> 
> Look at the specification of execve():
>     int execve(const char *path, char *const argv[], char *const envp[]);
> Simply, argv[] is a list of positional arguments; envp[] is a similar list of
> keyword arguments.  The caller has complete control of both.

And how many COBOL programmers do you expect to use a variant of the
POSIX standard API? [And how do we get them to use this with "no code
changes"?]

> You misunderstand the operation of _BPX_SHAREAS.

Actually, I don't.

It's just that we are focussed on COBOL here, and the POSIX API is not
readily exploited in that language. All the languages that can readily
exploit the POSIX API already have dynamic DDNAME handling.

>> As coded, each calling task can choose a DDNAME, possibly returned by
>> SVC99, to have the same subroutine process different datasets for
>> different tasks -- possibly concurrently.
>>
> And where would you get the arguments to pass to SVC99?

The calling program would supply the sample subroutine with a DDNAME
after having done whatever SVC 99 processing might be necessary. The
calling program might use an existing DD, or it could read in a DSN and
use SVC 99, or any other allocation strategy one could dream up.
Different callers could use different allocation strategies to allocate
different datasets with different DDNAMEs, all to be processed by the
same subroutine.
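
In C terms it might look something like the sketch below. The dataset
name, ddname and routine names are invented, and I am assuming the
z/OS C runtime's dynit.h/dynalloc() wrapper around SVC 99:

    #include <dynit.h>      /* z/OS C runtime interface to SVC 99 */

    int process_dd(const char *ddname);   /* the shared subroutine */

    /* One possible caller: receive a DSN at run time, allocate it
       DISP=SHR under a ddname of this caller's choosing, then let the
       common subroutine work against that ddname. */
    int call_with_dynamic_dd(char *dsname)
    {
        __dyn_t req;

        dyninit(&req);                /* clear the request block    */
        req.__dsname = dsname;        /* dataset chosen at run time */
        req.__ddname = "WORKDD01";    /* ddname this caller picked  */
        req.__status = __DISP_SHR;    /* existing dataset, DISP=SHR */

        if (dynalloc(&req) != 0)      /* SVC 99 under the covers    */
            return -1;

        return process_dd("WORKDD01");
    }

Another caller could skip the allocation entirely and simply pass the
name of a DD that is already in its JCL; the subroutine neither knows
nor cares.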

> And how
> would you preserve one of the aboriginal design objectives of OS/360:
> the ability to ENQ all needed resources before job initiation to avoid 
> deadlocks.

That went out the window with the arrival of MVS/370, which introduced
SVC 99.

>> I hope you now see why environment variables are not really a workable
>> solution for multi-tasking address spaces.
>>
> Single address space jobs are an anachronism.  They should have vanished
> with the advent of Virtual Memory and DAT, but the legacy of OS/360
> through MVT perpetuated them.

Well, MVT was a single address space system. All regions were in
separate extents of the same physical address space.

Single address space jobs are still the bread and butter of commercial
data processing. You might not like them, but non-shell jobs still run
that way.
-- 
Regards,

Dave  [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
david.w.n...@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

 
