>         1)  Define a protocol.
>             (like a TCP conversation even if it is local)
>
>         2)  Build a reference API on that protocol
>             but don't paint yourself into a corner.
>             That is,  if your reference API is in and for C,
>             don't do anything that prevents another API for REXX.
>
>         3)  Build a sample application on that API.
>             It might only be used for testing.   That's okay.
>
>         4)  Build production applications as command-line tools.
>             Allow for scripting,  with or without an embedded interp.
>
>         5)  Build GUI and/or web on the command-line suite.

All good points. What I'm trying to do is avoid creating ANY new APIs --
anything that can do file I/O should be able to use this thing. I'm also
somewhat concerned about cluttering up the namespace -- making 200
pseudofiles seems untidy if you aren't going to use them all.

> > a) does this sound like a usable paradigm?
> Usable, yes.
> But it's a little foreign  (the "clone" semantics).
> Simple queueing of sessions could be done a bit more elegantly.
> (I'm thinking sockets,  I suppose.)

IMHO, the sockets API is somewhat of an ugly hack. 8-)

> > b) is it worth implementing?

> Instead of two pseudofiles (ctl and data),  I would suggest
> using a combination of read()/write() and ioctl().
> Think of it as a stream.   Provide access to the "out of band" data.
> (More below;  discussion of multi-stream internet protocols.)

What if the out-of-band stuff were presented as additional pseudofiles in
the connection directory? Borrowing again from the Plan 9 paper, their TCP
implementation adds a couple more pseudofiles for the out-of-band stuff like
connection endpoint details and connection status.  They also add a "listen"
pseudofile, which blocks on open until there is data available (e.g. a network
server writes an address and port number to the ctl file for a connection,
then does an open on the 'listen' file and waits until something comes in).
Example for an IUCV connection:

open /cp/iucv/clone, which returns a file pointer to an available connection
dir ctl file and sets up a connection buffer. A read from this file pointer
returns the connection id (say '2'). /cp/iucv/2 contains the ctl and data
entries, and a standard set of pseudofiles related to IUCV connections (say,
remotenode, remoteuser, status).

Application then writes destination node/userid (consider CSE/ISFC/TSAF!) to
/cp/iucv/2/ctl, and then opens /cp/iucv/2/data, which actually makes the
connection.  A read of /cp/iucv/2/remotenode returns the remote node id, a
read of /cp/iucv/2/remoteuser... you get the picture. Data gets read and written to
/cp/iucv/2/data.

When done, close /cp/iucv/2/ctl, and the connection gets shut down and the
resources go away.
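In rough C, that client sequence might look like the sketch below. Everything
under /cp/iucv is hypothetical (none of it exists yet), and the node/userid
string format is made up for illustration; this is just the proposed open/
read/write semantics expressed with ordinary POSIX file I/O:

```c
/* Sketch of the proposed IUCV connection sequence.  The /cp/iucv tree
 * and its clone/ctl/data pseudofiles are hypothetical.  Returns the
 * data fd on success, -1 on error.  *ctlfd receives the ctl fd, which
 * the caller holds open for the life of the connection -- closing it
 * is what tears the connection down.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int iucv_connect(const char *base, const char *target, int *ctlfd)
{
    char connid[16], path[128];
    int cfd, ctl, data;
    ssize_t n;

    /* 1. open the clone file; the driver allocates a connection dir */
    snprintf(path, sizeof path, "%s/clone", base);
    cfd = open(path, O_RDONLY);
    if (cfd < 0)
        return -1;

    /* 2. read back the connection id, e.g. "2" */
    n = read(cfd, connid, sizeof connid - 1);
    close(cfd);
    if (n <= 0)
        return -1;
    connid[n] = '\0';

    /* 3. write the destination node/userid to the ctl file */
    snprintf(path, sizeof path, "%s/%s/ctl", base, connid);
    ctl = open(path, O_WRONLY);
    if (ctl < 0)
        return -1;
    if (write(ctl, target, strlen(target)) < 0) {
        close(ctl);
        return -1;
    }

    /* 4. opening the data file is what actually makes the connection */
    snprintf(path, sizeof path, "%s/%s/data", base, connid);
    data = open(path, O_RDWR);
    if (data < 0) {
        close(ctl);            /* ...and closing ctl shuts it down */
        return -1;
    }
    *ctlfd = ctl;
    return data;
}
```

Nothing above needs a new API: it's just open/read/write/close, which is
the whole point.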

It feels kinda like CPI-C, but it's not conceptually too weird IMHO. Most
people understand files; there's just an extra step to connect a function to
a "connection pointer".
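The server side, using the 'listen' pseudofile borrowed from the Plan 9
paper, would look much the same. Again, the paths and the announce-string
format are hypothetical:

```c
/* Sketch of the server side of the proposed model.  The connection
 * directory, its ctl file, and the blocking "listen" pseudofile are
 * all hypothetical.  Returns the listen fd (open blocks until an
 * incoming connection arrives), or -1 on error; *ctlfd receives the
 * ctl fd, which must stay open as long as the connection should live.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int iucv_listen(const char *base, const char *connid,
                const char *announce, int *ctlfd)
{
    char path[128];
    int ctl, lfd;

    /* tell the driver what we are listening for */
    snprintf(path, sizeof path, "%s/%s/ctl", base, connid);
    ctl = open(path, O_WRONLY);
    if (ctl < 0)
        return -1;
    if (write(ctl, announce, strlen(announce)) < 0) {
        close(ctl);
        return -1;
    }

    /* open blocks here until something comes in */
    snprintf(path, sizeof path, "%s/%s/listen", base, connid);
    lfd = open(path, O_RDONLY);
    if (lfd < 0) {
        close(ctl);
        return -1;
    }
    *ctlfd = ctl;
    return lfd;
}
```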

> > c) what am I missing?
> You're somewhat Linux-centric here.   Don't be.
> In the case of IUCV,  consider something compatible with CMS Sockets.
> I mean,  Linux should have an AF_IUCV like CMS does.
> (Wasn't someone working on that?)   ;-)

Yeah, but *that* approach gets you back into having to do special
programming to have applications aware of the network. I guess I'm trying to
find a good way to map IUCV and DIAGs into the Linux context w/o having a
bunch of special purpose commands.

> think of interoperability first.   Make that your goal.
> Don't paint yourself into a Linux corner,  nor a z/VM corner.
> IUCV is rather unique to z/VM.
> DIAGnose is coded for z/VM and zSeries,
> but most of what it is used for can be generalized.
> This is where the FreeVM-L discussion would have helped.

That's why we're having this discussion...8-)

> "8,CP QUERY USERID" is just waaay too platform specific.
> Looks like a REXX coder trying to force a familiar model onto Linux.

Hey, who am I to counter all that good research Cowlishaw did on REXX
usability? I mean, after all this time, is there still any good excuse for
Perl syntax? 8-)

> Would be better to have a constant QUERY_USERID supplied to some call.
> The z/VM console command processor is just one protocol to use.
> DIAG 8 is simply its handle.   Nothing magic about DIAG (8 or other).
> Nothing sacred about the CP command suite.   (Apologies to Endicott.)

Although any function like this one will necessarily *be* platform
specific -- what gets passed to the underlying layers isn't necessarily
going to be the same as on the hosted system. That's one reason why I want
to pass some kind of function code as part of the data rather than have
specialized pseudofiles for each DIAG.
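A minimal sketch of that idea. The "<code> <parms>" record format is
something I'm making up here for illustration -- the driver would peel off
the leading function code and route on it:

```c
/* Sketch: carry the function code (e.g. the DIAG number) in the data
 * stream instead of giving each DIAG its own pseudofile.  The record
 * format is hypothetical.  Returns 0 on success, -1 on error.
 */
#include <string.h>
#include <unistd.h>

int diag_request(int datafd, const char *request)
{
    /* e.g. request = "8 CP QUERY USERID": function code first,
     * parameters after; the driver routes on the code */
    size_t len = strlen(request);
    return write(datafd, request, len) == (ssize_t)len ? 0 : -1;
}
```

One pseudofile, many functions -- the namespace stays uncluttered.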

> It's nice that there is an
>         hcp commands
> command to return the list of available commands and DIAGs.
> Not unlike some of the newer internet protocols,  what can you do?
> Let the model be capability based.

Another reason for the function/parm data stream idea above.

> You have split "TERMINATE" and other controls into one vein and data
> into another.   Much easier to have the module look for a
> "DATA" command along with other controls.   RSCS and IMAP
> both have examples of this,  from what little I know of them.

Bad explanation/thinking on my part, I think. Further thought indicates that
if the semantics are changed so that the application holds
/cp/diag/<connid>/ctl open for the duration of the connection and explicitly
closes it when the connection should drop, then the "TERMINATE" thing can go
away and the whole mess can get cleaned up as part of the close()
processing. Or maybe unlink() would terminate the connection? close() seems
cleaner.
