> No, I have looked, and CMS Pipelines are nice indeed. But then so are pipes
> under UNIX; indeed, pipes are the very core of UNIX. If you are not annoyed
> by discussing it, I would love to hear your opinions on what is so primitive
> about UNIX. :)

As I said: leaky garden hose. The analogy holds just as long as you
use terms like "nice".
Well, I was being polite, since this is pretty obviously a sore subject with 
you for some reason.


* CMS Pipelines has multi stream pipelines which means that you can
divert part of the input stream and have that go through a different
segment of the pipeline and further down the pipe the streams can join
again when desired. The closest you get in UNIX is something like the
"tee" program.
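For reference, the tee detour looks something like this as a shell sketch: the diverted copy has to land in a file (or fifo), and rejoining it in order is left entirely to you.

```shell
#!/bin/sh
# Divert a full copy of the stream with tee, filter the main branch,
# then crudely "rejoin" by reading the saved copy afterwards.
printf 'alpha\nbeta\ngamma\n' | tee copy.txt | grep '^b'
sed 's/^/diverted: /' copy.txt
rm -f copy.txt
```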
There is a reason for this in UNIX - most system utilities are built to do one 
or two things as well as possible, and that rather simple mindset leads to a 
single-in / single-out design prejudice. It does not mean the capability does 
not exist.
For example, I have multiple input streams sending data to a named pipe, 
which has a director application reading from it, which sends things out to 
dynamically created streams of processing. For example, Job#1 may come down the 
pipe and need to be processed in Chinese, while Job#2 coming down the pipe may 
need to be printed in some other state, and Job#3 is a credit card transaction. I 
did write the director application in C, but it could have been written just as 
well in Perl or Rexx or Pascal or Fortran for that matter.
Granted, this is not a super high volume transactional system (it processed 
between 100 and 200 jobs per minute), but if I needed that, I would use CICS.
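A stripped-down version of that director pattern, as a shell sketch (the job tags and handler actions here are invented for illustration, not the actual application):

```shell
#!/bin/sh
# Toy director: producers write tagged jobs into one named pipe and a
# single reader dispatches each job on its tag.
mkfifo jobs.fifo
# two "input streams" feeding the same fifo
{ printf 'PRINT report for Job#2\n'; printf 'CARD Job#3 txn\n'; } > jobs.fifo &
while IFS=' ' read -r tag body; do
  case "$tag" in
    PRINT) echo "send to printer: $body" ;;
    CARD)  echo "post card txn: $body" ;;
    *)     echo "no handler for: $tag $body" ;;
  esac
done < jobs.fifo
wait
rm -f jobs.fifo
```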

* The stages in CMS Pipelines are not limited by a single input and
output (and stderr), but can have many streams which allows for
building complex refineries without the need for endless copies of the
data.

* The way records are moving through the pipeline and the way stages
interact means that you can reason about where records are and
guarantee the order in which data is produced and consumed in parallel
pipeline segments.

This is more program design to me than a natural or intrinsic function of 
pipes, but that's not a fact, that is my opinion. :)

* Dynamic changes to the topology of the pipeline where a stage can
replace itself by a newly composed segment either permanently or
temporary (a sipping pipeline). Combined with the strict order in
which data is consumed, you control what part of the data flows
through the modified pipeline.

Again, this is quite easily accomplished under Unix - though I admit the best 
solution tends to start a new process or thread, which is somewhat different. 
Then again, I think that process creation is more expensive under VM than under 
Linux. Opinion again though, I might be wrong.
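One rough UNIX analogue of a stage replacing itself is exec: a stage can consume part of the stream, then swap itself for a different filter in the same pipeline slot. A toy sketch (nothing like CMS Pipelines' full dynamic topology, just the flavor):

```shell
#!/bin/sh
# A "stage" that handles the first record itself, then replaces itself
# (via exec) with tr for everything that follows on the same stdin.
stage() {
  IFS= read -r first && echo "header: $first"
  exec tr 'a-z' 'A-Z'
}
printf 'one\ntwo\nthree\n' | stage
# prints: header: one / TWO / THREE
```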

I do believe I am one of those many VM people who embraced Linux and
the concepts are not alien to me (I avoid the term "transition"
because that would suggest going from one to the other).

Recently I wrote a simple Perl program - ptime - to take lines from
stdin and write them out prefixed with the local time. To my surprise
the following did not work to tag vmstat output with the time as I
intended: vmstat 10 | ptime
Turns out that something is doing an undetermined amount of buffering
(and yes, I learned that I can set the "$|" variable to change
that). And there are many more cases where the tools violate the
Principle of Least Astonishment. Things like njpipes and OS/2 pipes
fell short of that and turned out to be far less useful.
That kind of surprises me, though in this case I would most likely have written 
a short C program to do it and used fflush(). It seems silly that Perl did not 
automatically account for the buffering.
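For what it's worth, Perl's $| is exactly that per-filehandle autoflush. A ptime-like filter with it enabled might look like this (my reconstruction for illustration, not Rob's actual program):

```shell
#!/bin/sh
# Prefix each stdin line with the local time, flushing every line
# ($| = 1) so output is not held back by stdio block buffering.
printf 'r  b  swpd\n 1  0     0\n' |
  perl -pe 'BEGIN { $| = 1 } my $t = localtime; s/^/$t /'
# in real use: vmstat 10 | ptime
```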
There are other things that can drive you crazy too - like ever trying to get a 
reasonable return code. Try sending back a -4 as the exit code from a program 
sometime. Annoying!
There are certainly lots of rough edges in UNIX/Linux, but there are more than 
a few there in CMS too, most especially if you do not use it on a very regular 
basis. Sometimes, the problems in Linux are enough to make me scream and really 
REALLY miss JCL.
-Paul


I understand I have the option to write a C program from scratch to do
what I want, or maybe copy an old one from when I wanted almost the
same. We've done so with Rexx for quite some time. However, I find it
way more productive to compose a pipeline out of many built-in stages
and maybe a few reusable ones from myself in Rexx.

Rob

