On Mon, Jan 21, 2002 at 05:51:21PM +0300, Alexander V. Lukyanov wrote:
> >     Remove NewEcho (replaced by EchoJob.)
> 
> Is EchoJob any better?

It's an example of basic OutputJob usage; it gets the facilities of
OutputJob.  (It can send its output to an FA, for example.)

> > I've explained the usage of OutputJob in comments, so I won't repeat
> > everything here.  A few things could be simplified with external help (a
> > way to tell FDStream and children to close the fd passed on the constructor,
> > CopyJob sending errors immediately rather than on the next Do.)
> 
> I'll look some more at the code, but I think it is over-complicated by trying
> to do zcat's (for_each). Can you leave that for CatJob?

I really wanted to; I added it only grudgingly, after trying to avoid it
for a while.  (That's part of why this took so long: I kept trying
different approaches to avoid it.)

CatJob itself is very simple: connect the input to a FileCopyPeerOutputJob.

I'll think about it some more, but I did bash on this for a while before
doing it this way.  (Maybe the for_each stuff can be moved into its own,
simpler class; perhaps an OutputFilter?)

> I don't like Put returning LATER too. Better make it always succeed, and
> add a function Size or Full which will be checked by writer. The writer
> has to suspend data source when destination is overflowed.

That's fine, I'll do that.  (I'll use Full, so we don't bother higher
classes with buffer sizes.)
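A minimal sketch of the flow control being agreed on here, with made-up
names (OutBuffer, Source, writer_step) standing in for the real lftp
classes: Put() always succeeds, and the writer checks Full() and
suspends the data source when the destination is overflowed.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch only -- not lftp's actual API.
class OutBuffer {
   std::string data;
   size_t max_size;
public:
   explicit OutBuffer(size_t max) : max_size(max) {}
   // Put always succeeds; overflow is signalled separately via Full().
   void Put(const char *buf, size_t len) { data.append(buf, len); }
   bool Full() const { return data.size() >= max_size; }
   size_t Size() const { return data.size(); }
};

class Source {
   bool suspended = false;
public:
   void Suspend() { suspended = true; }
   void Resume()  { suspended = false; }
   bool Suspended() const { return suspended; }
};

// One step of the writer's Do() loop: feed data while there's room,
// and suspend the source once the destination buffer is full.
void writer_step(Source &src, OutBuffer &out, const char *chunk, size_t len)
{
   if (out.Full()) {
      src.Suspend();
      return;
   }
   if (src.Suspended())
      src.Resume();
   out.Put(chunk, len);
}
```

The point is that higher classes never see a LATER return from Put;
backpressure lives entirely in the Full()/Suspend handshake.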

> BTW, does OutputJob have to be a Job? It does not correspond to any user command
> and is not self-contained.

It uses PrintStatus, so it gets a line of output if it's outputting
remotely.  (Otherwise, if the job is done but waiting for a connection,
it'll be harder to tell why the job is sitting there.)

Apart from that, I might be able to fold it down to be an SMTask instead of a
Job.

> Does it need to use FileCopy, which is complex itself? Most FileCopy
> complications come from trying to make positions of read/write correspond,

I think of FileCopy as an intelligent buffering mechanism; and there are
two places that need to buffer data, so it seemed appropriate.  (It also
needs to be able to output to a FileCopyPeerFA.)

(The common case, outputting to stdout, doesn't create the second
FileCopy.)

> and OutputJob cannot seek on data source anyway, so it cannot retry on
> failed store.

If resumes could be done as I mentioned (keep a small back buffer of
data, so simple resumes can be done without help from the peer), then
this would work.  (I think you mentioned this in comments somewhere, too.)
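The back-buffer idea above could look something like this sketch
(BackBuffer is an illustrative name, not an lftp class): keep only the
last few bytes already sent, so a restart slightly before the current
position can be replayed locally, while an older restart still needs
help from the peer.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch only -- not lftp's actual API.
class BackBuffer {
   std::string kept;   // tail of the stream already sent
   size_t pos = 0;     // absolute stream position at the end of 'kept'
   size_t max_keep;
public:
   explicit BackBuffer(size_t keep) : max_keep(keep) {}

   // Record bytes that have been sent, keeping at most max_keep of them.
   void Sent(const char *buf, size_t len) {
      kept.append(buf, len);
      pos += len;
      if (kept.size() > max_keep)
         kept.erase(0, kept.size() - max_keep);
   }

   // Try to resume from absolute position 'from'.  On success, 'replay'
   // gets the bytes that must be re-sent.  Fails if 'from' is older
   // than the tail we kept -- that case needs the peer to seek.
   bool Resume(size_t from, std::string &replay) const {
      if (from > pos || pos - from > kept.size())
         return false;
      replay = kept.substr(kept.size() - (pos - from));
      return true;
   }
};
```

With something like this, a failed store that restarts a little behind
the current position wouldn't need the data source to be seekable at all.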

> In short, I think OutputJob can be turned into IOBuffer.

If all of the FileCopys could be done away with, maybe.  (I'm not
sure.)  I suppose it could be forced to work as an IOBuffer even with
FileCopys, but then the only change would be that the first buffer data
goes into, which is normally InputPeer, would be the object itself, so
data would have to be moved.  (Basically, PutLL would just dump data to
InputPeer.)

One of the FileCopys could be done away with if *both* for_each and
global filters were dropped; but if either one is present, we need to
buffer both into the filter (from the source) and from the filter to the
output (if it's a FileCopyPeerFA).

Global filtering is very useful to have; without it, we couldn't do things
like "command | pipe > ftp://url".  (Adding URL redirection but saying
"... but you can't combine it with pipes" wouldn't be very good.)

-- 
Glenn Maynard
