Thinking about this situation over the weekend, I concluded that
the sanest thing to do would be to hack qmail-remote so it checks
the message size and returns a temporary failure for oversize mail
during peak times.  This could be done by reading a size limit from
an external file, with that file rewritten by something run from
cron; or the policy could be written into the patch itself (which
would make the patch too site-specific to share).
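
For the external-file variant, the cron job would just rewrite a
one-line file and qmail-remote would read it on each run.  A minimal
sketch (the path /var/qmail/control/peaksize and the "0 means no
limit" convention are my own inventions, not anything qmail defines):

#include <stdio.h>
#include <stdlib.h>

/* Read the current per-message byte limit; 0 means no rationing. */
static unsigned long read_peak_limit(void)
{
  FILE *f = fopen("/var/qmail/control/peaksize", "r");
  char line[64];
  unsigned long limit = 0;

  if (!f) return 0;                      /* no file, no limit */
  if (fgets(line, sizeof line, f))
    limit = strtoul(line, NULL, 10);
  fclose(f);
  return limit;
}

The cron job then only has to echo a number into that file at one
transition time and a 0 at the other.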

Looking at the qmail-remote.c program, I suppose the patch would
first define an error handler like all the other error handlers:

void temp_policy_size() { out("Z\
Message is too large to send during peak time. (#4.3.0)\n"); zerodie(); }

Then, find a place where we can stat the message text....  No, we
can't, because qmail-remote reads the message from stdin -- although
we could read stdin until we have enough, and issue the error in the
same place djb issues the "out of memory" temporary system error --

no, we appear to read stdin and pass it to the remote host in little
pieces.  The sending is done within the smtp() call -- in which we
learn that a new temp_* handler is a breakable convention, since a
bare quit("Z......") would work too -- or is that only valid within
the SMTP session?

Anyway, the message gets sent within blast(), which reads from &ssin,
doubles line-starting dots, and writes to &smtpto, one character at a
time.
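
For reference, stripped of the substdio plumbing, the dot-doubling
part of blast() amounts to roughly this (plain stdio, only to show
the transparency rule; the real thing also deals with line endings
and the end-of-data terminator):

#include <stdio.h>

/* Copy stdin to stdout, doubling any '.' that starts a line --
   roughly the rule blast() applies before writing to &smtpto. */
int main(void)
{
  int c;
  int at_line_start = 1;

  while ((c = getchar()) != EOF) {
    if (at_line_start && c == '.')
      putchar('.');
    putchar(c);
    at_line_start = (c == '\n');
  }
  return 0;
}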


So.

In order to implement a size-driven policy completely within
qmail-remote, you could allocate a buffer large enough to hold an
entire size-compliant message and load it from stdin before doing
the host and DNS lookups and so forth.  If you hit the end of the
buffer while size-rationing is in effect, you call
temp_policy_size() and that's that.
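
Something along these lines, say -- plain read(2) instead of qmail's
substdio, and the buffer size, names, and wiring are all mine, so
treat it as a sketch of the idea rather than a patch:

#include <unistd.h>
#include <stddef.h>

#define PEAK_LIMIT 102400                /* biggest message allowed at peak time */

static char msgbuf[PEAK_LIMIT + 1];      /* one spare byte: a full buffer means oversize */
static size_t msglen;

void temp_policy_size();                 /* the handler defined above */

static void load_message(int rationing)
{
  ssize_t r;

  while ((r = read(0, msgbuf + msglen, sizeof msgbuf - msglen)) > 0) {
    msglen += r;
    if (msglen == sizeof msgbuf) {       /* message is bigger than PEAK_LIMIT */
      if (rationing) temp_policy_size(); /* peak hours: defer it */
      return;                            /* off-peak: blast() drains the rest of stdin */
    }
  }
  if (r == -1) _exit(111);               /* read failure; a real patch would emit a
                                            proper "Z" response like qmail-remote does */
}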

Later, when the SMTP session is under way, blast() would read and
send the buffer first, and only then read stdin -- if there is
anything left on stdin to read, which can only happen when we're not
in peak hours.
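
Continuing the same sketch, the blast()-shaped loop would then look
something like this, with a made-up send_to_smtp() standing in for
the dot-doubling write into &smtpto:

#include <unistd.h>
#include <stddef.h>

extern char msgbuf[];
extern size_t msglen;

void send_to_smtp(int c);                /* hypothetical: dot-double and write to &smtpto */

static void blast_from_buffer(void)
{
  size_t i;
  char ch;

  for (i = 0; i < msglen; i++)           /* replay whatever load_message() saved */
    send_to_smtp(msgbuf[i]);

  while (read(0, &ch, 1) == 1)           /* anything still on stdin (off-peak only) */
    send_to_smtp(ch);

  /* the real blast() finishes with the end-of-data "." terminator */
}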

Alternately, you could have two qmail-remote programs: the original
one, and one that reads into a limited buffer and then sends from
that buffer within blast() instead of reading &ssin.  A cron job
would switch the link at /var/qmail/bin/qmail-remote between
/var/qmail/bin/qmail-remote_original and
/var/qmail/bin/qmail-remote_size-policy at the transition times,
something like

0 8 * * *  ln -f /var/qmail/bin/qmail-remote_size-policy /var/qmail/bin/qmail-remote
0 20 * * * ln -f /var/qmail/bin/qmail-remote_original    /var/qmail/bin/qmail-remote

Since qmail-rspawn finds the program to run by having execvp()
consult the file system, this will work.  To be extra safe you could
add a wait and a single retry before exiting with an error condition
somewhere near the execvp() in qmail-rspawn.
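
The retry itself would be tiny; something like this around the
child-side execvp() (names here are illustrative, and you would keep
whatever error exit the existing code already uses after a failed
exec):

#include <unistd.h>
#include <errno.h>

/* Child side, just before giving up: if cron is mid-switch the link
   can be briefly missing, so wait a second and try once more. */
static void exec_with_retry(char **args)
{
  execvp(*args, args);
  if (errno == ENOENT) {
    sleep(1);
    execvp(*args, args);
  }
  _exit(111);                  /* still failing: temporary error, try again later */
}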

That way you don't have to mess with new control files at all.



There are some rough edges in there; when I write my qmail-inspired
MTA I think I'll have the qmail-remote analogue take a file name as
an argument and be responsible for setting the delivered flags
itself instead of simply reporting delivery status; this will of
course make my system vulnerable to stack-smashing attacks from
remote SMTP servers, which would then have sufficient privilege to
do some damage, should the OS I will be using be vulnerable to such
things.




Ari Arantes Filho wrote:
 
> David,
> 
> You are totally right:
> 
> > What Ari appears to me to be asking for is a way to derail large e-mails
> > into a secondary queue:  He wants email to flow 24/7 for little memos,
> > but attachments above a threshold must wait until off-peak.
> 
> I'm using the qmail-hold patch, so I can create a control/holdremote (containing
> 1), send a HUP to qmail-send, and the remote queue is paused.  But at that point
> all messages are stopped.  I would like to stop only the big ones.
