On Thu, May 16, 2013 at 10:40:38AM +0200, Patrik Rak wrote:

> On 15.5.2013 20:30, Wietse Venema wrote:
> 
> >Patrik appears to have a source of mail that will never be delivered.
> >He does not want to run a huge number of daemons; that is just
> >wasteful. Knowing that some mail will never clear the queue, he just
> >doesn't want such mail to bog down other deliveries.
> >
> > From that perspective, the natural solution is to reserve some fraction
> >X of resources to the delivery of mail that is likely to be deliverable
> >(as a proxy: mail that is new).
> 
> Very well said. Describes my thoughts exactly.

What Patrik may not yet appreciate is that I was advocating
tackling a *related* problem.  I was not claiming that concurrency
ballooning (let's give my approach that name) prevents starvation
of new mail under all conditions.

Rather, concurrency ballooning can:

    - More quickly dispose of bursts of slow messages that can congest
      the queue when they first arrive as new mail.  A separate pool
      for deferred mail does not address this.

    - More quickly dispose of bursts of slow messages in the deferred
      path when bad mail is mixed with greylisted mail, ...

The downside is that new mail is not "protected" from bursts of
bad mail that fill the balloon.
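The throughput effect of ballooning on a burst of timeout-bound
messages can be sketched with simple arithmetic (a toy model, not
Postfix code; the timeout and burst numbers are purely illustrative):

```python
# Toy model: a burst of messages that each hang until a delivery
# timeout fires.  Wall-clock time to drain the burst is roughly
# ceil(burst / concurrency) * timeout, so raising concurrency on
# an otherwise idle system drains the burst proportionally faster.

import math

def drain_time(burst, concurrency, timeout):
    """Seconds to dispose of `burst` timeout-bound messages
    with `concurrency` parallel delivery agents."""
    return math.ceil(burst / concurrency) * timeout

timeout = 30    # per-message connect timeout, seconds (illustrative)
burst = 1000    # slow messages arriving together

# Fixed concurrency: 20 agents grind through the burst slowly.
fixed = drain_time(burst, 20, timeout)        # -> 1500 s
# Ballooned concurrency: 200 agents, mostly idle in timers.
ballooned = drain_time(burst, 200, timeout)   # -> 150 s

print(fixed, ballooned)
```

The agents spend almost all of their time blocked in timers, which
is why the extra concurrency costs little on an idle system.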

The Postfix "sitting there doing nothing" problem is not new, that's
what got me on the list posting comments and patches in June of 2001.

My view then and now is that when an idle system with plenty of
CPU, RAM and networking resources is just sitting there waiting
for timers to fire, what's wasteful (even if much of the mail is
ultimately discarded) is not using more of the system's resources
to have more timers expiring concurrently.

It is fine if you don't think the related problem is worth addressing,
but at least understand that my perspective is different: I strive
for higher throughput, and then congestion mostly takes care of itself.

I am not completely sold on the 80/20 reservation, since it too
will get blocked with slow mail when a concurrent bunch of slow
mail is new, or when the deferred queue is a mixture of likely
never deliverable, and recently deferred mail.  So the approach is
not perfect either.  Tweaking it to exclude messages that are not
sufficiently old (one maximal backoff time) perhaps addresses most
of my concern about mixed deferred mail, since a sufficiently
delayed message can reasonably tolerate a bit more delay.
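The tweaked reservation can be read as a slot allocator in which
deferred mail competes for the reserved slots only once it is at
least one maximal backoff old.  A sketch of that policy follows;
the names, the 80/20 split, and the pool mechanics are illustrative
assumptions, not Postfix internals (4000s is the documented
maximal_backoff_time default):

```python
# Sketch of a delivery-agent slot split in which only sufficiently
# old deferred mail is confined to the reserved fraction.  All
# identifiers here are illustrative, not Postfix code.

MAX_BACKOFF = 4000    # seconds; maximal_backoff_time default
TOTAL_SLOTS = 100
RESERVED = 20         # slots reserved for "old" deferred mail

def slot_pool(msg_age, is_deferred):
    """Pick which slot pool a message competes in.

    Recently deferred mail (younger than one maximal backoff)
    still competes with new mail for the unreserved slots, since
    it may be merely greylisted and still deliverable; only mail
    deferred for at least one maximal backoff is confined to the
    reserved pool, where extra delay is tolerable.
    """
    if is_deferred and msg_age >= MAX_BACKOFF:
        return "reserved"    # capped at RESERVED slots
    return "general"         # the remaining TOTAL_SLOTS - RESERVED

print(slot_pool(100, True))     # recently deferred -> general
print(slot_pool(5000, True))    # old deferred -> reserved
print(slot_pool(10, False))     # new mail -> general
```

Under this reading, a burst of freshly deferred greylisted mail is
not starved, while long-stuck mail cannot monopolize the agents.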

> So, if you don't mind, I would like to go ahead and try to implement
> this limit, for both the delivery agent slots as well as the active
> queue slots. I think that enough has been said about this to provide
> evidence that adding such a knob doesn't put us in any worse position
> than we are at now, nor does it preclude us from using other
> solutions.

Go ahead.

> The only remaining objection seems to be the amount of back pressure
> postfix applies to incoming mail, depending on the growth of the
> queue. I believe this problem exists regardless of whether this new knob
> is in place or not, so it may as well be good idea to discuss this
> independently if you feel like doing so now...

Back-pressure is about the behaviour of the 80%-of-queue (rather
than 80%-of-agents) ceiling, and its likely impact is to largely
eliminate in_flow_delay (which is already fairly weak).  So the
issue is whether and how to slow down input (smtpd + cleanup) when
the queue is large (as evidenced by a lot of deferred mail in the
active queue).

So your work will have an impact on back-pressure (it will further
reduce it), but perhaps since the existing back-pressure is fairly
weak, we can live with it becoming a bit weaker still for now.
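For reference, the existing weak back-pressure is the in_flow_delay
token mechanism, configured with a single main.cf parameter (the
value shown is the documented default; the comment summarizes the
mechanism as I understand it):

```
# main.cf: the queue manager returns a token to cleanup for each
# message it moves into the active queue; when input outpaces
# output and the tokens run out, smtpd pauses up to this long
# per message before accepting more mail.
in_flow_delay = 1s
```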

The current back-pressure mostly addresses the "stupid performance
test" case rather than the persistent MTA overload case.  So if we
want to address persistent overload (perhaps as a result of output
collapse, as with a broken second network card) we can design that
separately.  It would perhaps be useful to shed load onto healthier
MX hosts in a cluster in which one MX host is struggling.

-- 
        Viktor.
