Viktor Dukhovni:
> Nothing I'm proposing creates less opportunity for delivery of new
> mail, rather I'm proposing dynamic (up to a limit) higher concurrency
> that soaks up a bounded amount of high latency traffic (ideally
> all of it most of the time).
This is no better than having a static process limit at that larger
maximum. Your on-demand additional process slots cannot prevent
slow mail from using up all delivery agents.
To prevent slow mail from using up all delivery agents, one needs to
limit the amount of slow mail in the active queue. Once a message
is in the active queue, the queue manager has no choice: the message
has to be delivered ASAP.
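To make the saturation argument concrete, here is a toy model (not
Postfix code; all names and numbers are made up for illustration): a
fixed pool of delivery agent slots, with slow deliveries holding a
slot for many ticks. Once enough slow mail arrives, every slot is
busy, and a larger pool only postpones that point:

```python
def simulate(pool_size, slow_arrivals, slow_time):
    """Return the first tick at which no free slot remains, else None."""
    busy = []  # release times of occupied slots
    for tick in range(slow_arrivals):
        busy = [t for t in busy if t > tick]  # free finished slots
        busy.append(tick + slow_time)         # one slow message per tick
        if len(busy) == pool_size:
            return tick
    return None

# Doubling the pool only delays saturation; it does not prevent it.
print(simulate(pool_size=100, slow_arrivals=1000, slow_time=300))
print(simulate(pool_size=200, slow_arrivals=1000, slow_time=300))
```

With these numbers the 100-slot pool fills at tick 99 and the
200-slot pool at tick 199: the same failure, merely later.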
How do we limit the amount of slow mail in the active queue? That
requires prediction. We seem to agree that once mail has been
deferred a few times, it is likely to be deferred again. We have one
other predictor: the built-in dead-site list. That's it as far as
I know.
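The deferral-count predictor could be sketched like this. This is a
hypothetical admission policy, not a Postfix API; the threshold and
slot bound are invented for illustration:

```python
DEFERRAL_THRESHOLD = 3   # assumed cutoff: deferred this often => "slow"
SLOW_SLOTS = 50          # bound on slow mail admitted to the active queue

def admit(message_deferrals, slow_in_active):
    """Decide whether a queued message may enter the active queue."""
    if message_deferrals < DEFERRAL_THRESHOLD:
        return True                      # presumed fast: always admit
    return slow_in_active < SLOW_SLOTS   # presumed slow: admit only if room

print(admit(0, 50))   # fresh mail is never blocked
print(admit(5, 50))   # slow slots exhausted: keep it in the deferred queue
print(admit(5, 10))   # room left for slow mail
```

The point of the bound is that slow mail can never claim more than
SLOW_SLOTS delivery agents, so fast mail always has agents left.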
As for after-the-fact detection, it does not help if a process
informs the master dynamically that it is blocked. That is too
late to prevent slow mail from using up all delivery agents,
regardless of whether the process limit is dynamically increased
up to some maximum, or whether it is frozen at that same inflated
maximum.
[detailed analysis]
Thanks. This underscores that longer maximal_backoff_time can be
beneficial, by reducing the number of times that a delayed message
visits the active queue. This reflects a simple heuristic: once
mail has been deferred a few times, it is likely to be deferred
again.
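As a sketch of that tuning, a main.cf fragment (illustrative values,
not recommendations; maximal_backoff_time must not be set below
minimal_backoff_time):

```
# Raise the backoff ceiling so a repeatedly-deferred message
# returns to the active queue less often.
minimal_backoff_time = 300s
maximal_backoff_time = 8000s
```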
Wietse