Hello, I may have missed something, but it was my understanding that nginx continuously sends data to clients, filling up its buffers while the client empties them at the same time (FIFO). Thus, to me, the upload from the backend stopped when the allocated buffer(s) were full, waiting for space to become available in them.
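For reference, here is a minimal sketch of the knobs I am thinking of; these are real nginx proxy directives, but the upstream name and the exact values are illustrative assumptions on my part, not a recommendation:

    location / {
        proxy_pass http://backend;     # "backend" is a placeholder upstream
        proxy_buffering on;            # buffer upstream responses (the default)
        proxy_buffer_size 4k;          # buffer for the first part of the response (the header)
        proxy_buffers 4 8k;            # number and size of per-connection buffers
        proxy_max_temp_file_size 0;    # no spill to disk: once the buffers are full,
                                       # reading from the upstream pauses until the
                                       # client drains them
    }

Shrinking the values of proxy_buffers is exactly the "reduce buffer(s) size + number" lever I mention below.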
That is how/why, to my understanding (again), nginx was supposed to be able to handle slow clients. The intuitive solution, if this were to happen to me, would have been to reduce the buffers' size and number (the proxy_buffers values sketched above) to ensure they fill up more quickly (and thus stop downloading from the upstream at the same velocity).

In the end, the computation of the 'lost' resources is done:
- in space: number of 'attackers' * number of buffers * buffer size
- in time: the space calculated above / upstream downloading speed (an average would be enough)

(For instance, with illustrative figures: 1,000 attackers * 4 buffers * 8 KiB ≈ 31 MiB of memory held at any one time.)

Is your patch not redundant with existing capabilities? You just added another calculation, competing with the one above, multiplying the above values by 10%. You could just as well have reduced the settings above to meet the same result, could you not? Not to mention the risk of introducing vulnerabilities/instabilities with a custom patch.

What if the attacker modifies its client to make sure it downloads 50% of the file (thanks to its /dev/null)? Your patch becomes useless and the resource usage grows back to what it used to be... On the other hand, the standard way of modifying how you handle upstream data would have resisted, whatever amount of data any client grabs.

What have I missed here?
---
*B. R.*