> On Nov 13, 2017, at 7:54 AM, Tommy Pauly <tpa...@apple.com> wrote:
> 
> The code I work with does TCP_NOTSENT_LOWAT by default, so we have a fair 
> amount of experience with it.

I figured :-)  AFAIK Stuart invented it…


> If you're using sockets with TCP_NOTSENT_LOWAT, and you're doing asynchronous 
> non-blocking socket operations, then you don't have the annoyance of getting 
> these "empty" callbacks you're referring to—it just changes how aggressively 
> the writable event fires, making it back off a bit.

Ah, ok, sure.


> With a Post-like API, having something directly like TCP_NOTSENT_LOWAT 
> doesn't make much sense. Instead, the implementation may internally use that, 
> but the equivalent application feedback is the completion that is returned 
> based on the write messages. The timing of when the stack indicates that 
> something has been written allows the application to understand the 
> back-pressure properties of the transport, and if it is able to generate or 
> fetch data more slowly to match the back-pressure, it can. Otherwise, it can 
> simply keep writing and the data will get enqueued within the library.

I mean, the way you describe it here, the application has no means to say how 
much it wants the layers below to buffer. I think that would be useful, no?
A big buffer is more convenient (based on the number that write returns); a 
smaller buffer lets the application keep control over the data until the last 
minute. But then, doesn’t this amount to the same "nuisance vs. control of data 
until the last minute" trade-off that I described?


> Dependencies between the messages that are being written, then, doesn't 
> actually come into play much here. Dependencies are hints to the 
> implementation and protocol stack of how to order work when scheduling to 
> send (PRE-WRITE). TCP_NOTSENT_LOWAT or the completions to indicate that 
> writes have completed are all about signaling back to the application to 
> provide back-pressure (POST-WRITE).

I understand the functionality is separate, but you can achieve the same effect 
with it: from an application’s point of view, if I have blocks with dependencies 
and if you allow me to tune the buffer below the write call, then I can decide 
until the last minute that I’d rather not send a certain data block.

I guess additionally offering a way to describe dependencies doesn’t hurt, btw 
… What made me worry about the complexity is the possibility that these 
dependencies change over time, in which case the application would want to 
send an update to post-sockets.  I guess the easy way out is not to offer such 
dynamics (since in that case an application might be better off handling it in 
the way I describe above?).

Cheers,
Michael

_______________________________________________
Taps mailing list
Taps@ietf.org
https://www.ietf.org/mailman/listinfo/taps