I'm a bit surprised that a ~100K message isn't flushing multiple times 
(using a 16K frame size), unless there are concurrent calls here?

Otherwise, the 3 flushes per server unary call (headers + data + status) 
are expected if the message isn't large.
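
As an aside, since syscall.writev comes up in the quoted question below: 
Go 1.8's net.Buffers uses writev on platforms that support it, so the 
header, data, and status frames could in principle be handed to the kernel 
in one call. A minimal sketch, not grpc-go's actual transport code (the 
package, function, and parameter names here are made up):

    package transport // hypothetical package, not grpc-go's

    import "net"

    // writeUnaryResponse hands the already-serialized header, data, and
    // status frames to the kernel together. net.Buffers (Go 1.8+) uses
    // writev where the platform supports it, so this can be one syscall
    // instead of three separate write+flush steps.
    func writeUnaryResponse(conn net.Conn, header, data, status []byte) error {
        bufs := net.Buffers{header, data, status}
        _, err := bufs.WriteTo(conn)
        return err
    }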

A couple of reasons/problems I've seen:

* The write flush comes directly from the app's message send.
    * possible fix: defer message writes to a queue drained by a separate 
writer goroutine for batching (on top of other changes, I've seen some 
benefit in streaming QPS from this); see the sketch after this list.

* Must be sure that headers get sent regardless of the message (currently 
it flushes after each write, so it isn't stuck waiting for the http2 flow 
control window to enlarge).
    * Must also be sure that steady progress is made when sending a large 
message to a receiver with a small http2 flow control window.
    * https://github.com/grpc/grpc-go/pull/973 addresses this by flushing 
only when necessary.
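
Here's the rough shape of the writer-goroutine batching mentioned above (a 
sketch under assumptions, not grpc-go's actual internals; the names, queue 
size, and buffer size are arbitrary): senders enqueue frames instead of 
writing directly, and a single goroutine drains whatever is already queued 
before issuing one flush.

    package transport // hypothetical, not grpc-go's implementation

    import (
        "bufio"
        "net"
    )

    type batchWriter struct {
        frames chan []byte
        bw     *bufio.Writer
    }

    func newBatchWriter(conn net.Conn) *batchWriter {
        w := &batchWriter{
            frames: make(chan []byte, 64),              // queue size is arbitrary
            bw:     bufio.NewWriterSize(conn, 32*1024), // buffer size is arbitrary
        }
        go w.loop()
        return w
    }

    // enqueue is what the send path would call instead of writing directly.
    func (w *batchWriter) enqueue(frame []byte) { w.frames <- frame }

    // loop writes each frame into the buffered writer, then drains anything
    // else already queued before flushing, so a burst of header/data/status
    // frames from concurrent calls becomes one flush instead of several.
    // (Error handling is omitted for brevity.)
    func (w *batchWriter) loop() {
        for frame := range w.frames {
            w.bw.Write(frame)
        drain:
            for {
                select {
                case f := <-w.frames:
                    w.bw.Write(f)
                default:
                    break drain
                }
            }
            w.bw.Flush()
        }
    }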

On Thursday, January 19, 2017 at 6:57:41 PM UTC-8, Zeymo Wang wrote:
>
>
> Streaming call.
>
> I find at least 3 flushes per request on the streaming server: end-header 
> flush + end-data flush + end-status flush. How can I reduce this? The 
> data is just under 100K, not split into separate data frames.
>
> On Friday, January 20, 2017 at 1:33:28 AM UTC+8, apo...@google.com wrote:
>>
>> This looks like an issue that has been seen in grpc-go earlier. What 
>> types of calls are these - unary? or streaming?
>>
>> Indeed, for unary calls, each call currently flushes writes after 
>> sending headers and again after sending the status. The message portions 
>> of unary and streaming calls (split into separate http2 data frames if 
>> the message is large) both have some, but only small, amounts of 
>> batching of syscall.Writes.
>>
>> There has been some work towards reducing this, but I think the issue is 
>> still somewhat expected with the latest update. (There's one 
>> experimental solution for reducing unary call flushes in 
>> https://github.com/grpc/grpc-go/pull/973.)
>>
>> On Thursday, January 19, 2017 at 1:25:36 AM UTC-8, Zeymo Wang wrote:
>>>
>>> I forked grpc-go to use it as a gateway, just to enjoy the h2c benefit 
>>> (I also removed the pb IDL feature). I implemented 0-RTT TLS (cgo 
>>> invoking libsodium) to replace the standard TLS, and request handling 
>>> just does an http request to the upstream. In a benchmark of 
>>> bidirectional streaming RPC, I see high CPU usage under not much load 
>>> (maxConcurrencyStream = 100 or 1000, same result). According to "go 
>>> tool pprof", syscall.Write consumes a lot of CPU and response time 
>>> (maybe cgo performance?). Could the at least 3 calls to syscall.Write 
>>> (flush) per request (header + data + status) be causing this? Does 
>>> original grpc have this issue? How can I resolve or reduce the 
>>> syscall.Write invocations? Or should I wait for Go to add 
>>> syscall.writev?
>>>
>>>
>>> <https://lh3.googleusercontent.com/-0TCAcilsguw/WICDnBBHKBI/AAAAAAAAB_k/2OtJVaBq9ykgXPKboM43S8PWR1OXT59oQCEw/s1600/perf.jpg>
>>>
>>> <https://lh3.googleusercontent.com/-2HXrQl6GgH0/WICENZODGQI/AAAAAAAAB_o/VUTPcgod4wQsI8Csoh7rVSBwcEe-n3yqQCLcB/s1600/strace.jpg>
>>>
>>> <https://lh3.googleusercontent.com/-IV3cZmIFYso/WICD8niuwnI/AAAAAAAAB_g/liXcah0inB4RQJxujk57SYxfzmjaVCvgQCEw/s1600/pprof.png>
>>>
