Hi Yash,

Recently we have been trying both approaches (async and callbacks), since 
we are streaming chunks of a file to the Swift clients (our server is C++). 
Our observations are:
1. When using callbacks, the client receives between 25-40 Mb/s.
2. When using the async APIs, the client receives between 0-97 Kb/s 
(which is not good).

I agree with you that callbacks seem to perform better. However, we also 
need to use the async approach, since we will be serving a lot of clients 
and we do not want the code to block.
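
For reference, below is a minimal sketch of the server-side streaming 
pattern we are comparing against, written with the callback API; the 
reactor never blocks a thread while a write is in flight. The proto names 
(FileService, DownloadRequest, Chunk with a bytes "data" field) and the 
generated header are assumptions, not our actual service.

// Hypothetical proto:
//   service FileService { rpc Download(DownloadRequest) returns (stream Chunk); }
#include <fstream>
#include <grpcpp/grpcpp.h>
#include "file_service.grpc.pb.h"  // hypothetical generated code

class DownloadReactor : public grpc::ServerWriteReactor<Chunk> {
 public:
  explicit DownloadReactor(const std::string& path)
      : in_(path, std::ios::binary) {
    NextChunk();  // start the first write (or finish immediately)
  }

 private:
  void OnWriteDone(bool ok) override {
    if (!ok) {  // client went away or the stream broke
      Finish(grpc::Status(grpc::StatusCode::UNKNOWN, "write failed"));
      return;
    }
    NextChunk();
  }

  void OnDone() override { delete this; }  // the reactor owns itself

  void NextChunk() {
    char buf[64 * 1024];
    in_.read(buf, sizeof(buf));
    if (in_.gcount() > 0) {
      chunk_.set_data(buf, static_cast<size_t>(in_.gcount()));
      StartWrite(&chunk_);  // only one write in flight per stream
    } else {
      Finish(grpc::Status::OK);
    }
  }

  std::ifstream in_;
  Chunk chunk_;
};

class FileServiceImpl final : public FileService::CallbackService {
  grpc::ServerWriteReactor<Chunk>* Download(
      grpc::CallbackServerContext* ctx, const DownloadRequest* req) override {
    return new DownloadReactor(req->path());  // path() is a hypothetical field
  }
};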

Thanks in advance.

On Wednesday, May 31, 2023 at 4:13:02 PM UTC-5 yas...@google.com wrote:

> You would find some examples here - 
> https://github.com/grpc/grpc/tree/master/examples/cpp
>
> The documentation would be best found in the headers - 
>
> https://github.com/grpc/grpc/blob/master/include/grpcpp/support/client_callback.h
>
> https://github.com/grpc/grpc/blob/master/include/grpcpp/support/server_callback.h
>
> Also, https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md 
> for some additional reading
>
> On Thursday, May 18, 2023 at 12:15:47 PM UTC-7 Ashutosh Maheshwari wrote:
>
>> Hello Yash,
>>
>> Can you please point me to the documentation of the Callback API?
>>
>> Regards
>> Ashutosh
>>
>>
>> On Wednesday, May 17, 2023 at 6:54:25 AM UTC+5:30 yas...@google.com 
>> wrote:
>>
>> I'll preface this by saying - use the C++ callback API. Instead of trying 
>> to understand the async CQ-based API, the callback API should be your 
>> choice; it is our current recommendation.
>>
>> >  Only one write is permissible per stream. So we cannot write another 
>> tag on a stream until we receive a response tag from the completion queue 
>> for the previous write.
>>
>> This is correct.
>>
>> I'll end this by again saying - Use the C++ callback API.
>>
>> > Recently, I came across an issue where the gRPC client became a zombie 
>> process after its parent Python application was aborted. In this condition, 
>> the previous Write on the stream connected to that client probably never 
>> got acknowledged, and I did not receive the Write tag back on the 
>> completion queue. My program kept waiting for that tag, other messages 
>> continued to queue up because the previous Write never finished its life 
>> cycle, and I could not free the resources associated with that tag either.
>>
>> This can be easily avoided by configuring keepalive. Refer -
>> 1) https://github.com/grpc/grpc/blob/master/doc/keepalive.md
>> 2) 
>> https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md
>> 3) 
>> https://github.com/grpc/proposal/blob/master/A8-client-side-keepalive.md
>>
>> That also answers your question about what happens if, for some reason, a 
>> client stops reading: keepalive would handle it.
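>>
>> For concreteness, a minimal server-side keepalive configuration sketch 
>> (the values are illustrative; the GRPC_ARG_* macros come from gRPC's 
>> channel-argument headers):
>>
>> #include <grpcpp/grpcpp.h>
>>
>> void ConfigureKeepalive(grpc::ServerBuilder& builder) {
>>   // Ping the client every 20 s while the connection is otherwise idle.
>>   builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 20000);
>>   // Consider the connection dead if a ping is not acked within 10 s.
>>   builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
>>   // Keep pinging even when there are no in-flight RPCs.
>>   builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
>> }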
>>
>> > My question is, if a write tag for a previous write does not surface on 
>> the completion queue, shall we wait for it indefinitely? What should be the 
>> strategy to handle this scenario?
>>
>> That depends highly on your API/service. If, for some reason, the RPC is 
>> taking much longer than you want and you suspect that the client is being 
>> problematic (i.e. responding to HTTP/2 keepalive pings but not making 
>> progress on RPCs), you could always just end the RPC.
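>>
>> One way to end such an RPC from the server side (a sketch; here "ctx_" 
>> stands for the grpc::ServerContext owned by that call's handler):
>>
>> ctx_.TryCancel();  // cancels the call; its pending async operations still
>>                    // surface their tags, but with ok == false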
>>
>> On Wednesday, May 10, 2023 at 12:17:46 AM UTC-7 Ashutosh Maheshwari wrote:
>>
>> Hello,
>>
>> My question is, if a write tag for a previous write does not surface on 
>> the completion queue, shall we wait for it indefinitely? What should be the 
>> strategy to handle this scenario?
>>
>> Regards
>> Ashutosh
>> On Wednesday, April 26, 2023 at 11:11:57 PM UTC+5:30 apo...@google.com 
>> wrote:
>>
>> First, it's important to clarify what it means to wait for a "Write" tag 
>> to complete on a completion queue:
>>
>> When an async "Write" is initially attempted, the message can be fully or 
>> partially buffered within gRPC. The corresponding tag will surface on the 
>> completion queue that the Write is associated with essentially after gRPC 
>> is done buffering the message, i.e. after it has written the relevant 
>> bytes out to the wire.
>>
>> This is unrelated to whether or not a "response" has been received from 
>> the peer, on the same stream.
>>
>> So, the highlighted comment means that you can only have one async write 
>> "pending" per RPC at any given time. I.e. in order to start a new write on 
>> a streaming RPC, one must wait for the previous write on that same stream 
>> to "complete" (i.e. for its tag to be surfaced).
>>
>> Multiple pending writes on different RPCs of the same completion queue 
>> are fine.
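>>
>> A minimal sketch of that rule with the async API (ServerAsyncWriter, the 
>> completion-queue wiring, the Chunk message and the MakeNextChunk helper 
>> are all assumptions here; the tag values are illustrative):
>>
>> #include <grpcpp/grpcpp.h>
>> #include <grpcpp/support/async_stream.h>
>>
>> void WriteTwoChunks(grpc::ServerAsyncWriter<Chunk>& writer,
>>                     grpc::ServerCompletionQueue* cq) {
>>   Chunk chunk = MakeNextChunk();                    // hypothetical helper
>>   writer.Write(chunk, reinterpret_cast<void*>(1));  // write #1 now pending
>>
>>   void* tag = nullptr;
>>   bool ok = false;
>>   // The tag for write #1 must come back before write #2 may start on this
>>   // stream; writes on other RPCs could proceed in the meantime.
>>   if (cq->Next(&tag, &ok) && ok && tag == reinterpret_cast<void*>(1)) {
>>     chunk = MakeNextChunk();
>>     writer.Write(chunk, reinterpret_cast<void*>(2));
>>   }
>> }
>>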
>> On Saturday, April 22, 2023 at 12:58:57 PM UTC-7 Ashutosh Maheshwari 
>> wrote:
>>
>> Hello gRPC Team,
>>
>> I have taken an extract from 
>> *“include/grpcpp/impl/codegen/async_stream.h”*
>>
>> "
>>   /// Request the writing of \a msg with identifying tag \a tag.
>>   ///
>>   /// Only one write may be outstanding at any given time. This means that
>>   /// after calling Write, one must wait to receive \a tag from the completion
>>   /// queue BEFORE calling Write again.
>>   /// This is thread-safe with respect to \a AsyncReaderInterface::Read
>>   ///
>>   /// gRPC doesn't take ownership or a reference to \a msg, so it is safe to
>>   /// to deallocate once Write returns.
>>   ///
>>   /// \param[in] msg The message to be written.
>>   /// \param[in] tag The tag identifying the operation.
>>   virtual void Write(const W& msg, void* tag) = 0;
>> "
>>
>> After reading the highlighted part, I can make the following two 
>> inferences:
>>
>>    1. Only one write is permissible per stream, so we cannot write another 
>>    tag on a stream until we receive a response tag from the completion 
>>    queue for the previous write.
>>    2. Only one write is permissible on the completion queue, with no 
>>    dependency on available streams. When multiple clients connect to the 
>>    gRPC server, we will have multiple streams present. In such a scenario, 
>>    only one client could be responded to at a time due to the 
>>    above-highlighted limitation.
>>
>> Can you please help us understand which of the above inferences is 
>> correct?
>>
>> Recently, I came across an issue where the gRPC client became a zombie 
>> process after its parent Python application was aborted. In this condition, 
>> the previous Write on the stream connected to that client probably never 
>> got acknowledged, and I did not receive the Write tag back on the 
>> completion queue. My program kept waiting for that tag, other messages 
>> continued to queue up because the previous Write never finished its life 
>> cycle, and I could not free the resources associated with that tag either.
>>
>> I was wondering whether I could have gone ahead with Writes on other 
>> streams and queued up messages for this stream until the write tag for 
>> the previous message came back. If I kill the zombie and clean up on the 
>> client, the Write tag is returned.
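>>
>> A sketch of that idea (one outstanding Write per stream, with messages 
>> buffered per stream in the meantime). ServerAsyncWriter, the Chunk 
>> message and the surrounding completion-queue loop are assumptions here, 
>> and synchronization is omitted for brevity:
>>
>> #include <deque>
>> #include <grpcpp/grpcpp.h>
>> #include <grpcpp/support/async_stream.h>
>>
>> struct StreamState {
>>   grpc::ServerAsyncWriter<Chunk>* writer;  // owned by the per-call handler
>>   std::deque<Chunk> pending;               // messages waiting their turn
>>   bool write_in_flight = false;
>> };
>>
>> void SendOrQueue(StreamState& s, Chunk msg, void* tag) {
>>   if (s.write_in_flight) {
>>     s.pending.push_back(std::move(msg));   // previous tag not back yet
>>   } else {
>>     s.write_in_flight = true;
>>     s.writer->Write(msg, tag);             // exactly one Write outstanding
>>   }
>> }
>>
>> // Call this when this stream's Write tag surfaces on the completion queue.
>> void OnWriteTagReturned(StreamState& s, void* tag) {
>>   s.write_in_flight = false;
>>   if (!s.pending.empty()) {
>>     Chunk next = std::move(s.pending.front());
>>     s.pending.pop_front();
>>     s.write_in_flight = true;
>>     s.writer->Write(next, tag);
>>   }
>> }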
>>
>> Alternatively, is it possible to forcibly clean up an inactive gRPC 
>> session? And what would happen if the Write tag were returned after the 
>> internal memory for that tag had already been cleaned up? I guess it 
>> would crash.
>>
>> Please clarify the doubts,
>>
>> Regards
>>
>> Ashutosh (Ciena)
>>
>>
