Two blocking calls on the same channel do not block each other; they will
execute concurrently.
The main reason to use asynchronous calls is when the caller itself
cannot block, for instance in a non-blocking server (think of something
similar to Node.js).
On Tue, Nov 12, 2019 at 10:29 PM wrote:
So you would say it’s OK to create a new stub for every call. But what about
concurrency? Let’s say I create two stubs and they make a call at the same time
on the same channel. Does one stub block the other? And when would you use
asynchronous calls?
Channels are relatively heavyweight, so it can be a good idea not to
create a lot of them (unless you hit some throughput bottleneck). Stubs
are pretty cheap.
Channels and stubs are both thread-safe.
On Tue, Nov 12, 2019 at 1:22 PM wrote:
Sorry, I meant "Is it better to also create only one stub for all these
calls, or to create a stub for every function call..."
On Tuesday, November 12, 2019 at 22:18:15 UTC+1,
martin...@versasec.com wrote:
After hours of reading on the internet and not getting an answer, I will ask
here. We are currently trying to replace SOAP with gRPC for our client-server
architecture. On the client side we are using gRPC C++. Now the question is:
what is the best approach to creating stubs and channels? What I
unders
That makes perfect sense. I was looking at using the class from non-gRPC
code, but it requires grpc_init to be called beforehand. A utility class that
does grpc_init only once in Envoy code is illustrative of my dilemma. I ended up
using Seastar's core include temporary_buffer.h.
Thanks for confirming my guess. It is completely justified, but if I use those
classes in non-gRPC code the metrics will be wrong.
The code is really nice, and hence I was exploring the possibility of using it
outside.
On Nov 12, 2019 at 5:18 PM, 'Vijay Pai' via gr
Alternatively, you can keep track of the number of async operations in progress
plus a flag set by AsyncNotifyWhenDone, as done by the code contributed by
Arpit of Electronic Arts for managing gRPC async server state. You can delete
(like Arpit’s code) or reset (like Vijay’s code in the bench
Those macros are performance-tracking counters used in microbenchmarks. We
count our use of atomic operations, locks, and mallocs since these are all
(relatively) expensive operations and often foreshadow performance
regressions (or improvements). The results of these can be seen on the
Checks
Acorn, thanks for the detailed and correct response. I'll go one step
further than your fourth sentence, though, and say that there is explicitly
no guarantee about the ordering of the Finish and Done events (or, for that
matter, any concurrent operations on the CQ). Your solution for dealing
w
There's a requirement that new RPCs can't be registered once the server has
been shut down, nor can new RPC operations be initiated once the CQ has been
shut down. These are, IMO, the two rough edges in the C++ CQ-based async
API. The shutdown mutex protects against that possibility. Note that it's
Thanks for the explanation. In the same vein, is there any reason to use a
shutdown mutex instead of calling server shutdown and then CQ shutdown? With
these two mutexes removed, would the C++ benchmark program show a marked
improvement?
Thank you.