I researched this topic but have not yet found detailed information in the 
documentation.
How exactly does the thread model of the new callback API work?

When using the synchronous API, my understanding of the threading model is:
- grpc owns the threads, and their number can be limited
- Several RPCs can operate on one thread, but there is a limit
- When too many RPCs are open, the client receives a RESOURCE_EXHAUSTED status
- An application with multiple clients needs at least one thread per open RPC.
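For context, these are the knobs I found for capping a synchronous server's thread usage (a sketch only; the quota name and the numeric limits are placeholders I made up):

```cpp
#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

void ConfigureSyncServer(grpc::ServerBuilder& builder) {
  // Cap the total number of threads gRPC may create for this server.
  // When the quota is exhausted, new RPCs are rejected with RESOURCE_EXHAUSTED.
  grpc::ResourceQuota quota("server_quota");  // name is arbitrary
  quota.SetMaxThreads(32);                    // placeholder limit
  builder.SetResourceQuota(quota);

  // Bound the number of polling threads of the sync server.
  builder.SetSyncServerOption(
      grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
  builder.SetSyncServerOption(
      grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 4);
}
```

(Not shown: registering services and calling BuildAndStart(), which work as usual.)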

In the callback (not the asynchronous) API, my understanding is:
- grpc owns the threads and spawns new threads if needed
- multiple RPCs can be handled on one thread in a non-blocking fashion
For the server, I wonder how this scales with many (I don't have a number in 
mind) RPCs being open. Assuming all 16 threads have been spawned, how many RPCs 
can I operate?
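To make the non-blocking part concrete, here is my mental model of a callback-API unary handler, sketched against the Greeter service from the gRPC hello-world example (the generated header and message names come from that example, not from my own code):

```cpp
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"  // generated from the hello-world proto

// In the callback API no thread blocks for the lifetime of an RPC:
// the handler returns a reactor immediately, and gRPC-owned threads
// invoke the completion callbacks, so many RPCs share few threads.
class GreeterService final : public helloworld::Greeter::CallbackService {
  grpc::ServerUnaryReactor* SayHello(grpc::CallbackServerContext* ctx,
                                     const helloworld::HelloRequest* request,
                                     helloworld::HelloReply* reply) override {
    reply->set_message("Hello " + request->name());
    auto* reactor = ctx->DefaultReactor();
    reactor->Finish(grpc::Status::OK);
    return reactor;  // returns at once; no thread is parked here
  }
};
```

My question is essentially how large the pool driving those callbacks can grow.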
Assuming I have an application with multiple clients, each connecting to a 
different server: would all the clients share the same thread pool, or would 
(in the worst case) each client spawn 16 threads of its own?

Especially when designing microservices, where each service offers a server 
but can also be a client of another service, it may be important not to let 
the number of threads grow too much.

Thanks
