I am writing an async gRPC server, and I want to control the total memory
it may use. It looks like grpc::ResourceQuota is useful for this, but from
checking the places where it is called, it seems the memory quota is only
checked when accepting a new connection.
What if a client keeps sending API calls on an already-accepted connection?
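For reference, this is roughly how a quota gets attached to a server. A minimal sketch, assuming the public grpc::ResourceQuota / ServerBuilder C++ API; the 256 MiB figure, thread cap, and listening address are placeholders, and the commented-out service registration stands in for your own async service:

```cpp
#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

#include <memory>

std::unique_ptr<grpc::Server> BuildServerWithQuota() {
  // Soft cap on the memory gRPC's buffer pools may use, in bytes.
  grpc::ResourceQuota quota("server_memory_quota");
  quota.Resize(256 * 1024 * 1024);  // placeholder: 256 MiB
  quota.SetMaxThreads(32);          // also caps threads the library may spawn

  grpc::ServerBuilder builder;
  builder.SetResourceQuota(quota);
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());
  // builder.RegisterService(&my_async_service);  // hypothetical service
  return builder.BuildAndStart();
}
```

As noted above, this quota appears to gate new connections rather than individual calls on existing ones, which is exactly the gap being asked about.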
For executor threads, we can use Executor::SetThreadingAll(false) to shut
them down. Even if there are no threads, enqueued work still runs,
according to the following code:
void Executor::Enqueue(grpc_closure* closure, grpc_error_handle error,
                       bool is_short) {
  ...
  do {
    retry_push = false;
    size_t cur_thread_count = ...
Thank you Craig. What is the timeline for that work?
I understand the current implementation does not give us control over the
thread counts. From reading the code, the *Executor* can create up to twice
as many threads as there are CPU cores. I have more questions here:
1. What kinds of *internal* threads does gRPC create?
I am trying to understand how many internal threads gRPC creates in async
mode. I have found some timer threads and some threads in the Executor.
Are there any other threads? Are there any short-lived threads?
Also, are there any threads that receive bytes from the socket and
deserialize them?