Hi,

I have a question regarding the threading and I/O model of a synchronous
gRPC server.

As I understand it, gRPC uses epoll together with pollsets and polling
islands to watch several fds for incoming data. When data is ready to be
read, the appropriate worker thread is woken up to handle it (simplified,
of course).
In asynchronous gRPC with only one thread and one completion queue, all fds
(one client <-> one fd) are in the same pollset and the same polling island,
etc. (I am not confident in the exact details of the implementation). This
enables one thread to serve all clients (or n threads to serve all clients).
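For reference, this is roughly the single-threaded async pattern I have in mind, based on the async hello-world example; CallData here is just a placeholder name for the per-RPC state class, not something from the library:

```cpp
#include <grpcpp/grpcpp.h>

// Hypothetical per-RPC state machine (as in the async greeter example).
class CallData {
 public:
  void Proceed(bool ok);  // advance this RPC's state machine
};

// One thread drains the completion queue for ALL clients: each Next()
// returns the tag of whichever RPC's fd became ready, regardless of
// which client it belongs to, so no thread is parked on a single fd.
void HandleRpcs(grpc::ServerCompletionQueue* cq) {
  void* tag;
  bool ok;
  while (cq->Next(&tag, &ok)) {
    static_cast<CallData*>(tag)->Proceed(ok);
  }
}
```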

Now to my actual question: what does this look like in synchronous gRPC? To
my understanding, if I limit the ResourceQuota to a certain number of
threads, then I can only serve that many concurrent clients. Why is it like
this? That is, in synchronous gRPC each "worker thread" seems to be blocked
on a single client connection; why is that?
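For concreteness, this is how I am capping the threads (a sketch; the port, quota name, and thread count are just example values):

```cpp
#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

int main() {
  // Cap the number of threads the sync server may spawn. With this in
  // place I seem to be limited to ~8 concurrent clients.
  grpc::ResourceQuota quota("limited_quota");
  quota.SetMaxThreads(8);

  grpc::ServerBuilder builder;
  builder.SetResourceQuota(quota);
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());
  // builder.RegisterService(&service);  // sync service registered here

  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
}
```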

Could someone explain the threading model / I/O model of synchronous gRPC?
Or where could I read up on it on my own?
Thank you,
Rasmus

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/0fb2b71a-fdb1-4011-9e6e-93e198af9118o%40googlegroups.com.