Sorry for the late reply. We are actually using a bidirectional stream observer. Here is a model of the implementation.

Multiple clients can connect to the server. Each client sends chunks of data (16 MB per chunk) to the server for processing. In ClientRequestObserver#onNext(), the request is added to a pending queue; if the queue is full, the request is rejected and the client retries after some time. Because the requests are large, these rejected requests create a lot of garbage in the system. A background service polls the pending-request queue and processes the entries. The requirement is to control the flow of client requests based on the size of the pending queue, as in the sketch below.
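To make the model concrete, below is a minimal sketch of how such an observer could use gRPC's manual inbound flow control, so the client is held back by the transport instead of being rejected: the server disables automatic message delivery and calls request(n) only as queue slots free up. This is not our production code; ChunkServiceGrpc, ChunkRequest, ChunkReply, and processChunk are hypothetical stand-ins for the real Ratis messages and generated stubs.

import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ChunkServiceImpl extends ChunkServiceGrpc.ChunkServiceImplBase {
  private static final int MAX_PENDING = 10;

  @Override
  public StreamObserver<ChunkRequest> process(StreamObserver<ChunkReply> responseObserver) {
    ServerCallStreamObserver<ChunkReply> serverObserver =
        (ServerCallStreamObserver<ChunkReply>) responseObserver;
    // Take over inbound flow control: the transport now delivers a message
    // only when we ask for one with request(n), so a busy server pushes the
    // client back through HTTP/2 flow control instead of rejecting requests.
    serverObserver.disableAutoInboundFlowControl();

    BlockingQueue<ChunkRequest> pending = new ArrayBlockingQueue<>(MAX_PENDING);

    // Background worker: drain the queue, process, reply, and ask for one
    // more message for each slot freed.
    Thread worker = new Thread(() -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          ChunkRequest request = pending.take();
          serverObserver.onNext(processChunk(request));
          serverObserver.request(1);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    worker.start();

    // Allow up to MAX_PENDING messages in flight. Together with the
    // request(1) per drained element, this keeps (queued + in-flight)
    // <= MAX_PENDING, so pending.add() below can never overflow.
    serverObserver.request(MAX_PENDING);

    return new StreamObserver<ChunkRequest>() {
      @Override
      public void onNext(ChunkRequest request) {
        pending.add(request); // never full, by the invariant above
      }

      @Override
      public void onError(Throwable t) {
        worker.interrupt();
      }

      @Override
      public void onCompleted() {
        // A real implementation would drain remaining entries first.
        worker.interrupt();
        serverObserver.onCompleted();
      }
    };
  }

  // Placeholder for the actual chunk processing (hypothetical).
  private ChunkReply processChunk(ChunkRequest request) {
    return ChunkReply.getDefaultInstance();
  }
}

With this scheme the pending queue can never overflow, so nothing is rejected and no request garbage is created: once the queue is full the server simply stops asking for messages, and HTTP/2 flow control blocks the client's sends when its window fills.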
On Wednesday, January 29, 2020 at 5:05:23 AM UTC+5:30, Penn (Dapeng) Zhang wrote:
>
> I assume your RPC is unary type (correct me if it's not the case), you can
> (1) use NettyServerBuilder.maxConcurrentCallsPerConnection() to limit the
> number of concurrent calls per client channel; (2) in server application
> implementation, send response slowly if possible (e.g. sleep a little bit
> before sending out the response when server is too busy). To limit the
> total number of connections to the server, the discussion in
> https://github.com/grpc/grpc-java/issues/1886 may help.
>
> On Friday, January 17, 2020 at 1:42:28 AM UTC-8 lokes...@gmail.com wrote:
>
>> Apache Ratis is a Java implementation of the RAFT consensus protocol and
>> uses gRPC as its transport. Currently we have multiple clients connecting
>> to a server. The server has limited resources available to handle the
>> client requests, and it fails the requests it cannot handle. These
>> resources are in the application layer. Since client requests can be large,
>> failing them creates a lot of garbage. We want to push back on the clients
>> until resources become available, without creating a lot of garbage.
>> Based on my understanding, flow control in gRPC works by controlling the
>> amount of data buffered in the receiver. In our use case we want the
>> server to have no more than x requests to process at a time. Let's assume
>> the server enqueues the requests it receives in a queue for processing
>> (I don't think the isReady control would work in this scenario?). Is it
>> possible for the server to limit the number of requests it receives from
>> the clients? Is it possible for the server to stop receiving data from
>> the socket?
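For completeness, here is a minimal sketch of the per-connection cap Penn suggested, assuming the grpc-netty transport. The port and the limit of 16 are illustrative, and note that this caps concurrent calls, not messages within a single stream:

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class ChunkServer {
  public static void main(String[] args) throws Exception {
    Server server = NettyServerBuilder.forPort(50051) // port is illustrative
        // Cap the number of concurrent RPCs per client connection; the
        // HTTP/2 MAX_CONCURRENT_STREAMS setting makes excess calls wait
        // on the client side.
        .maxConcurrentCallsPerConnection(16)
        .addService(new ChunkServiceImpl()) // the service sketched above
        .build()
        .start();
    server.awaitTermination();
  }
}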