Hello,
I have a Python service and a C++ client. The client produces data very 
quickly. The service writes the data to an external database and is 
relatively slow. I am using a client-streaming RPC method (called 
SendData), so it is easy to overwhelm the service.
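For context, here is roughly what the setup looks like on the C++ side. This 
is a simplified sketch; DataService, DataChunk, Ack and the header name are 
placeholders for my real types, not the actual names:

// Assumed .proto shape (simplified):
//
//   service DataService {
//     rpc SendData(stream DataChunk) returns (Ack);   // client-streaming
//   }
#include <memory>
#include <grpcpp/grpcpp.h>
#include "data_service.grpc.pb.h"  // generated code (placeholder file name)

int main() {
  auto channel = grpc::CreateChannel("localhost:50051",
                                     grpc::InsecureChannelCredentials());
  std::unique_ptr<DataService::Stub> stub = DataService::NewStub(channel);

  grpc::ClientContext ctx;
  Ack ack;
  // Open the client-streaming call; all data goes through this writer.
  std::unique_ptr<grpc::ClientWriter<DataChunk>> writer =
      stub->SendData(&ctx, &ack);

  // (The producer loop that feeds the writer is sketched further below.)

  writer->WritesDone();
  grpc::Status status = writer->Finish();
  return status.ok() ? 0 : 1;
}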

The question is: is there any protection that prevents the service from 
running out of memory? I did a small experiment, and it seems that if the 
client floods the service (at least when it uses the synchronous C++ API), 
gRPC eventually blocks the client (inside ClientWriter::Write) and lets the 
server catch up. Once the server makes some progress, the client is 
unblocked again and continues.
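Concretely, the blocking happens in a producer loop like this (again a 
simplified sketch, continuing from the setup above; ProduceChunk stands in 
for my real, fast data source):

// Producer loop on the client.
DataChunk chunk;
while (ProduceChunk(&chunk)) {
  // With a slow server, this call eventually stops returning immediately
  // and blocks, then resumes once the server has consumed more of the
  // stream.
  if (!writer->Write(chunk)) {
    break;  // the stream was broken (e.g. the call was cancelled)
  }
}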

I am happy that something like this is in place, but I have no idea where 
it comes from. Does anyone know the details of how this works? Is it 
documented? At what layer does it operate? Is it configurable? I read about 
retries and server pushback, but they do not seem usable with streaming, so 
that is probably not it.

Vojtech
