[grpc-io] Re: C++ Client Async write after read

2023-05-31 Thread 'yas...@google.com' via grpc.io
Sorry for the late response. This fell through the cracks. It's fine to have a read and a write active at the same time. It's only problematic to have multiple reads or multiple writes outstanding at the same time. On Sunday, March 13, 2022 at 2:06:36 PM UTC-7 Trending Now wrote: > Any update please ? > > Le
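To make that concrete, here is a minimal sketch with the CQ-based async API, assuming a hypothetical bidi-streaming method `Chat` on an `Echo` service (the service, method, and tag values are placeholders, not anything from the thread): one Write and one Read are in flight at the same time, and neither is reissued until its tag comes back.

// Sketch only: assumes a hypothetical bidi-streaming RPC
//   rpc Chat(stream EchoRequest) returns (stream EchoResponse);
// generated into echo.grpc.pb.h. Tag values are arbitrary labels.
#include <grpcpp/grpcpp.h>
#include "echo.grpc.pb.h"

void OneReadOneWriteInFlight(Echo::Stub* stub) {
  grpc::ClientContext ctx;
  grpc::CompletionQueue cq;
  // Start the call; the stream is usable once the start tag completes.
  auto stream = stub->AsyncChat(&ctx, &cq, reinterpret_cast<void*>(1));
  void* tag;
  bool ok;
  cq.Next(&tag, &ok);  // wait for the start tag (1)

  EchoRequest req;
  EchoResponse resp;
  // One outstanding Write plus one outstanding Read is fine; what is not
  // allowed is two Writes (or two Reads) in flight at once.
  stream->Write(req, reinterpret_cast<void*>(2));
  stream->Read(&resp, reinterpret_cast<void*>(3));

  // Drain both completions before issuing the next Write or Read.
  for (int i = 0; i < 2; ++i) cq.Next(&tag, &ok);
}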

[grpc-io] Re: Server Threadpool Exhausted

2023-05-31 Thread 'yas...@google.com' via grpc.io
Hi, if you want such granular control over the threading model, it's better to use the async model. Currently, the CQ-based async API is the only API that can serve this purpose. I would have liked to recommend the callback API along with the ability to set your own `EventEngine`, but we
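For illustration, a rough sketch of the CQ-based async model with explicit threading, loosely modeled on examples/cpp/helloworld/greeter_async_server.cc and assuming the generated code from the helloworld example proto; each worker thread owns its own completion queue, so the thread count and queue assignment are entirely yours to choose.

// Sketch: CQ-based async server with one completion queue per worker thread.
#include <memory>
#include <thread>
#include <vector>

#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"

using helloworld::Greeter;
using helloworld::HelloReply;
using helloworld::HelloRequest;

class CallData {
 public:
  CallData(Greeter::AsyncService* service, grpc::ServerCompletionQueue* cq)
      : service_(service), cq_(cq), responder_(&ctx_) {
    // Ask for a new SayHello call; `this` is the tag that comes back on cq_.
    service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_, this);
  }
  void Proceed() {
    if (!finished_) {
      new CallData(service_, cq_);  // accept the next call on this queue
      reply_.set_message("Hello " + request_.name());
      finished_ = true;
      responder_.Finish(reply_, grpc::Status::OK, this);
    } else {
      delete this;  // Finish tag completed
    }
  }

 private:
  Greeter::AsyncService* service_;
  grpc::ServerCompletionQueue* cq_;
  grpc::ServerContext ctx_;
  HelloRequest request_;
  HelloReply reply_;
  grpc::ServerAsyncResponseWriter<HelloReply> responder_;
  bool finished_ = false;
};

int main() {
  Greeter::AsyncService service;
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.RegisterService(&service);

  // You decide how many threads exist and which queue each one polls.
  constexpr int kThreads = 4;
  std::vector<std::unique_ptr<grpc::ServerCompletionQueue>> cqs;
  for (int i = 0; i < kThreads; ++i) cqs.push_back(builder.AddCompletionQueue());
  auto server = builder.BuildAndStart();

  std::vector<std::thread> workers;
  for (auto& cq : cqs) {
    workers.emplace_back([&service, cq = cq.get()] {
      new CallData(&service, cq);  // seed the queue with one pending call
      void* tag;
      bool ok;
      while (cq->Next(&tag, &ok)) {
        if (ok) static_cast<CallData*>(tag)->Proceed();
        else delete static_cast<CallData*>(tag);
      }
    });
  }
  for (auto& t : workers) t.join();
  return 0;
}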

[grpc-io] Re: grpc-cpp: Interceptor release plan

2023-05-31 Thread 'yas...@google.com' via grpc.io
Hi, sorry for the late response. We've identified some improvements we want to make to the API, and hence the delay in stabilizing it. We'll be working on this soon though. Please stay posted. On Friday, May 6, 2022 at 1:03:31 AM UTC-7 Luca Dev wrote: > Dear Maintainer of grpc, > > Are there

[grpc-io] Re: Progress indicator for a grpc C++ async or callback service?

2023-05-31 Thread 'yas...@google.com' via grpc.io
The gRPC library does not provide any such mechanism built in, but you could imagine writing a gRPC service with pause/resume functionality, where it stops serving requests or cancels incoming ones until resume is invoked. On Thursday, March 24, 2022 at 4:15:14 AM UTC-7 Iro Karyoti
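As a very rough illustration of that idea (not a gRPC feature), a sync-API sketch built on the helloworld example service, where a hypothetical pause flag makes the handler reject requests until resumed; how Pause/Resume get triggered (another RPC, an admin endpoint) is up to you.

// Sketch: a service that stops serving while "paused".
#include <atomic>
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"

class PausableGreeter final : public helloworld::Greeter::Service {
 public:
  void Pause() { paused_ = true; }
  void Resume() { paused_ = false; }

  grpc::Status SayHello(grpc::ServerContext*,
                        const helloworld::HelloRequest* req,
                        helloworld::HelloReply* reply) override {
    if (paused_) {
      // While paused, reject work instead of serving it.
      return grpc::Status(grpc::StatusCode::UNAVAILABLE, "service paused");
    }
    reply->set_message("Hello " + req->name());
    return grpc::Status::OK;
  }

 private:
  std::atomic<bool> paused_{false};
};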

[grpc-io] gRPC java: Provide ScheduledExecutorService setter in NettyChannelBuilder

2023-05-31 Thread dan...@gmail.com
Hello, I have observed that the ScheduledExecutorService utilized for setting up deadlines and exiting idle mode is obtained by calling NettyClientTransport.getScheduledExecutorService, which returns an EventLoopGroup instance that was previously set in NettyChannelBuilder.eventLoopGroup().

[grpc-io] Re: how to check if server is up/down from a client

2023-05-31 Thread 'yas...@google.com' via grpc.io
Sorry for the late reply. From what I'm reading, health checking is exactly what you want. I don't understand why you don't want to use it. https://github.com/grpc/grpc/blob/master/doc/health-checking.md About using the channel state - Just because a channel is not in the connected state,
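For reference, a small sketch of probing health from a C++ client, assuming you have generated stubs from gRPC's health/v1/health.proto as described in that doc; an empty service name asks about the server as a whole.

// Sketch: query the standard health-checking service from a client.
#include <chrono>
#include <string>

#include <grpcpp/grpcpp.h>
#include "health.grpc.pb.h"  // generated from health/v1/health.proto

bool IsServing(const std::string& target) {
  auto channel =
      grpc::CreateChannel(target, grpc::InsecureChannelCredentials());
  auto stub = grpc::health::v1::Health::NewStub(channel);

  grpc::health::v1::HealthCheckRequest request;
  request.set_service("");  // overall server health
  grpc::health::v1::HealthCheckResponse response;
  grpc::ClientContext ctx;
  ctx.set_deadline(std::chrono::system_clock::now() + std::chrono::seconds(2));

  grpc::Status status = stub->Check(&ctx, request, &response);
  return status.ok() &&
         response.status() == grpc::health::v1::HealthCheckResponse::SERVING;
}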

[grpc-io] Re: relaying rpcs Calls in grpc C++

2023-05-31 Thread 'yas...@google.com' via grpc.io
https://github.com/grpc/grpc/blob/2892b24eabbb22b2344aba9c3ba84e529017b684/include/grpcpp/generic/generic_stub.h#L114 The generic APIs are what you are looking for. I don't have an exact example for you, but you could use this as a reference for the Generic APIs -
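As a rough illustration (not from the thread), a generic unary call could look something like the sketch below; the method name and payload are placeholders, and a real relay would forward the ByteBuffer it received from the incoming call rather than building one itself.

// Sketch: a proto-agnostic unary call via GenericStub, the building block
// for relaying RPCs whose message types you don't link against.
#include <memory>
#include <string>

#include <grpcpp/generic/generic_stub.h>
#include <grpcpp/grpcpp.h>

grpc::Status RelayUnary(const std::shared_ptr<grpc::Channel>& channel,
                        const std::string& full_method,  // e.g. "/pkg.Svc/Method"
                        const grpc::ByteBuffer& request,
                        grpc::ByteBuffer* response) {
  grpc::GenericStub stub(channel);
  grpc::ClientContext ctx;
  grpc::CompletionQueue cq;

  auto call = stub.PrepareUnaryCall(&ctx, full_method, request, &cq);
  call->StartCall();

  grpc::Status status;
  call->Finish(response, &status, reinterpret_cast<void*>(1));

  void* tag;
  bool ok;
  cq.Next(&tag, &ok);  // wait for the Finish tag
  return status;
}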

[grpc-io] Re: pure virtual method called. terminate called without an active exception

2023-05-31 Thread yas...@gmail.com
Our tests already run under sanitizers (note that sanitizers on some platforms are known to produce false positives), so assuming there is no bug in gRPC, my first guess would be to check your usage of the API. I haven't seen your code, but please take a look at our examples. Maybe

[grpc-io] Re: maximum concurrent streams in cpp

2023-05-31 Thread 'yas...@google.com' via grpc.io
I don't think that you are running into a limit from max concurrent streams. If you haven't explicitly set a limit of 15, you are not being limited by that arg. What are the symptoms that you are seeing? If it is simply a case of only 15 RPCs being served concurrently, I suspect that the issue
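For context, that limit only applies when something sets it explicitly; on the server that would look roughly like the sketch below (port and value are placeholders). Without a line like this, or a proxy in the path imposing its own limit, gRPC is not capping you at 15.

// Sketch: explicitly setting the max-concurrent-streams channel arg.
#include <grpcpp/grpcpp.h>

void BuildServer() {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.AddChannelArgument(GRPC_ARG_MAX_CONCURRENT_STREAMS, 100);
  // builder.RegisterService(...) as usual.
  auto server = builder.BuildAndStart();
  server->Wait();
}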

[grpc-io] Re: C++: AsyncWrite constraint on completion queue

2023-05-31 Thread 'yas...@google.com' via grpc.io
You can find some examples here - https://github.com/grpc/grpc/tree/master/examples/cpp The documentation is best found in the headers - https://github.com/grpc/grpc/blob/master/include/grpcpp/support/client_callback.h
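As a sketch of the constraint those headers document (at most one outstanding Write and one outstanding Read at a time), a callback-API client reactor might look like the following, with `Echo`/`Chat` standing in for whatever bidi-streaming RPC you actually have; the next StartWrite is only issued from OnWriteDone. The reactor must stay alive until OnDone fires.

// Sketch: callback-API bidi client that keeps one Write in flight at a time.
#include <cstddef>
#include <vector>

#include <grpcpp/grpcpp.h>
#include "echo.grpc.pb.h"  // placeholder generated service

class Chatter : public grpc::ClientBidiReactor<EchoRequest, EchoResponse> {
 public:
  Chatter(Echo::Stub* stub, std::vector<EchoRequest> requests)
      : requests_(std::move(requests)) {
    stub->async()->Chat(&ctx_, this);
    StartRead(&response_);
    NextWrite();
    StartCall();
  }
  void OnWriteDone(bool ok) override {
    if (ok) NextWrite();  // only now is it legal to issue another Write
  }
  void OnReadDone(bool ok) override {
    if (ok) StartRead(&response_);  // likewise, one Read outstanding at a time
  }
  void OnDone(const grpc::Status& status) override {
    // Signal completion to whoever owns this reactor.
  }

 private:
  void NextWrite() {
    if (next_ < requests_.size()) {
      StartWrite(&requests_[next_++]);
    } else {
      StartWritesDone();
    }
  }
  grpc::ClientContext ctx_;
  std::vector<EchoRequest> requests_;
  std::size_t next_ = 0;
  EchoResponse response_;
};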

[grpc-io] Re: gRPC for the R programming language

2023-05-31 Thread Udit Ranasaria
My company is also very much interested in using gRPC for microservices written in R! On Wednesday, February 8, 2023 at 10:17:51 PM UTC-8 Jan Krynauw wrote: > Not sure whether this is allowed, but we are willing to reward anyone able > to look into this: