[grpc-io] Re: C++ Client Async write after read

2023-05-31 Thread 'yas...@google.com' via grpc.io
Sorry for the late response. This fell through the cracks.

It's fine to have a read and a write active at the same time. It's only 
problematic to have multiple reads or multiple writes outstanding at the same 
time.
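
For illustration, here is a rough sketch of that rule with the CQ-based API. 
The `Echo` service, its messages, and the `AsyncBidiEcho` method are 
hypothetical stand-ins for whatever your generated stub provides; error 
handling and shutdown are omitted.

    #include <grpcpp/grpcpp.h>
    #include "echo.grpc.pb.h"  // hypothetical generated header

    void ReadAndWriteConcurrently(Echo::Stub* stub, grpc::CompletionQueue* cq) {
      grpc::ClientContext ctx;
      auto stream = stub->AsyncBidiEcho(&ctx, cq, (void*)1);
      // ... wait for tag 1 on the CQ before using the stream ...

      EchoRequest req;
      EchoResponse resp;
      req.set_message("hello");

      // One outstanding Read plus one outstanding Write is fine.
      stream->Read(&resp, (void*)2);
      stream->Write(req, (void*)3);

      // Not allowed: a second Write (or Read) before tag 3 (or 2) comes back.
      // stream->Write(req, (void*)4);

      void* tag;
      bool ok;
      while (cq->Next(&tag, &ok) && ok) {
        if (tag == (void*)2) { /* read finished; may start the next Read */ }
        if (tag == (void*)3) { /* write finished; may start the next Write */ }
      }
    }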

On Sunday, March 13, 2022 at 2:06:36 PM UTC-7 Trending Now wrote:

> Any update please ?
>
> On Saturday, March 12, 2022 at 11:55:28 UTC+1, Trending Now wrote:
>
>> Hello
>>
>> Any update please.
>> sorry, it's blocking for me :(
>>
>> Thank you very much !
>>
>> On Friday, March 11, 2022 at 19:17:04 UTC+1, Trending Now wrote:
>>
>>> Hello,
>>>
>>> I'm coding a bidirectional RPC using gRPC's asynchronous API.
>>>
>>> The idea is to write the msg to the grpc::ClientAsyncReaderWriter<W, R> 
>>> stream and then call Read in a while loop until getting a false status.
>>>
>>> If I write to the stream, the program simply crashes. The reason is that 
>>> the asynchronous API allows only “1 outstanding asynchronous write on the 
>>> same side of the same stream without waiting for the completion queue 
>>> notification”.
>>>
>>> Is there a way to force/prioritize the write operation after making a 
>>> read operation?
>>>
>>> Thank you very much
>>>
>>



[grpc-io] Re: Server Threadpool Exhausted

2023-05-31 Thread 'yas...@google.com' via grpc.io
Hi, when you want such granular control over the threading model, it's better 
to use the async model. Currently, the CQ-based async API is the only API that 
can serve this purpose. I would have liked to recommend the callback API along 
with the ability to set your own `EventEngine`, but we don't have that built 
out yet.
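
For reference, the general shape of the CQ-based model with a fixed, 
caller-owned thread pool looks roughly like this. The address, the thread 
count, and the omitted per-RPC state machine are placeholders, not a complete 
server.

    #include <grpcpp/grpcpp.h>

    #include <memory>
    #include <thread>
    #include <vector>

    void RunAsyncServerWithFixedPool() {
      grpc::ServerBuilder builder;
      builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
      // builder.RegisterService(&async_service);  // your generated AsyncService
      std::unique_ptr<grpc::ServerCompletionQueue> cq = builder.AddCompletionQueue();
      std::unique_ptr<grpc::Server> server = builder.BuildAndStart();

      // A fixed pool of 10 threads polls the queue no matter how many clients
      // connect; per-RPC progress is driven by the tags you post.
      std::vector<std::thread> pool;
      for (int i = 0; i < 10; ++i) {
        pool.emplace_back([&cq] {
          void* tag;
          bool ok;
          while (cq->Next(&tag, &ok)) {
            // Dispatch on `tag` here to advance the corresponding RPC.
          }
        });
      }
      // On shutdown, call server->Shutdown() and then cq->Shutdown() so that
      // Next() returns false and the threads can exit.
      for (auto& t : pool) t.join();
    }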

On Sunday, May 1, 2022 at 9:33:49 PM UTC-7 Roshan Chaudhari wrote:

> More context:
> I am using a C++ sync server. Currently, when I have multiple concurrent 
> clients, the number of threads used by the server increases linearly, and it 
> seems each client is served by a separate thread.
>
> The streaming RPC I am using will be idle 90 percent of the time, so data 
> will rarely be sent across it. So the server can keep a minimal number of 
> threads, and multiple client requests can be served by, say, a fixed number 
> of threads. And it is okay if there is some delay in serving the client.
>
> Is it possible to achieve this in a sync server? Or is async the only option?
> On Friday, April 29, 2022 at 12:17:27 PM UTC+5:30 Roshan Chaudhari wrote:
>
>> I have a gRPC sync server with one service and 1 RPC.
>>
>> I am not setting ResourceQuota on the ServerBuilder. If n clients want to 
>> connect, there will be n request handler threads created by gRPC. I want to 
>> keep some limit on these threads, let's say 10. And if it costs some latency 
>> in serving clients, it is okay.
>>
>> So I tried these settings:
>> grpc::ServerBuilder builder;
>> grpc::ResourceQuota rq;
>> rq.SetMaxThreads(10);
>> builder.SetResourceQuota(rq);
>> builder.SetSyncServerOption(
>>     grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
>> builder.SetSyncServerOption(
>>     grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 1);
>> builder.SetSyncServerOption(
>>     grpc::ServerBuilder::SyncServerOption::NUM_CQS, 1);
>>
>> From another process, I am firing up 800 clients in parallel. So I expect 
>> there will be one completion queue shared by all of them and 10 threads 
>> serving it. However, on the client side there is an error:
>>
>> "*Server Threadpool Exhausted*"
>>
>> and none of the client succeeds.
>>
>



[grpc-io] Re: grpc-cpp: Interceptor release plan

2023-05-31 Thread 'yas...@google.com' via grpc.io
Hi, sorry for the late response. We've identified some improvements we want 
to make to the API, hence the delay in stabilizing it. We'll be working on 
this soon, though. Please stay tuned.

On Friday, May 6, 2022 at 1:03:31 AM UTC-7 Luca Dev wrote:

> Dear Maintainer of grpc,
>
> Are there any plans to release the experimental Interceptor interface in 
> the short term (
> https://github.com/grpc/grpc/blob/1d94aa92d883c40abe8b064d79e682f27b432cd3/include/grpcpp/impl/codegen/interceptor.h)?
> It was introduced about 4 years ago and looks very promising!
>
>
> Cheers
> Luca
>



[grpc-io] Re: Progress indicator for a grpc C++ async or callback service?

2023-05-31 Thread 'yas...@google.com' via grpc.io
The gRPC library does not provide any such mechanism built in, but you could 
imagine writing a gRPC service with pause/resume functionality, where it stops 
serving requests or cancels incoming requests until resume is invoked.
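
As a rough illustration of that idea (purely application-level, not a gRPC 
feature; the class name, the helper, and the choice of UNAVAILABLE are all 
illustrative):

    #include <grpcpp/grpcpp.h>
    #include <grpcpp/support/server_callback.h>

    #include <atomic>

    // While paused, each handler finishes the RPC immediately with UNAVAILABLE
    // instead of doing work.
    class PausableService /* : public YourService::CallbackService */ {
     public:
      void Pause() { paused_ = true; }
      void Resume() { paused_ = false; }

     protected:
      // Call at the top of a unary callback handler; returns true if the RPC
      // was rejected because the service is paused.
      bool RejectIfPaused(grpc::ServerUnaryReactor* reactor) {
        if (paused_) {
          reactor->Finish(
              grpc::Status(grpc::StatusCode::UNAVAILABLE, "service paused"));
          return true;
        }
        return false;
      }

     private:
      std::atomic<bool> paused_{false};
    };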

On Thursday, March 24, 2022 at 4:15:14 AM UTC-7 Iro Karyoti wrote:

> Can a gRPC C++ client request a pause/resume of an async or callback 
> service?



[grpc-io] gRPC java: Provide ScheduledExecutorService setter in NettyChannelBuilder

2023-05-31 Thread dan...@gmail.com
Hello,

I have observed that the ScheduledExecutorService utilized for setting up 
deadlines and exiting idle mode is obtained by calling 
NettyClientTransport.getScheduledExecutorService, which returns an 
EventLoopGroup instance that was previously set in 
NettyChannelBuilder.eventLoopGroup().

Is there a specific rationale behind utilizing the same EventLoopGroup 
instance for executing IO tasks and as a ScheduledExecutorService?

I have identified two disadvantages associated with this approach:

1. Developers relinquish control over EventLoop-to-subchannel assignment, 
potentially leading to multiple subchannels sharing the same EventLoop.
2. The deadline task may end up in the same slow EventLoop, which 
subsequently requires its cancellation.

To tackle these issues, I propose the addition of a setter method for a 
custom ScheduledExecutorService within the NettyChannelBuilder. This 
enhancement would enable the separate scheduling of tasks while exclusively 
utilizing the EventLoopGroup for IO operations.

Best regards,
Dan



[grpc-io] Re: how to check if server is up/down from a client

2023-05-31 Thread 'yas...@google.com' via grpc.io
Sorry for the late reply.

From what I'm reading, health checking is exactly what you want; I don't 
understand why you don't want to use it:
https://github.com/grpc/grpc/blob/master/doc/health-checking.md

About using the channel state: just because a channel is not in the connected 
state does not necessarily mean that the server is down.
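
For example, with the stubs generated from health/v1/health.proto (the 
generated header path below depends on how you build it), a client-side check 
could look roughly like this; a failed or timed-out RPC is simply treated as 
"not known to be serving".

    #include <grpcpp/grpcpp.h>
    #include "src/proto/grpc/health/v1/health.grpc.pb.h"  // path depends on your build

    #include <chrono>
    #include <memory>

    bool IsServing(const std::shared_ptr<grpc::Channel>& channel) {
      auto stub = grpc::health::v1::Health::NewStub(channel);
      grpc::ClientContext ctx;
      ctx.set_deadline(std::chrono::system_clock::now() + std::chrono::seconds(2));
      grpc::health::v1::HealthCheckRequest req;   // empty service name = whole server
      grpc::health::v1::HealthCheckResponse resp;
      grpc::Status status = stub->Check(&ctx, req, &resp);
      return status.ok() &&
             resp.status() == grpc::health::v1::HealthCheckResponse::SERVING;
    }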

On Wednesday, December 8, 2021 at 2:13:25 PM UTC-8 Viktor Khristenko wrote:

> Hello,
>
> Setup:
> Client, server using callback unary api
>
> question:
> How do I check from the client side whether the server is up or down? What 
> I'm currently doing is issuing an RPC with a deadline set and wait_for_ready 
> set to false. If the return code shows UNAVAILABLE, then the server is not 
> there; otherwise it needs a retry...
>
> Here it's not about the health check service that could be used, but rather 
> about mechanisms to check either through the channel or the stub (issuing an 
> RPC). I was also trying to query the channel state, however it's not quite 
> clear what indicates an unavailable server (using grpc_connectivity_state)...
>
> The use case is that I have a client connected to N servers (1 channel per 
> server), doing some simple load balancing with priorities. This client is 
> actually another server.
>
> any help is greatly appreciated!
> thanks!
>
> VK
>
>



[grpc-io] Re: relaying rpcs Calls in grpc C++

2023-05-31 Thread 'yas...@google.com' via grpc.io
The generic APIs are what you are looking for:
https://github.com/grpc/grpc/blob/2892b24eabbb22b2344aba9c3ba84e529017b684/include/grpcpp/generic/generic_stub.h#L114

I don't have an exact example for you, but you could use this as a reference 
for the generic APIs:
https://github.com/grpc/grpc/blob/2892b24eabbb22b2344aba9c3ba84e529017b684/test/cpp/end2end/client_callback_end2end_test.cc#L270
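
As a very rough client-side sketch, forwarding one unary call as raw bytes 
with grpc::GenericStub's callback UnaryCall could look like the following; the 
server side of a proxy (accepting calls for unknown methods via the generic 
service registration) is not shown.

    #include <grpcpp/generic/generic_stub.h>
    #include <grpcpp/grpcpp.h>

    #include <functional>
    #include <memory>
    #include <string>

    // `method` is the full method name, e.g. "/pkg.Service/Method".
    void ForwardUnary(grpc::GenericStub* backend, const std::string& method,
                      const grpc::ByteBuffer& request,
                      std::function<void(grpc::Status, grpc::ByteBuffer)> done) {
      auto ctx = std::make_shared<grpc::ClientContext>();
      auto req = std::make_shared<grpc::ByteBuffer>(request);  // keep alive
      auto resp = std::make_shared<grpc::ByteBuffer>();
      backend->UnaryCall(ctx.get(), method, grpc::StubOptions(), req.get(),
                         resp.get(),
                         [ctx, req, resp, done](grpc::Status status) {
                           done(std::move(status), std::move(*resp));
                         });
    }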

On Thursday, March 2, 2023 at 2:30:37 AM UTC-8 Anil Kumar wrote:

> Can someone please reply ?
>
> On Tuesday, February 21, 2023 at 4:52:19 PM UTC+5:30 Anil Kumar wrote:
>
>> My question is very similar to 
>> https://groups.google.com/g/grpc-io/c/Yruej18KJ_M/m/oGp5vYocCgAJ
>>
>> I want to implement a service-agnostic gRPC proxy in C++, which forwards 
>> the request from the client to the server and forwards the response back 
>> from the server to the client.
>>
>> Is there a generic way to do so?
>>
>> I see an example mentioned in the linked thread.
>>
>> How do I achieve the same in C++? I am unable to find the C++ equivalents 
>> of ServerCallHandler, ServerCall.Listener, and ClientCallListener.
>>
>



[grpc-io] Re: pure virtual method called. terminate called without an active exception

2023-05-31 Thread yas...@gmail.com
Our tests already run under sanitizers (note that sanitizers on some 
platforms have been known to have false positives), so if I assume that there 
is no bug in gRPC, my first guess would be to check your usage of the API. I 
don't see your code, but please take a look at our examples. Maybe this would 
help:
https://github.com/grpc/grpc/blob/2892b24eabbb22b2344aba9c3ba84e529017b684/examples/cpp/interceptors/server.cc#L87

On Thursday, 25 May 2023 at 22:48:16 UTC-7 karthik karra wrote:

> Hi All,
>
> I am using the reactor bidi APIs. When run normally, I am not facing this 
> error, but when I try to run the client with Valgrind, this error shows up.
>
> any suggestions would be helpful.
>
> thanks,
> karthik
>



[grpc-io] Re: maximum concurrent streams in cpp

2023-05-31 Thread 'yas...@google.com' via grpc.io
I don't think that you are running into a limit from max concurrent streams. 
If you haven't explicitly set a limit of 15, you are not getting limited by 
that arg.

What are the symptoms that you are seeing? If it is simply a case of only 15 
RPCs being served concurrently, I suspect that the issue you are running into 
is that your threads are blocked and hence not able to serve/poll other RPCs.

On Monday, May 15, 2023 at 2:17:18 AM UTC-7 karthik karra wrote:

> I also tried using different channels for each client. Nothing worked.
>
> On Monday, May 15, 2023 at 2:45:14 PM UTC+5:30 karthik karra wrote:
>
>> I tried this call, but it was of no use:
>> server_builder.AddChannelArgument(GRPC_ARG_MAX_CONCURRENT_STREAMS, 30);
>>
>> On Monday, May 15, 2023 at 2:04:04 PM UTC+5:30 karthik karra wrote:
>>
>>> Hi All, 
>>>
>>> Currently I am getting a max of 15 streams to the server from 2 clients.
>>> How do I set the max concurrent streams?
>>> Do I need to create a new channel, or should I increase the max 
>>> concurrent streams?
>>>
>>> Any suggestions would be helpful.
>>>
>>> Thanks,
>>> Karthik
>>>
>>



[grpc-io] Re: C++: AsyncWrite constraint on completion queue

2023-05-31 Thread 'yas...@google.com' via grpc.io
You can find some examples here:
https://github.com/grpc/grpc/tree/master/examples/cpp

The documentation is best found in the headers:
https://github.com/grpc/grpc/blob/master/include/grpcpp/support/client_callback.h
https://github.com/grpc/grpc/blob/master/include/grpcpp/support/server_callback.h

Also see https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md 
for some additional reading.
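
For a flavor of the callback API on a bidi stream, a minimal client reactor 
looks roughly like this (the `Echo` service and `BidiEcho` method are 
hypothetical stand-ins for your generated code):

    #include <grpcpp/grpcpp.h>
    #include <grpcpp/support/client_callback.h>
    #include "echo.grpc.pb.h"  // hypothetical generated header

    // One in-flight Write and one in-flight Read; the next Write/Read is only
    // started from the corresponding completion callback.
    class BidiEchoClient
        : public grpc::ClientBidiReactor<EchoRequest, EchoResponse> {
     public:
      explicit BidiEchoClient(Echo::Stub* stub) {
        stub->async()->BidiEcho(&ctx_, this);
        req_.set_message("hello");
        StartWrite(&req_);
        StartRead(&resp_);
        StartCall();
      }
      void OnWriteDone(bool ok) override {
        // Safe place to StartWrite() the next message, or StartWritesDone().
      }
      void OnReadDone(bool ok) override {
        if (ok) StartRead(&resp_);  // keep reading until the stream ends
      }
      void OnDone(const grpc::Status& status) override {
        // RPC finished; inspect `status` and release the reactor here.
      }

     private:
      grpc::ClientContext ctx_;
      EchoRequest req_;
      EchoResponse resp_;
    };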

On Thursday, May 18, 2023 at 12:15:47 PM UTC-7 Ashutosh Maheshwari wrote:

> Hello Yash,
>
> Can you please point me to the documentation of the Callback API?
>
> Regards
> Ashutosh
>
>
> On Wednesday, May 17, 2023 at 6:54:25 AM UTC+5:30 yas...@google.com wrote:
>
> I'll preface this by saying - Use the C++ callback API. Instead of trying 
> to understand the Async CQ-based API, the callback API should be the choice 
> and is our current recommendation.
>
> >  Only one write is permissible per stream. So we cannot write another 
> tag on a stream until we receive a response tag from the completion queue 
> for the previous write.
>
> This is correct.
>
> I'll end this by again saying - Use the C++ callback API.
>
> > Recently,  I came across an issue where the gRPC client became a zombie 
> process as its parent Python application was aborted. In this condition, 
> the previous Write done on the stream connected with the client did not get 
> ack, probably,  and I did not receive the Write tag back in the completion 
> queue for that Write. My program kept waiting for the write tag and other 
> messages continued to queue up as the previous Write did not finish its 
> life cycle and hence I could not free the resources also for that tag.
>
> This can be easily avoided by configuring keepalive. Refer -
> 1) https://github.com/grpc/grpc/blob/master/doc/keepalive.md
> 2) https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md
> 3) 
> https://github.com/grpc/proposal/blob/master/A8-client-side-keepalive.md
>
> That also answers your question on what happens if for some reason, a 
> client stops reading. Keepalive would handle it.
>
> > My question is, if a write tag for a previous write does not surface on 
> the completion queue, shall we wait for it indefinitely? What should be the 
> strategy to handle this scenario?
> Depends highly on your API/service. If for some reason, the RPC is taking 
> much longer than you want and you are suspecting that the client is being 
> problematic (i.e. responding to http keepalives but not making progress on 
> RPCs), you could always just end the RPC.
>
> On Wednesday, May 10, 2023 at 12:17:46 AM UTC-7 Ashutosh Maheshwari wrote:
>
> Hello,
>
> My question is, if a write tag for a previous write does not surface on 
> the completion queue, shall we wait for it indefinitely? What should be the 
> strategy to handle this scenario?
>
> Regards
> Ashutosh
> On Wednesday, April 26, 2023 at 11:11:57 PM UTC+5:30 apo...@google.com 
> wrote:
>
> First, it's important to clarify what it means to wait for a "Write" tag 
> to complete on a completion queue:
>
> When an async "Write" is initially attempted, the message can be fully or 
> partially buffered within gRPC. The corresponding tag will surface on the 
> completion queue that the Write is associated with essentially after gRPC 
> is done buffering the message, i.e. after it has written out the relevant 
> bytes to the wire.
>
> This is unrelated to whether or not a "response" has been received from 
> the peer, on the same stream.
>
> So, the highlighted comment means that you can only have one async write 
> "pending" per RPC at any given time. I.e., in order to start a new write on 
> a streaming RPC, one must wait for the previous write on that same stream 
> to "complete" (i.e. for its tag to be surfaced).
>
> Multiple pending writes on different RPCs of the same completion queue are 
> fine.
> On Saturday, April 22, 2023 at 12:58:57 PM UTC-7 Ashutosh Maheshwari wrote:
>
> Hello gRPC Team,
>
> I have taken an extract from 
> *“include/grpcpp/impl/codegen/async_stream.h”*:
>
>   /// Request the writing of \a msg with identifying tag \a tag.
>   ///
>   /// Only one write may be outstanding at any given time. This means that
>   /// after calling Write, one must wait to receive \a tag from the completion
>   /// queue BEFORE calling Write again.
>   /// This is thread-safe with respect to \a AsyncReaderInterface::Read
>   ///
>   /// gRPC doesn't take ownership or a reference to \a msg, so it is safe
>   /// to deallocate once Write returns.
>   ///
>   /// \param[in] msg The message to be written.
>   /// \param[in] tag The tag identifying the operation.
>   virtual void Write(const W& msg, void* tag) = 0;
>
> After reading the highlighted part, I can make the following two 
> inferences:
>
>    1. Only one write is permissible per stream. So we cannot write 
>    another tag on a stream until we receive a response tag from the 
>    completion queue for the previous write.

[grpc-io] Re: gRPC for the R programming language

2023-05-31 Thread Udit Ranasaria
My company is also very much interested in using gRPC for microservices 
with R code in it!
On Wednesday, February 8, 2023 at 10:17:51 PM UTC-8 Jan Krynauw wrote:

> Not sure whether this is allowed, but we are willing to reward anyone able 
> to look into this: https://www.upwork.com/jobs/~01699f5b31ffbebb9b
>
> On Thursday, 9 February 2023 at 08:05:34 UTC+2 Jan Krynauw wrote:
>
>> Agree on this!
>>
>> We deal quite a bit with Financial Analysts, Actuaries and Accountants, 
>> and the world of R scripts is massive. Moving traditional Python scripts 
>> to proto-defined services implemented using gRPC has been amazing. It would 
>> be incredible to be able to transform the world of R into this pattern as 
>> well.
>>
>> On Wednesday, 8 February 2023 at 12:55:23 UTC+2 Sanjit Rath wrote:
>>
>>> I came across this thread while searching for R support for gRPC. We are 
>>> looking for ways to integrate R & Python, and Arrow libraries in between. I 
>>> am good at C++ as well and have been through the gRPC stub generation code. 
>>> Can someone from grpc.io please guide me here? I would be happy to get 
>>> involved and support the R port of the gRPC libraries.
>>>
>>> On Monday, 28 November 2022 at 16:37:36 UTC+4 Ruan Spies wrote:
>>>
 Such a delight to see this thread! We've also recently developed a need 
 for this in R and are also happy to get involved and pull in some of our 
 team as well.

 We operate primarily in the financial services space. Quite a lot of the 
 people that we speak to, primarily actuaries, are big on R. We (Alis 
 Exchange) aim to empower business teams to build their own Cloud Native 
 services, rather than having to go through IT departments, by building out 
 various utilities on top of Protocol Buffers and GCP. We are all in on gRPC 
 and believe that it is the best way to design and build, but we and our 
 clients are unable to truly use it.

 On the *client side*, a workaround has been to simply do HTTP transcoding 
 and let people hit the HTTP endpoint, rather than using gRPC. With this, you 
 obviously lose the type definitions that are extremely valuable.

 The bigger need is definitely being able to *implement a server* in R. 
 We want to enable business people to take their R scripts and formally make 
 them available as services, rather than just code on local devices.

 We have explored some of the packages people have built, but for anyone 
 who has extensively used gRPC in supported languages, the developer 
 experience is horrible - something that is officially supported, or at 
 least has proper community support, would be of great value.

 We really see a HUGE opportunity in helping businesses convert their R 
 scripts into proper APIs using gRPC and think that we will be able to drive 
 adoption of this in the financial industry.


 On Wednesday, 16 November 2022 at 00:10:14 UTC+2 Jim Sheldon wrote:

> Thanks for replying so quickly!
>
> R support would fill a need in the community I support (epi). Happy to 
> discuss more, right now in the prototyping and planning phase 
> (distributed 
> pathogen tracking system).
>
> And thanks for sharing those resources! I will need to take a closer look, 
> and am happy to both get my hands dirty and get any additional guidance 
> that's easily provided/readily available.
>
> Jim
>
>
> On Tuesday, November 15, 2022 at 1:20:39 PM UTC-5 rbel...@google.com 
> wrote:
>
>> The core gRPC team does not currently have plans to extend support to R. 
>> Frankly, this is the first request I've heard for R support. If there is a 
>> need here, we'd love to hear more about it.
>>
>> With that said, gRPC is an open protocol and the gRPC Core codebase is 
>> open source. The C++ Core API is designed specifically for use with 
>> foreign function interfaces like R's. This is how we implemented Python, 
>> Ruby, PHP, etc. My gut says that getting a basic client working is about 
>> the size of a weekend project. We'd be happy to give you (or anyone else) 
>> the guidance you'd need to get that off the ground.
>>
>> Thanks,
>> Richard Belleville
>> gRPC Team
>>
>> On Tuesday, November 15, 2022 at 4:50:03 AM UTC-8 Jim Sheldon wrote:
>>
>>> Hello!
>>>
>>> Are there plans to add gRPC to the R language?
>>> It would help coordinate work between software and science.
>>>
>>> Thanks,
>>> Jim
>>>
>>>
>>>
