I'm not sure I fully understand the question "how can the clients establish
connectivity to these new pods". This use case seems specific to a cloud
provider, and I don't have more context than you do.

Whether clients connect to the backends directly or go through an in-line
load balancer is more of a business-logic decision. If one client needs a
connection to every backend in order to function, the scalability of the
backends will be limited. Based on the information provided, if I were you,
I would rethink how the data is organized and consider sharding it
properly, e.g., by using a lightweight in-memory database to handle the
load, as in the sketch below.
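For example, here is a minimal Python sketch of that idea, assuming a
shared Redis instance is available (all names here are hypothetical): each
server pod publishes the records it reads from its Kinesis shards to a
per-device Redis channel, so whichever pod holds a client's gRPC stream can
serve that client regardless of which pod owns the shard.

    # Hypothetical sketch, not part of gRPC: fan records out through a
    # shared in-memory store so clients can connect to any pod.
    import redis

    r = redis.Redis(host="redis.internal", port=6379)  # assumed shared instance

    def on_kinesis_record(device_id: str, payload: bytes) -> None:
        # Called by the Kinesis shard reader in whichever pod owns the shard.
        r.publish(f"sensor:{device_id}", payload)

    def stream_for_client(device_ids):
        # Runs in whichever pod terminates the client's gRPC stream; yields
        # payloads that the server-streaming RPC can forward to the client.
        pubsub = r.pubsub()
        pubsub.subscribe(*[f"sensor:{d}" for d in device_ids])
        for message in pubsub.listen():
            if message["type"] == "message":
                yield message["data"]

With something like this in place, a plain L4 load balancer in front of the
pods would be enough, since any pod could serve any client.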



On Thu, Feb 25, 2021 at 3:34 PM Mahadevan Krishnan <krish...@gmail.com>
wrote:

> Hi Lidi,
>
> Thanks for taking the time to respond. We have been sending keepalive
> pings every 30 seconds (even when there is no data) to keep the channel alive.
>
> I had one more follow-up question, in case you have any ideas on it.
>
> Currently the gRPC server runs on a Kubernetes pod, streaming data out to
> clients. The server pod is responsible for reading sensor data from AWS
> Kinesis and pushing it out to our customers. The data is sharded by device
> ID, so when I start more instances of the gRPC server pod, the load is
> distributed across the pods, with each pod reading a few shards from AWS
> Kinesis. But now the question is: how can the clients establish
> connectivity to these new pods? Otherwise they will miss the data
> retrieved from those shards.
>
> Will clients need to open connections to each of these pods individually,
> or can we use a load balancer? The challenge with a load balancer is that
> a client has to receive data from all of the underlying servers, not just
> one server at a time.
>
> Regards,
> Mahadevan
>
> On Fri, Feb 12, 2021 at 12:43 PM Lidi Zheng <li...@google.com> wrote:
>
>> Hi Mahadevan,
>>
>> Thanks for using gRPC!
>>
>> Based on the description, long-lived connections getting dropped
>> frequently could be caused by TCP socket timeouts.
>>
>> You can refer to our document about how to set keepalive pings:
>> https://github.com/grpc/grpc/blob/master/doc/keepalive.md
>> How to set channel arguments:
>> https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_client_with_options.py
>> The list of available channel arguments:
>> https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/grpc_types.h#L138
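>>
>> For example (an untested sketch; the endpoint name is a placeholder),
>> those channel arguments can be set on a Python client like this:
>>
>>     import grpc
>>
>>     # Send a keepalive ping every 30 seconds, even when no RPC is
>>     # active, and drop the connection if a ping is not acknowledged
>>     # within 10 seconds.
>>     options = [
>>         ("grpc.keepalive_time_ms", 30000),
>>         ("grpc.keepalive_timeout_ms", 10000),
>>         ("grpc.keepalive_permit_without_calls", 1),
>>     ]
>>     channel = grpc.secure_channel(
>>         "example.com:443", grpc.ssl_channel_credentials(), options=options
>>     )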
>>
>> I'm not sure what "the maximum size of channels" means. Channels share
>> the underlying TCP sockets when possible. If you mean the maximum number
>> of TCP sockets, gRPC doesn't have such a limitation; please check your
>> OS's network settings and network devices. But as mentioned, I suspect
>> this could be improved by using keepalive pings.
>>
>> Best,
>> Lidi Zheng
>>
>>
>> On Fri, Feb 12, 2021 at 9:55 AM Mahadevan Krishnan <krish...@gmail.com>
>> wrote:
>>
>>> Hi Lidi,
>>>
>>> Sorry to reach out to you directly through email. I saw one of your
>>> gRPC presentations on YouTube about flow control, so I was thinking you
>>> would be able to help us figure out what we might be doing wrong. We
>>> are new to gRPC.
>>>
>>> Regards,
>>> Mahadevan
>>>
>>> ---------- Forwarded message ---------
>>> From: Mahadevan Krishnan <Unknown>
>>> Date: Wednesday, 10 February 2021 at 18:53:41 UTC-6
>>> Subject: Channel getting dropped
>>> To: grpc.io <Unknown>
>>>
>>>
>>>
>>> We have been using a server-streaming application written in Java with
>>> gRPC, maintaining long-lived connections over which we stream data to
>>> our clients based on data collected from sensors. We have been seeing
>>> channels getting dropped more and more frequently: the connection drops
>>> and we have to make the client re-establish it even though there is no
>>> network blip. We wanted to understand whether there is a maximum size
>>> for a channel and, if so, whether there is a way to increase it so that
>>> we do not lose the channel and the messages in it that still need to be
>>> processed. Any help on this is highly appreciated.
>>>
>>>
