Thank you, John, for mentioning the liveness probe. This is something I
hadn't thought about.

Regards,
Pushpendra

On Fri, Aug 7, 2020 at 8:50 AM John Sanda <john.sa...@gmail.com> wrote:

> It is worth mentioning that depending on how you configure your client
> application deployment, you should not have to worry about bouncing the
> driver. If you add a liveness probe for your client deployment that relies
> on the driver being able to connect to the cluster, then kubernetes will
> restart the client container when the liveness probe fails. And if you
> configure the driver to connect via the headless service, you will get the
> updated endpoints.
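A minimal sketch of such a probe for the client deployment, assuming the client image ships a small connectivity-check command (the command name and path here are hypothetical stand-ins for whatever health check your client exposes that exercises the driver's connection):

```yaml
# Hypothetical liveness probe on the client container. When the check
# command fails (driver cannot reach the cluster), kubernetes restarts
# the container and the driver re-resolves the service on startup.
livenessProbe:
  exec:
    command: ["/app/bin/check-cassandra-connection"]
  initialDelaySeconds: 30
  periodSeconds: 15
  failureThreshold: 3
```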
>
> On Thu, Aug 6, 2020 at 11:00 PM John Sanda <john.sa...@gmail.com> wrote:
>
>> Hi Pushpendra
>>
>> You should use the headless service, e.g.,
>>
>> // Note that this code snippet is using v3.x of the driver.
>> // Assume the service is deployed in namespace dev and is
>> // named cassandra-service. The FQDN of the service would then
>> // be cassandra-service.dev.svc.cluster.local. If your client
>> // is deployed in the same namespace, you can reach it simply as
>> // cassandra-service, without the rest of the FQDN.
>>
>> Cluster cluster = Cluster.builder()
>>     .addContactPoint(headlessService)
>>     .build();
>> Session session = cluster.connect();
>>
>> A headless service will resolve to multiple endpoints. The exact
>> endpoints to which it maps will be determined by the label selector you use
>> for your service. You can check this with:
>>
>> $ kubectl get endpoints
>>
>> The service will update the endpoints with any IP address changes. The
>> addContactPoint method calls the following:
>>
>> addContactPoints(InetAddress.getAllByName(headlessService));
>>
>> getAllByName will return all of the endpoints. If the entire C* cluster
>> goes down, you will need to bounce the driver.
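To sketch what "bouncing the driver" amounts to: after a full outage you re-resolve the service name, which returns the current endpoint IPs, and rebuild the Cluster from them. The sketch below uses only the JDK and resolves localhost purely for illustration; the class and method names are mine, not part of the driver, and in-cluster you would pass the service FQDN instead.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveEndpoints {
    // Re-resolve the headless service name. Each call performs a fresh
    // DNS lookup, so after a full outage it returns the new pod IPs.
    // You would then pass these to Cluster.builder().addContactPoints(...)
    // and rebuild the Cluster object.
    static InetAddress[] resolve(String serviceName) throws UnknownHostException {
        return InetAddress.getAllByName(serviceName);
    }

    public static void main(String[] args) throws Exception {
        // localhost is used here only so the snippet runs anywhere;
        // in-cluster you would pass e.g.
        // "cassandra-service.dev.svc.cluster.local".
        InetAddress[] addrs = resolve("localhost");
        System.out.println("resolved " + addrs.length + " endpoint(s)");
    }
}
```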
>>
>> Cheers
>>
>> John
>>
>> On Thu, Aug 6, 2020 at 4:47 AM Pushpendra Rajpoot <
>> pushpendra.nh.rajp...@gmail.com> wrote:
>>
>>>
>>> We have created a statefulset & headless service to deploy Cassandra in
>>> Kubernetes. Our client is also in the same Kubernetes cluster. We have
>>> identified two ways by which we can find contact point for driver in client
>>> application:
>>>
>>>    1. Use 'cassandra-headless-service-name' as contactPoints
>>>    2. Fetch the IPs of the pods from the headless service & externalize
>>>     them. Read these IPs as contact points when initializing the connection.
>>>
>>>
>>> So far so good. The above will work if one or some pods are restarted and
>>> their IPs change. In this case, the driver will pick up the new IPs
>>> automatically.
>>>
>>> How will this work in case of a complete outage (all Cassandra pods down)?
>>> If all the pods are down and they come back online with different IPs (IPs
>>> can change in Kubernetes), how will the application connect to Cassandra?
>>>
>>> What is the best way to connect to a Cassandra cluster in Kubernetes from
>>> a client running in the same cluster?
>>>
>>>
>>> Regards,
>>>
>>> Pushpendra
>>>
>>>
>>
>> --
>>
>> - John
>>
>
>
> --
>
> - John
>
