Feel free to search the documentation for the default values.
But in my experience it was never an issue (we never run just one replica of
anything, though, so a single node going down is pretty tolerable).
On Monday, August 6, 2018, Niranjan Kolly wrote:
Hi MR,
What about applications that are running (like nginx)? If the node
goes down, how quickly would the master spin up those pods on the available
nodes (for load distribution)?
How do we tweak the timeout in the controller manager?
Thanks,
Niranjan
On Mon, Aug 6, 2018 at 1:12 PM, Matthias Rampke wrote:
It takes a few minutes to declare a node lost; this is configurable via
kube-controller-manager flags.
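For reference, these are the flags involved, shown here with their defaults as of the 1.x releases current at the time (a sketch, not a recommended configuration; check the flag reference for your version before changing anything):

```shell
# Node-loss detection and pod eviction timing (defaults shown).
# Shortening these makes a dead node noticed, and its pods rescheduled, sooner,
# at the cost of more churn during transient network blips.
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
```

With the defaults, a node is marked NotReady after ~40 seconds of missed heartbeats, and its pods are evicted and rescheduled roughly 5 minutes after that.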
There are a few things you can declare on a pod that prevent replicas from
being scheduled on the same node, such as a hostPort.
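Besides hostPort, pod anti-affinity expresses the spreading requirement directly. A minimal pod-spec fragment, assuming the pods carry a label like `app: kong` (the label is a placeholder for whatever selector you use):

```yaml
# Sketch: force replicas matching "app: kong" onto different nodes.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: kong
          topologyKey: kubernetes.io/hostname
```

Using `preferredDuringSchedulingIgnoredDuringExecution` instead makes the spread best-effort, so scheduling still succeeds when there are fewer nodes than replicas.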
How will Cassandra react when a pod disappears and another one appears (th
Hi,
I have a K8s cluster with 3 masters and 3 slaves on CentOS VMs.
We have installed Kong and Cassandra with 3 replicas each. As part of the
resiliency testing we brought down one node, but "kubectl get pods"
still shows the pods that were running on that node.
As a K8s feature the master sho