On Fri, Oct 27, 2017 at 1:34 PM, David Rosenstrauch wrote:
> Was speaking to our admin here, and he offered that running a health check
> container inside the same pod might work. Anyone agree that that would be a
> good (or even preferred) approach?
Not sure what you mean, but IIUC, it won't get the pod rescheduled; a
health check inside the pod can only trigger container restarts in place.
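
For what it's worth, the closest built-in form of that idea is a
livenessProbe on the container itself rather than a separate health-check
container; as the reply above notes, though, a failing probe only makes the
kubelet restart the container in place, it never reschedules the pod. A
minimal sketch, with all names hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: probed-app              # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
    livenessProbe:              # the kubelet restarts this container in
      httpGet:                  # place when the probe keeps failing
        path: /healthz          # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
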
On Fri, Oct 27, 2017 at 2:17 PM, David Rosenstrauch wrote:
I'm trying to make sure that as I'm deploying new services on our
cluster, failures/restarts get handled in a way that's most optimal
for resiliency/uptime.
I'm simplifying things a bit, but if a piece of code running inside a
container crashes, there are more or less 2 possibilities: 1) the
container gets restarted in place on the same machine, or 2) the pod
terminates and a replacement gets scheduled, possibly on a different
machine.
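
To make the second possibility concrete: only a pod whose restartPolicy is
not Always will terminate when its container exits, and a ReplicaSet will
not accept such a template (a Job or a bare pod will). A sketch under those
assumptions, names hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: run-once-app           # hypothetical name
spec:
  restartPolicy: Never         # pod terminates when the container exits;
  containers:                  # a ReplicaSet template cannot use this
  - name: app
    image: example/app:latest  # hypothetical image
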
On Fri, Oct 27, 2017 at 1:51 PM, David Rosenstrauch wrote:
> Well restarting the pod actually does have a better chance of fixing
> whatever the issue is, rather than just restarting the container inside
> of it. The pod might very well get restarted on a different machine. If
> the machine the pod is running on is either down or hurting, then just
> restarting the container in place won't fix anything.
If the machine is down, k8s will automatically move all pods off of it, anyway.
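
As a footnote on "automatically": in current Kubernetes the delay before
pods are evicted from a not-ready or unreachable node is controlled by
per-pod NoExecute tolerations, which default to 300 seconds. A hedged
sketch of tightening that window (the same fields work inside a ReplicaSet
pod template; names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: fast-failover-app      # hypothetical name
spec:
  tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 30      # evict after 30s instead of the 300s default
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 30
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
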
I don't think it is configurable.
But I don't really see what you are trying to solve; maybe there is another
way to achieve it? If you are running a pod with a single container, what is
the problem with the container being restarted when appropriate, instead of
the whole pod? I mean, you would need the whole pod to be torn down and
rescheduled just to restart one container.
What Rodrigo said - what problem are you trying to solve?
The pod lifecycle is defined as restart-in-place, today. Nothing you
can do inside your pod, except deleting it from the apiserver, will do
what you're asking. It doesn't seem too far-fetched that a pod could
exit and "ask for a different node", but that isn't supported today.
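
To make "deleting it from the apiserver" concrete: a container can delete
its own pod object when it detects a failure, after which the ReplicaSet
creates a replacement that may land on a different node. A hedged sketch,
assuming a hypothetical ServiceAccount pod-deleter bound to a Role that
permits deleting pods, and an image that ships kubectl:

apiVersion: v1
kind: Pod
metadata:
  name: self-deleting-app         # hypothetical name
spec:
  serviceAccountName: pod-deleter # hypothetical; needs RBAC to delete pods
  containers:
  - name: app
    image: example/app:latest     # hypothetical image containing kubectl
    env:
    - name: POD_NAME              # downward API: this pod's own name
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    command: ["/bin/sh", "-c"]
    args:
    - |
      # Run the real workload; on a non-zero exit, delete this pod from
      # the apiserver instead of letting the kubelet restart it in place.
      /app/server || kubectl delete pod "$POD_NAME"
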
Was speaking to our admin here, and he offered that running a health
check container inside the same pod might work. Anyone agree that that
would be a good (or even preferred) approach?
Thanks,
DR
On 2017-10-27 11:41 am, David Rosenstrauch wrote:
I have a pod which runs a single container. The pod is being run under
a ReplicaSet (which starts a new pod to replace a pod that's
terminated).
What I'm seeing is that when the container within that pod terminates,
instead of the pod terminating too, the pod stays alive and just
restarts the container.
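
That behavior falls out of the ReplicaSet spec: a ReplicaSet's pod template
only admits restartPolicy: Always, so when the container crashes the
kubelet restarts it in place and the pod object itself stays alive. A
minimal sketch of the setup described above, with hypothetical names:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: single-container-app      # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: single-container-app
  template:
    metadata:
      labels:
        app: single-container-app
    spec:
      restartPolicy: Always       # the only value a ReplicaSet allows
      containers:
      - name: app
        image: example/app:latest # hypothetical image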