The short answer is that this will be fixed in Kubernetes 1.6. Until then, the 
workaround is to manually delete the failed pod so that the DaemonSet 
controller can recreate it.

Because the DaemonSet controller doesn't schedule pods through the scheduler 
<https://kubernetes.io/docs/admin/daemons/#how-daemon-pods-are-scheduled>, 
if one of its pods is created on a node but then rejected by the kubelet, the 
pod is marked Failed and won't be rescheduled. In 1.6, the DaemonSet 
controller will actively kill those failed pods so that they can be recreated 
(see PR #40330 <https://github.com/kubernetes/kubernetes/pull/40330>). 
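If you hit this often, the manual cleanup can be scripted. Here's a rough 
sketch that deletes every pod stuck in the Evicted state so the DaemonSet 
controller recreates them; it assumes kubectl is on your PATH and configured 
for the right cluster (the function name and the --show-all flag usage are 
just illustrative, adjust for your setup):

```shell
# delete_evicted: find pods whose STATUS column reads "Evicted" and delete
# them. With --all-namespaces the columns are NAMESPACE NAME READY STATUS ...,
# so STATUS is field 4. --show-all is needed (pre-1.10) to list non-running
# pods.
delete_evicted() {
  kubectl get pods --all-namespaces --show-all |
    awk '$4 == "Evicted" {print $1, $2}' |
    while read -r ns pod; do
      kubectl delete pod "$pod" --namespace "$ns"
    done
}
```

Deleting the Evicted pod is safe here: the DaemonSet controller notices the 
pod is gone and creates a fresh one on the node (which will succeed now that 
the node is out of DiskPressure).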


On Tuesday, February 28, 2017 at 4:01:28 PM UTC-8, Nate Rook wrote:
>
> Recently one of my nodes ran out of disk space, because it had too many 
> images on it. It went into DiskPressure mode, garbage collected some 
> images, then left DiskPressure mode and started admitting pods again. This 
> is fine.
>
> At the same time, I updated a DaemonSet to use a new image, and killed all 
> its pods in order to coerce the DaemonSet into recreating them with the new 
> image. The DaemonSet created some new pods, but the one on my disk-pressure 
> node failed, with a reason of Evicted, and this message:
>
> Message:        Pod The node was low on resource: [DiskPressure].
>
> This all makes sense, too. What's confusing to me, however, is that the 
> pod never got rescheduled. I would have expected the DaemonSet to delete 
> the pod and try creating it again. Is this expected behavior? If it is, is 
> there any way to get the pod to automatically be recreated instead?
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.