Issue Reported:
https://github.com/openshift/origin/issues/16160
On Sep 5, 2017, at 2:51 PM, Mateus Caruccio wrote:
Would you mind posting the issue link here so I can keep up on it? I'm
seeing some errors like those too.
--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017
2017-09-05 18:28 GMT-03:00 Clayton Coleman:
Please open a bug in openshift/origin and we'll triage it there.
On Tue, Sep 5, 2017 at 5:14 PM, Patrick Tescher wrote:
The pods are still “terminating” and have been stuck in that state. New pods
have come and gone since then but the stuck ones are still stuck.
On Sep 5, 2017, at 2:13 PM, Clayton Coleman wrote:
So the errors recur continuously for a given pod once they start happening?
On Tue, Sep 5, 2017 at 5:07 PM, Patrick Tescher wrote:
No patches have been applied since we upgraded to 3.6.0 over a week ago. The
errors just popped up for a few different pods in different namespaces. The
only thing we did today was launch a stateful set in a new namespace. Those
pods were not the ones throwing this error.
On Sep 5, 2017, at
Were any patches applied to the system? Some of these are normal if they
happen for a brief period of time. Are you seeing these errors
continuously for the same pod over and over?
On Tue, Sep 5, 2017 at 3:23 PM, Patrick Tescher wrote:
This morning our cluster started experiencing an odd error on multiple nodes.
Pods are stuck in the terminating phase. In our node log I see the following:
Sep 5 19:17:22 ip-10-0-1-184 origin-node: E0905 19:17:22.043257 112306
nestedpendingoperations.go:262] Operation for
"\"kubernetes.io/sec
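One way to answer Clayton's question above (do the errors recur continuously for the same pod?) is to tally how often each "Operation for ..." key repeats in the node log. A minimal sketch, assuming log lines shaped like the excerpt above; the sample lines and volume names here are illustrative, not taken from the real cluster:

```shell
# Illustrative node-log excerpt (in practice, pipe in journalctl or
# /var/log/messages output for origin-node instead).
log='E0905 19:17:22.043257 112306 nestedpendingoperations.go:262] Operation for "volA"
E0905 19:17:25.100257 112306 nestedpendingoperations.go:262] Operation for "volA"
E0905 19:17:30.200257 112306 nestedpendingoperations.go:262] Operation for "volB"'

# Extract the quoted operation key from each error line and count repeats,
# most frequent first; a key that keeps climbing points at a stuck pod/volume.
counts=$(printf '%s\n' "$log" \
  | grep -o 'Operation for "[^"]*"' \
  | sort | uniq -c | sort -rn)
echo "$counts"
```

A one-off occurrence is expected noise, per the thread; the same key repeating indefinitely matches the stuck-Terminating symptom reported here.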