Re: Pods randomly running as root

2017-02-07 Thread Alex Wauck
>>>>>> $ oc -n some-project exec app6-15-078fd -- id
>>>>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>>>>
>>>>>> All of these pods are running on the same node, and as you can see,
>>>>>> they are in the same project.  Yet, some are running as root and some are
>>>>>> not.  How weird is that?
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>>>>>> wrote:
>>>>>>
>>>>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>>>>   serviceAccount: default
>>>>>>>   serviceAccountName: default
>>>>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>>>>   serviceAccount: default
>>>>>>>   serviceAccountName: default
>>>>>>>
>>>>>>> Same serviceAccountName.  This problem seems to happen with any pod
>>>>>>> from any project that happens to run on these newer nodes.  I examined 
>>>>>>> the
>>>>>>> output of `oc describe scc`, and I did not find any unexpected access to
>>>>>>> elevated privileges for a default serviceaccount.  The project where I'm
>>>>>>> currently seeing the problem is not mentioned at all.  Also, I've seen 
>>>>>>> the
>>>>>>> problem happen with pods that are managed by the same replication
>>>>>>> controller.
>>>>>>>
>>>>>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman <
>>>>>>> ccole...@redhat.com> wrote:
>>>>>>>
>>>>>>>> Adding the list back
>>>>>>>>
>>>>>>>> -- Forwarded message --
>>>>>>>> From: Clayton Coleman 
>>>>>>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>>>>>>> Subject: Re: Pods randomly running as root
>>>>>>>> To: Alex Wauck 
>>>>>>>> Cc: users 
>>>>>>>>
>>>>>>>>
>>>>>>>> Do the pods running as root and those not running as root have the
>>>>>>>> same serviceAccountName field, or different ones?  If different, you
>>>>>>>> may have granted the service account access to a higher role -
>>>>>>>> defaulting is determined by the SCCs that a service account can
>>>>>>>> access, so an admin-level service account will run as root by
>>>>>>>> default unless you specify that you don't want that.
>>>>>>>>
>>>>>>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I'm looking at two nodes where one has the problem and the other
>>>>>>>>> doesn't, and I have confirmed that their node-config.yaml is the same 
>>>>>>>>> for
>>>>>>>>> both (modulo IP addresses).  The generated kubeconfigs for these 
>>>>>>>>> nodes on
>>>>>>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>>>>>>
>>>>>>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck wrote:
>>>>>>>>>
>>>>>>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod
>>>>>>>>>> has a runAsUser attribute, but the root pod doesn't!
>>>>>>>>>>
>>>>>>>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck <
>>>>>>>>>> alexwa...@exosite.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> A pod that IS running as root has this:
>>>>>>>>>>>
>>>>>>>>>>>   securityContext:
>>>>>>>>>>> fsGroup: 100037
>>>>>>>>>>> seLinuxOptions:
>>>>>>>>>>>   level: s0:c19,c14
>>>>>>>>>>>
>>>>>>>>>>> Another pod in the same project that is NOT running as root has
>>>>>>>>>>> the exact same securityContext section.

Re: Pods randomly running as root

2017-02-07 Thread Jordan Liggitt
It is not right, and no, Ansible does not relax the restricted SCC.

`oadm policy reconcile-sccs` will show you default sccs that need
reconciling, and `oadm policy reconcile-sccs --confirm` will revert them to
their default settings.
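
For illustration, a session on a cluster in this state might go roughly
like the following (a sketch; the exact output format varies by version):

$ oadm policy reconcile-sccs
# lists the default SCCs whose live definition has drifted, which here
# would include "restricted" with runAsUser changed to RunAsAny
$ oadm policy reconcile-sccs --confirm
# rewrites the listed SCCs back to their shipped defaults

Already-running pods keep the UID they were admitted with; they should
only pick up the restored MustRunAsRange policy when they are recreated,
for example by the next deployment.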

On Mon, Feb 6, 2017 at 2:29 PM, Alex Wauck  wrote:

> Well, well:
>
> $ oc export scc/restricted
> allowHostDirVolumePlugin: false
> allowHostIPC: false
> allowHostNetwork: false
> allowHostPID: false
> allowHostPorts: false
> allowPrivilegedContainer: false
> allowedCapabilities: null
> apiVersion: v1
> defaultAddCapabilities: null
> fsGroup:
>   type: MustRunAs
> groups:
> - system:authenticated
> kind: SecurityContextConstraints
> metadata:
>   annotations:
> kubernetes.io/description: restricted denies access to all host
> features and requires
>   pods to be run with a UID, and SELinux context that are allocated to
> the namespace.  This
>   is the most restrictive SCC.
>   creationTimestamp: null
>   name: restricted
> priority: null
> readOnlyRootFilesystem: false
> requiredDropCapabilities:
> - KILL
> - MKNOD
> - SYS_CHROOT
> - SETUID
> - SETGID
> runAsUser:
>   type: RunAsAny
> seLinuxContext:
>   type: MustRunAs
> supplementalGroups:
>   type: RunAsAny
> volumes:
> - configMap
> - downwardAPI
> - emptyDir
> - persistentVolumeClaim
> - secret
>
> That runAsUser isn't right, is it?  Any idea how that could have been done
> by openshift-ansible or something?  Otherwise, this might be my excuse to
> clamp down hard on admin-level access to the cluster.
>
> On Mon, Feb 6, 2017 at 1:24 PM, Jordan Liggitt 
> wrote:
>
>> Can you include your `restricted` scc definition:
>>
>> oc get scc -o yaml
>>
>> It seems likely that the restricted scc definition was modified in your
>> installation to not be as restrictive. By default, it sets runAsUser to
>> MustRunAsRange
>>
>>
>>
>>
>> On Mon, Feb 6, 2017 at 2:17 PM, Alex Wauck  wrote:
>>
>>> openshift.io/scc is "restricted" for app1-45-3blnd (not running as
>>> root).  It also has that value for app5-36-2rfsq (running as root).
>>>
>>> On Mon, Feb 6, 2017 at 1:11 PM, Clayton Coleman 
>>> wrote:
>>>
>>>> Were those apps created in order?  Or at individual times?   If you did
>>>> the following order of actions:
>>>>
>>>> 1. create app2, app4
>>>> 2. grant the default service account access to a higher level SCC
>>>> 3. create app1, app3, app5, and app6
>>>>
>>>> Then this would be what I would expect.  Can you look at the
>>>> annotations of pod app1-45-3blnd and see what the value of "
>>>> openshift.io/scc" is?
>>>>
>>>>
>>>> On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> OK, this just got a lot more interesting:
>>>>>
>>>>> $ oc -n some-project exec app1-45-3blnd -- id
>>>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>>>> $ oc -n some-project exec app2-18-q2fwm -- id
>>>>> uid=100037 gid=0(root) groups=100037
>>>>> $ oc -n some-project exec app3-10-lhato -- id
>>>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>>>> $ oc -n some-project exec app4-16-dl2r7 -- id
>>>>> uid=100037 gid=0(root) groups=100037
>>>>> $ oc -n some-project exec app5-36-2rfsq -- id
>>>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>>> $ oc -n some-project exec app6-15-078fd -- id
>>>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>>>
>>>>> All of these pods are running on the same node, and as you can see,
>>>>> they are in the same project.  Yet, some are running as root and some are
>>>>> not.  How weird is that?
>>>>>
>>>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>>>   serviceAccount: default
>>>>>>   serviceAccountName: default
>>>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>>>   serviceAccount: default
>>>>>>   serviceAccountName: default

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
Well, well:

$ oc export scc/restricted
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
groups:
- system:authenticated
kind: SecurityContextConstraints
metadata:
  annotations:
kubernetes.io/description: restricted denies access to all host
features and requires
  pods to be run with a UID, and SELinux context that are allocated to
the namespace.  This
  is the most restrictive SCC.
  creationTimestamp: null
  name: restricted
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
- SETUID
- SETGID
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret

That runAsUser isn't right, is it?  Any idea how that could have been done
by openshift-ansible or something?  Otherwise, this might be my excuse to
clamp down hard on admin-level access to the cluster.
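
For comparison, the stock restricted SCC uses a range-based policy along
these lines (a sketch of the 1.x defaults, not output from a live
cluster):

runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs

With MustRunAsRange and no explicit uidRangeMin/uidRangeMax on the SCC,
admission takes the range from the project's openshift.io/sa.scc.uid-range
annotation and assigns its first UID as the container's runAsUser, which
is where UIDs like 100037 come from.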

On Mon, Feb 6, 2017 at 1:24 PM, Jordan Liggitt  wrote:

> Can you include your `restricted` scc definition:
>
> oc get scc -o yaml
>
> It seems likely that the restricted scc definition was modified in your
> installation to not be as restrictive. By default, it sets runAsUser to
> MustRunAsRange
>
>
>
>
> On Mon, Feb 6, 2017 at 2:17 PM, Alex Wauck  wrote:
>
>> openshift.io/scc is "restricted" for app1-45-3blnd (not running as
>> root).  It also has that value for app5-36-2rfsq (running as root).
>>
>> On Mon, Feb 6, 2017 at 1:11 PM, Clayton Coleman 
>> wrote:
>>
>>> Were those apps created in order?  Or at individual times?   If you did
>>> the following order of actions:
>>>
>>> 1. create app2, app4
>>> 2. grant the default service account access to a higher level SCC
>>> 3. create app1, app3, app5, and app6
>>>
>>> Then this would be what I would expect.  Can you look at the annotations
>>> of pod app1-45-3blnd and see what the value of "openshift.io/scc" is?
>>>
>>>
>>> On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck 
>>> wrote:
>>>
>>>> OK, this just got a lot more interesting:
>>>>
>>>> $ oc -n some-project exec app1-45-3blnd -- id
>>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>>> $ oc -n some-project exec app2-18-q2fwm -- id
>>>> uid=100037 gid=0(root) groups=100037
>>>> $ oc -n some-project exec app3-10-lhato -- id
>>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>>> $ oc -n some-project exec app4-16-dl2r7 -- id
>>>> uid=100037 gid=0(root) groups=100037
>>>> $ oc -n some-project exec app5-36-2rfsq -- id
>>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>> $ oc -n some-project exec app6-15-078fd -- id
>>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>>
>>>> All of these pods are running on the same node, and as you can see,
>>>> they are in the same project.  Yet, some are running as root and some are
>>>> not.  How weird is that?
>>>>
>>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>>   serviceAccount: default
>>>>>   serviceAccountName: default
>>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>>   serviceAccount: default
>>>>>   serviceAccountName: default
>>>>>
>>>>> Same serviceAccountName.  This problem seems to happen with any pod
>>>>> from any project that happens to run on these newer nodes.  I examined the
>>>>> output of `oc describe scc`, and I did not find any unexpected access to
>>>>> elevated privileges for a default serviceaccount.  The project where I'm
>>>>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>>>>> problem happen with pods that are managed by the same replication
>>>>> controller.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>>>>> wrote:

Re: Pods randomly running as root

2017-02-06 Thread Jordan Liggitt
Can you include your `restricted` scc definition:

oc get scc -o yaml

It seems likely that the restricted scc definition was modified in your
installation to not be as restrictive. By default, it sets runAsUser to
MustRunAsRange




On Mon, Feb 6, 2017 at 2:17 PM, Alex Wauck  wrote:

> openshift.io/scc is "restricted" for app1-45-3blnd (not running as
> root).  It also has that value for app5-36-2rfsq (running as root).
>
> On Mon, Feb 6, 2017 at 1:11 PM, Clayton Coleman 
> wrote:
>
>> Were those apps created in order?  Or at individual times?   If you did
>> the following order of actions:
>>
>> 1. create app2, app4
>> 2. grant the default service account access to a higher level SCC
>> 3. create app1, app3, app5, and app6
>>
>> Then this would be what I would expect.  Can you look at the annotations
>> of pod app1-45-3blnd and see what the value of "openshift.io/scc" is?
>>
>>
>> On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck  wrote:
>>
>>> OK, this just got a lot more interesting:
>>>
>>> $ oc -n some-project exec app1-45-3blnd -- id
>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>> $ oc -n some-project exec app2-18-q2fwm -- id
>>> uid=100037 gid=0(root) groups=100037
>>> $ oc -n some-project exec app3-10-lhato -- id
>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>> $ oc -n some-project exec app4-16-dl2r7 -- id
>>> uid=100037 gid=0(root) groups=100037
>>> $ oc -n some-project exec app5-36-2rfsq -- id
>>> uid=0(root) gid=0(root) groups=0(root),100037
>>> $ oc -n some-project exec app6-15-078fd -- id
>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>
>>> All of these pods are running on the same node, and as you can see, they
>>> are in the same project.  Yet, some are running as root and some are not.
>>> How weird is that?
>>>
>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>>> wrote:
>>>
>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>   serviceAccount: default
>>>>   serviceAccountName: default
>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>   serviceAccount: default
>>>>   serviceAccountName: default
>>>>
>>>> Same serviceAccountName.  This problem seems to happen with any pod
>>>> from any project that happens to run on these newer nodes.  I examined the
>>>> output of `oc describe scc`, and I did not find any unexpected access to
>>>> elevated privileges for a default serviceaccount.  The project where I'm
>>>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>>>> problem happen with pods that are managed by the same replication
>>>> controller.
>>>>
>>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>>>> wrote:
>>>>
>>>>> Adding the list back
>>>>>
>>>>> -- Forwarded message --
>>>>> From: Clayton Coleman 
>>>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>>>> Subject: Re: Pods randomly running as root
>>>>> To: Alex Wauck 
>>>>> Cc: users 
>>>>>
>>>>>
>>>>> Do the pods running as root and those not running as root have the same
>>>>> serviceAccountName field, or different ones?  If different, you may have
>>>>> granted the service account access to a higher role - defaulting is
>>>>> determined by the SCCs that a service account can access, so an
>>>>> admin-level service account will run as root by default unless you
>>>>> specify that you don't want that.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> I'm looking at two nodes where one has the problem and the other
>>>>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>>>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>>>>> wrote:

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
Whoops, those are both running as root.  However, app2-18-q2fwm is also
"restricted" and is not running as root.

On Mon, Feb 6, 2017 at 1:17 PM, Alex Wauck  wrote:

> openshift.io/scc is "restricted" for app1-45-3blnd (not running as
> root).  It also has that value for app5-36-2rfsq (running as root).
>
> On Mon, Feb 6, 2017 at 1:11 PM, Clayton Coleman 
> wrote:
>
>> Were those apps created in order?  Or at individual times?   If you did
>> the following order of actions:
>>
>> 1. create app2, app4
>> 2. grant the default service account access to a higher level SCC
>> 3. create app1, app3, app5, and app6
>>
>> Then this would be what I would expect.  Can you look at the annotations
>> of pod app1-45-3blnd and see what the value of "openshift.io/scc" is?
>>
>>
>> On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck  wrote:
>>
>>> OK, this just got a lot more interesting:
>>>
>>> $ oc -n some-project exec app1-45-3blnd -- id
>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>> $ oc -n some-project exec app2-18-q2fwm -- id
>>> uid=100037 gid=0(root) groups=100037
>>> $ oc -n some-project exec app3-10-lhato -- id
>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>> $ oc -n some-project exec app4-16-dl2r7 -- id
>>> uid=100037 gid=0(root) groups=100037
>>> $ oc -n some-project exec app5-36-2rfsq -- id
>>> uid=0(root) gid=0(root) groups=0(root),100037
>>> $ oc -n some-project exec app6-15-078fd -- id
>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>
>>> All of these pods are running on the same node, and as you can see, they
>>> are in the same project.  Yet, some are running as root and some are not.
>>> How weird is that?
>>>
>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>>> wrote:
>>>
>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>   serviceAccount: default
>>>>   serviceAccountName: default
>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>   serviceAccount: default
>>>>   serviceAccountName: default
>>>>
>>>> Same serviceAccountName.  This problem seems to happen with any pod
>>>> from any project that happens to run on these newer nodes.  I examined the
>>>> output of `oc describe scc`, and I did not find any unexpected access to
>>>> elevated privileges for a default serviceaccount.  The project where I'm
>>>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>>>> problem happen with pods that are managed by the same replication
>>>> controller.
>>>>
>>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>>>> wrote:
>>>>
>>>>> Adding the list back
>>>>>
>>>>> -- Forwarded message --
>>>>> From: Clayton Coleman 
>>>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>>>> Subject: Re: Pods randomly running as root
>>>>> To: Alex Wauck 
>>>>> Cc: users 
>>>>>
>>>>>
>>>>> Do the pods running as root and those not running as root have the same
>>>>> serviceAccountName field, or different ones?  If different, you may have
>>>>> granted the service account access to a higher role - defaulting is
>>>>> determined by the SCCs that a service account can access, so an
>>>>> admin-level service account will run as root by default unless you
>>>>> specify that you don't want that.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> I'm looking at two nodes where one has the problem and the other
>>>>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>>>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>>>>> wrote:
>>>>>>
>>>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has
>>>>>>> a runAsUser attribute, but the root pod doesn't!

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
openshift.io/scc is "restricted" for app1-45-3blnd (not running as root).
It also has that value for app5-36-2rfsq (running as root).

On Mon, Feb 6, 2017 at 1:11 PM, Clayton Coleman  wrote:

> Were those apps created in order?  Or at individual times?   If you did
> the following order of actions:
>
> 1. create app2, app4
> 2. grant the default service account access to a higher level SCC
> 3. create app1, app3, app5, and app6
>
> Then this would be what I would expect.  Can you look at the annotations
> of pod app1-45-3blnd and see what the value of "openshift.io/scc" is?
>
>
> On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck  wrote:
>
>> OK, this just got a lot more interesting:
>>
>> $ oc -n some-project exec app1-45-3blnd -- id
>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>> $ oc -n some-project exec app2-18-q2fwm -- id
>> uid=100037 gid=0(root) groups=100037
>> $ oc -n some-project exec app3-10-lhato -- id
>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>> $ oc -n some-project exec app4-16-dl2r7 -- id
>> uid=100037 gid=0(root) groups=100037
>> $ oc -n some-project exec app5-36-2rfsq -- id
>> uid=0(root) gid=0(root) groups=0(root),100037
>> $ oc -n some-project exec app6-15-078fd -- id
>> uid=0(root) gid=0(root) groups=0(root),100037
>>
>> All of these pods are running on the same node, and as you can see, they
>> are in the same project.  Yet, some are running as root and some are not.
>> How weird is that?
>>
>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>> wrote:
>>
>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>   serviceAccount: default
>>>   serviceAccountName: default
>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>   serviceAccount: default
>>>   serviceAccountName: default
>>>
>>> Same serviceAccountName.  This problem seems to happen with any pod from
>>> any project that happens to run on these newer nodes.  I examined the
>>> output of `oc describe scc`, and I did not find any unexpected access to
>>> elevated privileges for a default serviceaccount.  The project where I'm
>>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>>> problem happen with pods that are managed by the same replication
>>> controller.
>>>
>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>>> wrote:
>>>
>>>> Adding the list back
>>>>
>>>> -- Forwarded message --
>>>> From: Clayton Coleman 
>>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>>> Subject: Re: Pods randomly running as root
>>>> To: Alex Wauck 
>>>> Cc: users 
>>>>
>>>>
>>>> Do the pods running as root and those not running as root have the same
>>>> serviceAccountName field, or different ones?  If different, you may have
>>>> granted the service account access to a higher role - defaulting is
>>>> determined by the SCCs that a service account can access, so an
>>>> admin-level service account will run as root by default unless you
>>>> specify that you don't want that.
>>>>
>>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> I'm looking at two nodes where one has the problem and the other
>>>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>>
>>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
>>>>>> runAsUser attribute, but the root pod doesn't!
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>>>>>> wrote:
>>>>>>
>>>>>>> A pod that IS running as root has this:
>>>>>>>
>>>>>>>   securityContext:
>>>>>>> fsGroup: 100037
>>>>>>> seLinuxOptions:
>>>>>>>   level: s0:c19,c14

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
Redacted versions of the full pod specs are attached.  app1 is running as
root; app2 is not.

On Mon, Feb 6, 2017 at 1:07 PM, Jordan Liggitt  wrote:

> Can you provide the full pod specs for a pod running as non-root and a pod
> running as root?
>
> Can you also provide the definition of the SecurityContextConstraint
> referenced in the pod specs "openshift.io/scc" annotation?
>
>
>
>
> On Mon, Feb 6, 2017 at 2:01 PM, Alex Wauck  wrote:
>
>> Judging by pod start times, it looks like everything that started before
>> February 2 is not running as root, while everything else is.
>>
>> On Mon, Feb 6, 2017 at 12:57 PM, Alex Wauck 
>> wrote:
>>
>>> OK, this just got a lot more interesting:
>>>
>>> $ oc -n some-project exec app1-45-3blnd -- id
>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>> $ oc -n some-project exec app2-18-q2fwm -- id
>>> uid=100037 gid=0(root) groups=100037
>>> $ oc -n some-project exec app3-10-lhato -- id
>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
>>> $ oc -n some-project exec app4-16-dl2r7 -- id
>>> uid=100037 gid=0(root) groups=100037
>>> $ oc -n some-project exec app5-36-2rfsq -- id
>>> uid=0(root) gid=0(root) groups=0(root),100037
>>> $ oc -n some-project exec app6-15-078fd -- id
>>> uid=0(root) gid=0(root) groups=0(root),100037
>>>
>>> All of these pods are running on the same node, and as you can see, they
>>> are in the same project.  Yet, some are running as root and some are not.
>>> How weird is that?
>>>
>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck 
>>> wrote:
>>>
>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>   serviceAccount: default
>>>>   serviceAccountName: default
>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>   serviceAccount: default
>>>>   serviceAccountName: default
>>>>
>>>> Same serviceAccountName.  This problem seems to happen with any pod
>>>> from any project that happens to run on these newer nodes.  I examined the
>>>> output of `oc describe scc`, and I did not find any unexpected access to
>>>> elevated privileges for a default serviceaccount.  The project where I'm
>>>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>>>> problem happen with pods that are managed by the same replication
>>>> controller.
>>>>
>>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>>>> wrote:
>>>>
>>>>> Adding the list back
>>>>>
>>>>> -- Forwarded message --
>>>>> From: Clayton Coleman 
>>>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>>>> Subject: Re: Pods randomly running as root
>>>>> To: Alex Wauck 
>>>>> Cc: users 
>>>>>
>>>>>
>>>>> Do the pods running as root and those not running as root have the same
>>>>> serviceAccountName field, or different ones?  If different, you may have
>>>>> granted the service account access to a higher role - defaulting is
>>>>> determined by the SCCs that a service account can access, so an
>>>>> admin-level service account will run as root by default unless you
>>>>> specify that you don't want that.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> I'm looking at two nodes where one has the problem and the other
>>>>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>>>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>>>>> wrote:
>>>>>>
>>>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has
>>>>>>> a runAsUser attribute, but the root pod doesn't!
>>>>>>>
>>>>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>>>>>>> wrote:

Re: Pods randomly running as root

2017-02-06 Thread Clayton Coleman
Were those apps created in order?  Or at individual times?   If you did the
following order of actions:

1. create app2, app4
2. grant the default service account access to a higher level SCC
3. create app1, app3, app5, and app6

Then this would be what I would expect.  Can you look at the annotations of
pod app1-45-3blnd and see what the value of "openshift.io/scc" is?
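
One quick way to pull that annotation (a sketch):

$ oc get pod app1-45-3blnd -n some-project -o yaml | grep openshift.io/scc

The annotation records the SCC that admitted the pod, so it reflects the
rules in force when the pod was created, not the current SCC definitions.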


On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck  wrote:

> OK, this just got a lot more interesting:
>
> $ oc -n some-project exec app1-45-3blnd -- id
> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
> $ oc -n some-project exec app2-18-q2fwm -- id
> uid=100037 gid=0(root) groups=100037
> $ oc -n some-project exec app3-10-lhato -- id
> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
> $ oc -n some-project exec app4-16-dl2r7 -- id
> uid=100037 gid=0(root) groups=100037
> $ oc -n some-project exec app5-36-2rfsq -- id
> uid=0(root) gid=0(root) groups=0(root),100037
> $ oc -n some-project exec app6-15-078fd -- id
> uid=0(root) gid=0(root) groups=0(root),100037
>
> All of these pods are running on the same node, and as you can see, they
> are in the same project.  Yet, some are running as root and some are not.
> How weird is that?
>
> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck  wrote:
>
>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>   serviceAccount: default
>>   serviceAccountName: default
>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>   serviceAccount: default
>>   serviceAccountName: default
>>
>> Same serviceAccountName.  This problem seems to happen with any pod from
>> any project that happens to run on these newer nodes.  I examined the
>> output of `oc describe scc`, and I did not find any unexpected access to
>> elevated privileges for a default serviceaccount.  The project where I'm
>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>> problem happen with pods that are managed by the same replication
>> controller.
>>
>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>> wrote:
>>
>>> Adding the list back
>>>
>>> -- Forwarded message --
>>> From: Clayton Coleman 
>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>> Subject: Re: Pods randomly running as root
>>> To: Alex Wauck 
>>> Cc: users 
>>>
>>>
>>> Do the pods running as root and those not running as root have the same
>>> serviceAccountName field, or different ones?  If different, you may have
>>> granted the service account access to a higher role - defaulting is
>>> determined by the SCCs that a service account can access, so an
>>> admin-level service account will run as root by default unless you
>>> specify that you don't want that.
>>>
>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>> wrote:
>>>
>>>> I'm looking at two nodes where one has the problem and the other
>>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>
>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
>>>>> runAsUser attribute, but the root pod doesn't!
>>>>>
>>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> A pod that IS running as root has this:
>>>>>>
>>>>>>   securityContext:
>>>>>> fsGroup: 100037
>>>>>> seLinuxOptions:
>>>>>>   level: s0:c19,c14
>>>>>>
>>>>>> Another pod in the same project that is NOT running as root has the
>>>>>> exact same securityContext section.
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman wrote:
>>>>>>
>>>>>>> Do the pods themselves have a user UID set on them?  Each pod should
>>>>>>> have the container "securityContext" field set and have an explicit 
>>>>>>> user ID
>>>>>>> value set.

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
Judging by pod start times, it looks like everything that started before
February 2 is not running as root, while everything else is.
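
One way to line the start times up against the id output (a sketch using
the pod names from this thread):

$ for p in app1-45-3blnd app2-18-q2fwm app5-36-2rfsq; do
>   echo -n "$p "
>   oc get pod "$p" -n some-project -o yaml | grep startTime:
> done

Pods admitted before an SCC change keep their original UID until they are
recreated, which would produce exactly this kind of date split.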

On Mon, Feb 6, 2017 at 12:57 PM, Alex Wauck  wrote:

> OK, this just got a lot more interesting:
>
> $ oc -n some-project exec app1-45-3blnd -- id
> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
> $ oc -n some-project exec app2-18-q2fwm -- id
> uid=100037 gid=0(root) groups=100037
> $ oc -n some-project exec app3-10-lhato -- id
> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
> $ oc -n some-project exec app4-16-dl2r7 -- id
> uid=100037 gid=0(root) groups=100037
> $ oc -n some-project exec app5-36-2rfsq -- id
> uid=0(root) gid=0(root) groups=0(root),100037
> $ oc -n some-project exec app6-15-078fd -- id
> uid=0(root) gid=0(root) groups=0(root),100037
>
> All of these pods are running on the same node, and as you can see, they
> are in the same project.  Yet, some are running as root and some are not.
> How weird is that?
>
> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck  wrote:
>
>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>   serviceAccount: default
>>   serviceAccountName: default
>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>   serviceAccount: default
>>   serviceAccountName: default
>>
>> Same serviceAccountName.  This problem seems to happen with any pod from
>> any project that happens to run on these newer nodes.  I examined the
>> output of `oc describe scc`, and I did not find any unexpected access to
>> elevated privileges for a default serviceaccount.  The project where I'm
>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>> problem happen with pods that are managed by the same replication
>> controller.
>>
>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
>> wrote:
>>
>>> Adding the list back
>>>
>>> -- Forwarded message --
>>> From: Clayton Coleman 
>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>> Subject: Re: Pods randomly running as root
>>> To: Alex Wauck 
>>> Cc: users 
>>>
>>>
>>> Do the pods running as root and those not running as root have the same
>>> serviceAccountName field, or different ones?  If different, you may have
>>> granted the service account access to a higher role - defaulting is
>>> determined by the SCCs that a service account can access, so an
>>> admin-level service account will run as root by default unless you
>>> specify that you don't want that.
>>>
>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck 
>>> wrote:
>>>
>>>> I'm looking at two nodes where one has the problem and the other
>>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>>> the master are also the same (modulo IP addresses and keys/certs).
>>>>
>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
>>>>> runAsUser attribute, but the root pod doesn't!
>>>>>
>>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> A pod that IS running as root has this:
>>>>>>
>>>>>>   securityContext:
>>>>>> fsGroup: 100037
>>>>>> seLinuxOptions:
>>>>>>   level: s0:c19,c14
>>>>>>
>>>>>> Another pod in the same project that is NOT running as root has the
>>>>>> exact same securityContext section.
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman wrote:
>>>>>>
>>>>>>> Do the pods themselves have a user UID set on them?  Each pod should
>>>>>>> have the container "securityContext" field set and have an explicit 
>>>>>>> user ID
>>>>>>> value set.
>>>>>>>
>>>>>>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> These are completely normal app containers.

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
OK, this just got a lot more interesting:

$ oc -n some-project exec app1-45-3blnd -- id
uid=0(root) gid=0(root)
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
$ oc -n some-project exec app2-18-q2fwm -- id
uid=100037 gid=0(root) groups=100037
$ oc -n some-project exec app3-10-lhato -- id
uid=0(root) gid=0(root)
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),100037
$ oc -n some-project exec app4-16-dl2r7 -- id
uid=100037 gid=0(root) groups=100037
$ oc -n some-project exec app5-36-2rfsq -- id
uid=0(root) gid=0(root) groups=0(root),100037
$ oc -n some-project exec app6-15-078fd -- id
uid=0(root) gid=0(root) groups=0(root),100037

All of these pods are running on the same node, and as you can see, they
are in the same project.  Yet, some are running as root and some are not.
How weird is that?

On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck  wrote:

> $ oc export -n some-project pod/good-pod | grep serviceAccount
>   serviceAccount: default
>   serviceAccountName: default
> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>   serviceAccount: default
>   serviceAccountName: default
>
> Same serviceAccountName.  This problem seems to happen with any pod from
> any project that happens to run on these newer nodes.  I examined the
> output of `oc describe scc`, and I did not find any unexpected access to
> elevated privileges for a default serviceaccount.  The project where I'm
> currently seeing the problem is not mentioned at all.  Also, I've seen the
> problem happen with pods that are managed by the same replication
> controller.
>
> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
> wrote:
>
>> Adding the list back
>>
>> -- Forwarded message --
>> From: Clayton Coleman 
>> Date: Mon, Feb 6, 2017 at 1:42 PM
>> Subject: Re: Pods randomly running as root
>> To: Alex Wauck 
>> Cc: users 
>>
>>
>> Do the pods running as root and those not running as root have the same
>> serviceAccountName field, or different ones?  If different, you may have
>> granted the service account access to a higher role - defaulting is
>> determined by the SCCs that a service account can access, so an
>> admin-level service account will run as root by default unless you
>> specify that you don't want that.
>>
>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck  wrote:
>>
>>> I'm looking at two nodes where one has the problem and the other
>>> doesn't, and I have confirmed that their node-config.yaml is the same for
>>> both (modulo IP addresses).  The generated kubeconfigs for these nodes on
>>> the master are also the same (modulo IP addresses and keys/certs).
>>>
>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>>> wrote:
>>>
>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
>>>> runAsUser attribute, but the root pod doesn't!
>>>>
>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> A pod that IS running as root has this:
>>>>>
>>>>>   securityContext:
>>>>> fsGroup: 100037
>>>>> seLinuxOptions:
>>>>>   level: s0:c19,c14
>>>>>
>>>>> Another pod in the same project that is NOT running as root has the
>>>>> exact same securityContext section.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman 
>>>>> wrote:
>>>>>
>>>>>> Do the pods themselves have a user UID set on them?  Each pod should
>>>>>> have the container "securityContext" field set and have an explicit user 
>>>>>> ID
>>>>>> value set.
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck 
>>>>>> wrote:
>>>>>>
>>>>>>> These are completely normal app containers.  They are managed by
>>>>>>> deploy configs.  Whether they run as root or not seems to depend on 
>>>>>>> which
>>>>>>> node they run on: the older nodes seem to run pods as random UIDs, while
>>>>>>> the newer ones run as root.  Our older nodes have docker-selinux-1.10.3
>>>>>>> installed, while the newer ones do not.  They only have
>>>>>>> docker-selinux-1.9.1 available, since the 1.10.3 package seems to have 
>>>>>>> been
>>>>>>> removed from the CentOS extras repo.

Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
$ oc export -n some-project pod/good-pod | grep serviceAccount
  serviceAccount: default
  serviceAccountName: default
$ oc export -n some-project pod/bad-pod | grep serviceAccount
  serviceAccount: default
  serviceAccountName: default

Same serviceAccountName.  This problem seems to happen with any pod from
any project that happens to run on these newer nodes.  I examined the
output of `oc describe scc`, and I did not find any unexpected access to
elevated privileges for a default serviceaccount.  The project where I'm
currently seeing the problem is not mentioned at all.  Also, I've seen the
problem happen with pods that are managed by the same replication
controller.
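
Since the service accounts match, diffing the two full exports narrows
down what actually differs (a sketch using bash process substitution):

$ diff <(oc export -n some-project pod/good-pod) \
>      <(oc export -n some-project pod/bad-pod)

As it turns out further down the thread, the telling difference is a
runAsUser field that is present on the good pod and missing on the bad
one.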

On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman 
wrote:

> Adding the list back
>
> -- Forwarded message --
> From: Clayton Coleman 
> Date: Mon, Feb 6, 2017 at 1:42 PM
> Subject: Re: Pods randomly running as root
> To: Alex Wauck 
> Cc: users 
>
>
> Do the pods running as root and those not running as root have the same
> serviceAccountName field, or different ones?  If different, you may have
> granted the service account access to a higher role - defaulting is
> determined by the SCCs that a service account can access, so an admin-level
> service account will run as root by default unless you specify that you
> don't want that.
>
> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck  wrote:
>
>> I'm looking at two nodes where one has the problem and the other doesn't,
>> and I have confirmed that their node-config.yaml is the same for both
>> (modulo IP addresses).  The generated kubeconfigs for these nodes on the
>> master are also the same (modulo IP addresses and keys/certs).
>>
>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck 
>> wrote:
>>
>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
>>> runAsUser attribute, but the root pod doesn't!
>>>
>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>>> wrote:
>>>
>>>> A pod that IS running as root has this:
>>>>
>>>>   securityContext:
>>>> fsGroup: 100037
>>>> seLinuxOptions:
>>>>   level: s0:c19,c14
>>>>
>>>> Another pod in the same project that is NOT running as root has the
>>>> exact same securityContext section.
>>>>
>>>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman 
>>>> wrote:
>>>>
>>>>> Do the pods themselves have a user UID set on them?  Each pod should
>>>>> have the container "securityContext" field set and have an explicit user 
>>>>> ID
>>>>> value set.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck 
>>>>> wrote:
>>>>>
>>>>>> These are completely normal app containers.  They are managed by
>>>>>> deploy configs.  Whether they run as root or not seems to depend on which
>>>>>> node they run on: the older nodes seem to run pods as random UIDs, while
>>>>>> the newer ones run as root.  Our older nodes have docker-selinux-1.10.3
>>>>>> installed, while the newer ones do not.  They only have
>>>>>> docker-selinux-1.9.1 available, since the 1.10.3 package seems to have 
>>>>>> been
>>>>>> removed from the CentOS extras repo.
>>>>>>
>>>>>> We are running OpenShift 1.2.1, since I haven't had time to upgrade
>>>>>> it.
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman 
>>>>>> wrote:
>>>>>>
>>>>>>> Are you running them directly (launching a pod)?  Or running them
>>>>>>> under another controller resource.
>>>>>>>
>>>>>>> On Feb 6, 2017, at 2:00 AM, Alex Wauck 
>>>>>>> wrote:
>>>>>>>
>>>>>>> Recently, I began to notice that some of my pods on OpenShift run as
>>>>>>> root instead of a random UID.  There does not seem to be any obvious 
>>>>>>> cause
>>>>>>> (e.g. SCC).  Any idea how this could happen or where to look for clues?
>>>>>>>
>>>>>>>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com <http://www.exosite.com/>*

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Fwd: Pods randomly running as root

2017-02-06 Thread Clayton Coleman
Adding the list back

-- Forwarded message --
From: Clayton Coleman 
Date: Mon, Feb 6, 2017 at 1:42 PM
Subject: Re: Pods randomly running as root
To: Alex Wauck 
Cc: users 


Do the pods running as root and those not running as root have the same
serviceAccountName field, or different ones?  If different, you may have
granted the service account access to a higher role - defaulting is
determined by the SCCs that a service account can access, so an admin-level
service account will run as root by default unless you specify that you
don't want that.
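
In 1.x those grants live in each SCC's users and groups lists, so a rough
way to audit them (a sketch; describe output varies by version):

$ oc describe scc | egrep 'Name:|Users:|Groups:'

Any SCC that lists system:serviceaccount:<project>:default under Users, or
one of the account's groups under Groups, is available to that service
account's pods at admission time.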

On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck  wrote:

> I'm looking at two nodes where one has the problem and the other doesn't,
> and I have confirmed that their node-config.yaml is the same for both
> (modulo IP addresses).  The generated kubeconfigs for these nodes on the
> master are also the same (modulo IP addresses and keys/certs).
>
> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck  wrote:
>
>> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
>> runAsUser attribute, but the root pod doesn't!
>>
>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck 
>> wrote:
>>
>>> A pod that IS running as root has this:
>>>
>>>   securityContext:
>>> fsGroup: 100037
>>> seLinuxOptions:
>>>   level: s0:c19,c14
>>>
>>> Another pod in the same project that is NOT running as root has the
>>> exact same securityContext section.
>>>
>>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman 
>>> wrote:
>>>
>>>> Do the pods themselves have a user UID set on them?  Each pod should
>>>> have the container "securityContext" field set and have an explicit user ID
>>>> value set.
>>>>
>>>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck 
>>>> wrote:
>>>>
>>>>> These are completely normal app containers.  They are managed by
>>>>> deploy configs.  Whether they run as root or not seems to depend on which
>>>>> node they run on: the older nodes seem to run pods as random UIDs, while
>>>>> the newer ones run as root.  Our older nodes have docker-selinux-1.10.3
>>>>> installed, while the newer ones do not.  They only have
>>>>> docker-selinux-1.9.1 available, since the 1.10.3 package seems to have 
>>>>> been
>>>>> removed from the CentOS extras repo.
>>>>>
>>>>> We are running OpenShift 1.2.1, since I haven't had time to upgrade it.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman 
>>>>> wrote:
>>>>>
>>>>>> Are you running them directly (launching a pod)?  Or running them
>>>>>> under another controller resource.
>>>>>>
>>>>>> On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:
>>>>>>
>>>>>> Recently, I began to notice that some of my pods on OpenShift run as
>>>>>> root instead of a random UID.  There does not seem to be any obvious 
>>>>>> cause
>>>>>> (e.g. SCC).  Any idea how this could happen or where to look for clues?
>>>>>>
>>>>>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
I'm looking at two nodes where one has the problem and the other doesn't,
and I have confirmed that their node-config.yaml is the same for both
(modulo IP addresses).  The generated kubeconfigs for these nodes on the
master are also the same (modulo IP addresses and keys/certs).

On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck  wrote:

> Oh, wait.  I was looking at the wrong section.  The non-root pod has a
> runAsUser attribute, but the root pod doesn't!
>
> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck  wrote:
>
>> A pod that IS running as root has this:
>>
>>   securityContext:
>> fsGroup: 100037
>> seLinuxOptions:
>>   level: s0:c19,c14
>>
>> Another pod in the same project that is NOT running as root has the exact
>> same securityContext section.
>>
>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman 
>> wrote:
>>
>>> Do the pods themselves have a user UID set on them?  Each pod should
>>> have the container "securityContext" field set and have an explicit user ID
>>> value set.
>>>
>>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck 
>>> wrote:
>>>
 These are completely normal app containers.  They are managed by deploy
 configs.  Whether they run as root or not seems to depend on which node
 they run on: the older nodes seem to run pods as random UIDs, while the
 newer ones run as root.  Our older nodes have docker-selinux-1.10.3
 installed, while the newer ones do not.  They only have
 docker-selinux-1.9.1 available, since the 1.10.3 package seems to have been
 removed from the CentOS extras repo.

 We are running OpenShift 1.2.1, since I haven't had time to upgrade it.

 On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman 
 wrote:

> Are you running them directly (launching a pod)?  Or running them
> under another controller resource.
>
> On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:
>
> Recently, I began to notice that some of my pods on OpenShift run as
> root instead of a random UID.  There does not seem to be any obvious cause
> (e.g. SCC).  Any idea how this could happen or where to look for clues?
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


 --

 Alex Wauck // DevOps Engineer

 *E X O S I T E*
 *www.exosite.com *

 Making Machines More Human.


>>>
>>
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
Oh, wait.  I was looking at the wrong section.  The non-root pod has a
runAsUser attribute, but the root pod doesn't!

On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck  wrote:

> A pod that IS running as root has this:
>
>   securityContext:
> fsGroup: 100037
> seLinuxOptions:
>   level: s0:c19,c14
>
> Another pod in the same project that is NOT running as root has the exact
> same securityContext section.
>
> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman 
> wrote:
>
>> Do the pods themselves have a user UID set on them?  Each pod should have
>> the container "securityContext" field set and have an explicit user ID
>> value set.
>>
>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck 
>> wrote:
>>
>>> These are completely normal app containers.  They are managed by deploy
>>> configs.  Whether they run as root or not seems to depend on which node
>>> they run on: the older nodes seem to run pods as random UIDs, while the
>>> newer ones run as root.  Our older nodes have docker-selinux-1.10.3
>>> installed, while the newer ones do not.  They only have
>>> docker-selinux-1.9.1 available, since the 1.10.3 package seems to have been
>>> removed from the CentOS extras repo.
>>>
>>> We are running OpenShift 1.2.1, since I haven't had time to upgrade it.
>>>
>>> On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman 
>>> wrote:
>>>
 Are you running them directly (launching a pod)?  Or running them under
 another controller resource.

 On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:

 Recently, I began to notice that some of my pods on OpenShift run as
 root instead of a random UID.  There does not seem to be any obvious cause
 (e.g. SCC).  Any idea how this could happen or where to look for clues?

 --

 Alex Wauck // DevOps Engineer

 *E X O S I T E*
 *www.exosite.com *

 Making Machines More Human.

 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users


>>>
>>>
>>> --
>>>
>>> Alex Wauck // DevOps Engineer
>>>
>>> *E X O S I T E*
>>> *www.exosite.com *
>>>
>>> Making Machines More Human.
>>>
>>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
A pod that IS running as root has this:

  securityContext:
fsGroup: 100037
seLinuxOptions:
  level: s0:c19,c14

Another pod in the same project that is NOT running as root has the exact
same securityContext section.
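
For contrast, a pod admitted under a range-based restricted SCC would
normally carry the UID explicitly at the container level, roughly like
this (illustrative values; 100037 stands in for the start of the
namespace's allocated UID range):

  securityContext:
    fsGroup: 100037
    seLinuxOptions:
      level: s0:c19,c14
  containers:
  - securityContext:
      runAsUser: 100037

The pod-level block only pins fsGroup and the SELinux level; the UID
itself lives in each container's securityContext, which is the field the
root pods here are missing.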

On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman 
wrote:

> Do the pods themselves have a user UID set on them?  Each pod should have
> the container "securityContext" field set and have an explicit user ID
> value set.
>
> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck  wrote:
>
>> These are completely normal app containers.  They are managed by deploy
>> configs.  Whether they run as root or not seems to depend on which node
>> they run on: the older nodes seem to run pods as random UIDs, while the
>> newer ones run as root.  Our older nodes have docker-selinux-1.10.3
>> installed, while the newer ones do not.  They only have
>> docker-selinux-1.9.1 available, since the 1.10.3 package seems to have been
>> removed from the CentOS extras repo.
>>
>> We are running OpenShift 1.2.1, since I haven't had time to upgrade it.
>>
>> On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman 
>> wrote:
>>
>>> Are you running them directly (launching a pod)?  Or running them under
>>> another controller resource.
>>>
>>> On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:
>>>
>>> Recently, I began to notice that some of my pods on OpenShift run as
>>> root instead of a random UID.  There does not seem to be any obvious cause
>>> (e.g. SCC).  Any idea how this could happen or where to look for clues?
>>>
>>> --
>>>
>>> Alex Wauck // DevOps Engineer
>>>
>>> *E X O S I T E*
>>> *www.exosite.com *
>>>
>>> Making Machines More Human.
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods randomly running as root

2017-02-06 Thread Clayton Coleman
Do the pods themselves have a user UID set on them?  Each pod should have
the container "securityContext" field set and have an explicit user ID
value set.

On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck  wrote:

> These are completely normal app containers.  They are managed by deploy
> configs.  Whether they run as root or not seems to depend on which node
> they run on: the older nodes seem to run pods as random UIDs, while the
> newer ones run as root.  Our older nodes have docker-selinux-1.10.3
> installed, while the newer ones do not.  They only have
> docker-selinux-1.9.1 available, since the 1.10.3 package seems to have been
> removed from the CentOS extras repo.
>
> We are running OpenShift 1.2.1, since I haven't had time to upgrade it.
>
> On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman 
> wrote:
>
>> Are you running them directly (launching a pod)?  Or running them under
>> another controller resource.
>>
>> On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:
>>
>> Recently, I began to notice that some of my pods on OpenShift run as root
>> instead of a random UID.  There does not seem to be any obvious cause (e.g.
>> SCC).  Any idea how this could happen or where to look for clues?
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> *E X O S I T E*
>> *www.exosite.com *
>>
>> Making Machines More Human.
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods randomly running as root

2017-02-06 Thread Alex Wauck
These are completely normal app containers.  They are managed by deploy
configs.  Whether they run as root or not seems to depend on which node
they run on: the older nodes seem to run pods as random UIDs, while the
newer ones run as root.  Our older nodes have docker-selinux-1.10.3
installed, while the newer ones do not.  They only have
docker-selinux-1.9.1 available, since the 1.10.3 package seems to have been
removed from the CentOS extras repo.

We are running OpenShift 1.2.1, since I haven't had time to upgrade it.
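
For anyone comparing node generations the same way, the package check is
simply (a sketch, run on one old node and one new node):

$ rpm -q docker docker-selinux

The version skew was the initial suspect here, though the thread
ultimately traces the behavior to the modified restricted SCC rather than
to the nodes themselves.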

On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman  wrote:

> Are you running them directly (launching a pod)?  Or running them under
> another controller resource.
>
> On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:
>
> Recently, I began to notice that some of my pods on OpenShift run as root
> instead of a random UID.  There does not seem to be any obvious cause (e.g.
> SCC).  Any idea how this could happen or where to look for clues?
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods randomly running as root

2017-02-06 Thread Clayton Coleman
Are you running them directly (launching a pod)?  Or running them under
another controller resource.

On Feb 6, 2017, at 2:00 AM, Alex Wauck  wrote:

Recently, I began to notice that some of my pods on OpenShift run as root
instead of a random UID.  There does not seem to be any obvious cause (e.g.
SCC).  Any idea how this could happen or where to look for clues?

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Pods randomly running as root

2017-02-05 Thread Alex Wauck
Recently, I began to notice that some of my pods on OpenShift run as root
instead of a random UID.  There does not seem to be any obvious cause (e.g.
SCC).  Any idea how this could happen or where to look for clues?

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users