Well, well:

$ oc export scc/restricted
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
groups:
- system:authenticated
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: restricted denies access to all host features
      and requires pods to be run with a UID, and SELinux context that are
      allocated to the namespace.  This is the most restrictive SCC.
  creationTimestamp: null
  name: restricted
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
- SETUID
- SETGID
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret

That runAsUser isn't right, is it?  Any idea how that could have been done
by openshift-ansible or something?  Otherwise, this might be my excuse to
clamp down hard on admin-level access to the cluster.
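
If nothing else, I'm guessing something like this would put it back to the
documented default (untested, so consider it a sketch):

$ oc patch scc restricted -p '{"runAsUser": {"type": "MustRunAsRange"}}'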

On Mon, Feb 6, 2017 at 1:24 PM, Jordan Liggitt <jligg...@redhat.com> wrote:

> Can you include your `restricted` scc definition:
>
> oc get scc -o yaml
>
> It seems likely that the restricted scc definition was modified in your
> installation to not be as restrictive. By default, it sets runAsUser to
> MustRunAsRange.
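>
> For reference, the stanza in the default definition should read:
>
> runAsUser:
>   type: MustRunAsRange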
>
> On Mon, Feb 6, 2017 at 2:17 PM, Alex Wauck <alexwa...@exosite.com> wrote:
>
>> openshift.io/scc is "restricted" for app1-45-3blnd (not running as
>> root).  It also has that value for app5-36-2rfsq (running as root).
>>
>> On Mon, Feb 6, 2017 at 1:11 PM, Clayton Coleman <ccole...@redhat.com>
>> wrote:
>>
>>> Were those apps created in order?  Or at different times?  If you did
>>> the following order of actions:
>>>
>>> 1. create app2, app4
>>> 2. grant the default service account access to a higher level SCC
>>> 3. create app1, app3, app5, and app6
>>>
>>> Then this would be what I would expect.  Can you look at the annotations
>>> of pod app1-45-3blnd and see what the value of "openshift.io/scc" is?
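>>>
>>> Something like this should show it:
>>>
>>> $ oc -n some-project get pod app1-45-3blnd -o yaml | grep "openshift.io/scc"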
>>>
>>>
>>> On Mon, Feb 6, 2017 at 1:57 PM, Alex Wauck <alexwa...@exosite.com>
>>> wrote:
>>>
>>>> OK, this just got a lot more interesting:
>>>>
>>>> $ oc -n some-project exec app1-45-3blnd -- id
>>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),1000370000
>>>> $ oc -n some-project exec app2-18-q2fwm -- id
>>>> uid=1000370000 gid=0(root) groups=1000370000
>>>> $ oc -n some-project exec app3-10-lhato -- id
>>>> uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video),1000370000
>>>> $ oc -n some-project exec app4-16-dl2r7 -- id
>>>> uid=1000370000 gid=0(root) groups=1000370000
>>>> $ oc -n some-project exec app5-36-2rfsq -- id
>>>> uid=0(root) gid=0(root) groups=0(root),1000370000
>>>> $ oc -n some-project exec app6-15-078fd -- id
>>>> uid=0(root) gid=0(root) groups=0(root),1000370000
>>>>
>>>> All of these pods are running on the same node, and as you can see,
>>>> they are in the same project.  Yet, some are running as root and some are
>>>> not.  How weird is that?
>>>>
>>>> On Mon, Feb 6, 2017 at 12:49 PM, Alex Wauck <alexwa...@exosite.com>
>>>> wrote:
>>>>
>>>>> $ oc export -n some-project pod/good-pod | grep serviceAccount
>>>>>   serviceAccount: default
>>>>>   serviceAccountName: default
>>>>> $ oc export -n some-project pod/bad-pod | grep serviceAccount
>>>>>   serviceAccount: default
>>>>>   serviceAccountName: default
>>>>>
>>>>> Same serviceAccountName.  This problem seems to happen with any pod
>>>>> from any project that happens to run on these newer nodes.  I examined the
>>>>> output of `oc describe scc`, and I did not find any unexpected access to
>>>>> elevated privileges for a default serviceaccount.  The project where I'm
>>>>> currently seeing the problem is not mentioned at all.  Also, I've seen the
>>>>> problem happen with pods that are managed by the same replication
>>>>> controller.
>>>>>
>>>>> On Mon, Feb 6, 2017 at 12:46 PM, Clayton Coleman <ccole...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Adding the list back
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Clayton Coleman <ccole...@redhat.com>
>>>>>> Date: Mon, Feb 6, 2017 at 1:42 PM
>>>>>> Subject: Re: Pods randomly running as root
>>>>>> To: Alex Wauck <alexwa...@exosite.com>
>>>>>> Cc: users <us...@redhat.com>
>>>>>>
>>>>>>
>>>>>> Do the pods running as root and the ones that aren't have the same
>>>>>> serviceAccountName field, or different ones?  If different, you may have
>>>>>> granted the service account access to a higher role - defaulting is
>>>>>> determined by the SCCs that a service account can access, so an
>>>>>> admin-level service account will run as root by default unless you
>>>>>> specify that you don't want that.
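>>>>>>
>>>>>> One quick (if crude) way to check which SCCs name a given service
>>>>>> account is to grep the definitions, e.g.:
>>>>>>
>>>>>> $ oc get scc -o yaml | grep -n "system:serviceaccount:some-project:default"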
>>>>>>
>>>>>> On Mon, Feb 6, 2017 at 1:37 PM, Alex Wauck <alexwa...@exosite.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I'm looking at two nodes where one has the problem and the other
>>>>>>> doesn't, and I have confirmed that their node-config.yaml is the same
>>>>>>> for both (modulo IP addresses).  The generated kubeconfigs for these
>>>>>>> nodes on the master are also the same (modulo IP addresses and
>>>>>>> keys/certs).
>>>>>>>
>>>>>>> On Mon, Feb 6, 2017 at 10:46 AM, Alex Wauck <alexwa...@exosite.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Oh, wait.  I was looking at the wrong section.  The non-root pod has
>>>>>>>> a runAsUser attribute, but the root pod doesn't!
>>>>>>>>
>>>>>>>> On Mon, Feb 6, 2017 at 10:44 AM, Alex Wauck <alexwa...@exosite.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> A pod that IS running as root has this:
>>>>>>>>>
>>>>>>>>>   securityContext:
>>>>>>>>>     fsGroup: 1000370000
>>>>>>>>>     seLinuxOptions:
>>>>>>>>>       level: s0:c19,c14
>>>>>>>>>
>>>>>>>>> Another pod in the same project that is NOT running as root has
>>>>>>>>> the exact same securityContext section.
>>>>>>>>>
>>>>>>>>> On Mon, Feb 6, 2017 at 10:25 AM, Clayton Coleman <
>>>>>>>>> ccole...@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>>> Do the pods themselves have a UID set on them?  Each pod should
>>>>>>>>>> have the container "securityContext" field set and have an
>>>>>>>>>> explicit user ID value set.
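>>>>>>>>>>
>>>>>>>>>> For example, something like this on each container (container name
>>>>>>>>>> and UID are illustrative; the UID comes from the namespace's range):
>>>>>>>>>>
>>>>>>>>>>   containers:
>>>>>>>>>>   - name: app
>>>>>>>>>>     securityContext:
>>>>>>>>>>       runAsUser: 1000370000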
>>>>>>>>>>
>>>>>>>>>> On Mon, Feb 6, 2017 at 11:23 AM, Alex Wauck <
>>>>>>>>>> alexwa...@exosite.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> These are completely normal app containers.  They are managed by
>>>>>>>>>>> deploy configs.  Whether they run as root or not seems to depend
>>>>>>>>>>> on which node they run on: the older nodes seem to run pods as
>>>>>>>>>>> random UIDs, while the newer ones run as root.  Our older nodes
>>>>>>>>>>> have docker-selinux-1.10.3 installed, while the newer ones do not.
>>>>>>>>>>> They only have docker-selinux-1.9.1 available, since the 1.10.3
>>>>>>>>>>> package seems to have been removed from the CentOS extras repo.
>>>>>>>>>>>
>>>>>>>>>>> We are running OpenShift 1.2.1, since I haven't had time to
>>>>>>>>>>> upgrade it.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Feb 6, 2017 at 8:31 AM, Clayton Coleman <
>>>>>>>>>>> ccole...@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Are you running them directly (launching a pod)?  Or running
>>>>>>>>>>>> them under another controller resource?
>>>>>>>>>>>>
>>>>>>>>>>>> On Feb 6, 2017, at 2:00 AM, Alex Wauck <alexwa...@exosite.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Recently, I began to notice that some of my pods on OpenShift
>>>>>>>>>>>> run as root instead of a random UID.  There does not seem to be 
>>>>>>>>>>>> any obvious
>>>>>>>>>>>> cause (e.g. SCC).  Any idea how this could happen or where to look 
>>>>>>>>>>>> for
>>>>>>>>>>>> clues?
>>>>>>>>>>>>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com*

Making Machines More Human.
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
