Re: Custom SCC assigned to wrong pods

2018-06-18 Thread Jordan Liggitt
Redeploying the application creates new pods.

Since you removed the part of your custom SCC that allowed it to apply to
your pods, those new pods were once again subject to the restricted policy.
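
For reference, the grant section of a custom SCC scoped to a single service account would look something like this (a sketch using the foo-sa / foo names from this thread):

```yaml
# Sketch: who may use this custom SCC once the system:authenticated
# group has been removed. Only pods run by the foo-sa service account
# in the foo project are considered for it; all other pods fall back
# to the default SCCs (e.g. restricted).
users:
- system:serviceaccount:foo:foo-sa
groups: []
```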

On Jun 18, 2018, at 6:12 PM, Daniel Comnea  wrote:

Hi Jordan,

Reviving the thread on the custom SCC with another question, if you don't
mind:

After I removed the

groups:
- system:authenticated

from my custom SCC, I went ahead and did the following:

1) Created the Foo project
2) Created my custom SCC (which I shared in my previous email)
3) Deployed the app pods
4) Upgraded OpenShift to 3.6.1; the pods started to crash because they were
assigned the default restricted SCC instead of the custom SCC previously applied.

The docs say quite clearly that only the default SCCs will be reset to their
initial state, so I was expecting the pods to pick up the custom SCC even if
they get bounced during the upgrade.
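
One way to confirm which SCC each pod was actually admitted under is the openshift.io/scc annotation that the admission controller stamps on pods. A crashing pod here would presumably show something like this (a sketch; the exact pod name and manifest layout depend on the deployment):

```yaml
# Excerpt of `oc get pod <pod-name> -o yaml`:
metadata:
  annotations:
    openshift.io/scc: restricted  # rather than the custom SCC's name
```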

Any thoughts?

Thanks!

On Wed, May 23, 2018 at 11:18 PM, Daniel Comnea 
wrote:

> I see the rationale; thank you for the quick response and the insight.
>
> On Wed, May 23, 2018 at 10:59 PM, Jordan Liggitt 
> wrote:
>
>> By making your SCC available to all authenticated users, it gets added to
>> the set considered for every pod run by every service account:
>>
>> users:
>> - system:serviceaccount:foo:foo-sa
>> groups:
>> - system:authenticated
>>
>>
>> If you want to limit it to just your foo-sa service account, you should
>> remove the system:authenticated group from the SCC.
>>
>>
>>
>> On Wed, May 23, 2018 at 5:54 PM, Daniel Comnea 
>> wrote:
>>
>>> Hi,
>>>
>>> I'm running Origin 3.7.0 and I've created a custom SCC [1] which is
>>> referenced by different Deployment objects via
>>> serviceAccountName: foo-scc-restricted.
>>>
>>> Now the odd thing that I cannot explain is why the GlusterFS pods [2],
>>> which don't reference the newly created serviceAccountName [3], end up
>>> with the new custom SCC applied [4]... is that normal, or is it a bug?
>>>
>>>
>>>
>>> Cheers,
>>> Dani
>>>
>>> [1] https://gist.github.com/DanyC97/56070e3f1523e31c1ad96980df6d7fe5
>>> [2] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918
>>> [3] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918#file-glusterfs-deployment-yml-L65
>>> [4] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918#file-glusterfs-deployment-yml-L11
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>

