Re: containers with host volumes from controllers

2016-05-18 Thread Clayton Coleman
The node is running as a user, but every pod / rc has to be created in
a namespace (or project, which is the same thing but with some
additional controls).  When you create an RC with your credentials, you are
either creating it in the "default" namespace (in which case you need to
grant system:serviceaccount:default:default access to hostmount-anyuid) or in
whatever namespace your client currently defaults to.  If you run
"oc project", which project does it say you are in?

On Wed, May 18, 2016 at 8:16 PM, Alan Jones  wrote:
> I now reproduced the issue with OpenShift 3.2 on RHEL 7, as opposed to my
> few-week-old Origin on CentOS.
> Unfortunately, my magic command isn't working.
> Here is my procedure:
> 1) Create node certs with `oadm create-node-config`
> 2) Use these certs from said node to create a replication controller for a
> container that requires a host mount.
> 3) See event with 'hostPath volumes are not allowed to be used'
> Note, this process works with standard Kubernetes, so navigating the
> OpenShift authentication & permissions is what I'm trying to accomplish.
> Also note that there is no *project* specified in this procedure; the node
> being certified belongs to system:node; should I use that?
> I feel like I'm flying blind because there is no feedback:
> 1) The command to add privileges doesn't verify that the project or user
> exists.
> 2) The failure doesn't tell me which project/user was attempting to do the
> unpermitted task.
> Alan
> [root@alan-lnx ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> [root@alan-lnx ~]# openshift version
> openshift v3.2.0.20
> kubernetes v1.2.0-36-g4a3f9c5
> etcd 2.2.5
>
>
> On Wed, May 18, 2016 at 3:08 PM, Alan Jones  wrote:
>>
>> I think I'm making progress:
>> oadm policy add-scc-to-user hostmount-anyuid
>> system:serviceaccount:openshift-infra:default
>> Now when I submit the replica set I get a different mount error that I
>> think I understand.
>> Note, the context I'm submitting the request in is using the node host
>> certs under /openshift.local/config/ to the API server.
>> There is no specified project.
>> Thank you!
>> Alan
>>
>> On Wed, May 18, 2016 at 2:48 PM, Clayton Coleman 
>> wrote:
>>>
>>>
>>>
>>> On May 18, 2016, at 5:26 PM, Alan Jones  wrote:
>>>
>>> > oadm policy ... -z default
>>> In the version of openshift origin I'm using the oadm command doesn't
>>> take '-z'.
>>> Can you fill in the dot, dot, dot for me?
>>> I'm trying to grant permission for host volume access for a pod created
>>> by the replication controller which was submitted with node credentials to
>>> the API server.
>>> Here is my latest failed attempt to try to follow your advice:
>>> oadm policy add-scc-to-group hostmount-anyuid
>>> system:serviceaccount:default
>>> Again, this would be much easier if I could get logs for what group and
>>> user it is evaluating when it fails.
>>> Alan
>>>
>>>
>>> system:serviceaccount:NAMESPACE:default
>>>
>>> Since policy is global, you have to identify which namespace/project
>>> contains the "default" service account (service accounts are scoped to a
>>> project).
>>>
>>>
>>> On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman 
>>> wrote:

 You need to grant the permission to a service account for the pod (which
 is "default" if you don't fill in the field).  The replication controller's
 SA is not checked.

 oadm policy ... -z default

 On May 17, 2016, at 8:39 PM, Alan Jones  wrote:

 I tried that:
 oadm policy add-scc-to-user hostmount-anyuid
 system:serviceaccount:openshift-infra:replication-controller
 ... and I still get the error.
 Is there any way to get the user name/group that fails authentication?
 Alan

 On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman 
 wrote:
>
> anyuid doesn't grant hostPath, since that's a much more dangerous
> permission.  You want to grant hostmount-anyuid
>
> On Tue, May 17, 2016 at 11:44 AM, Alan Jones 
> wrote:
> > I have several containers that we run using K8 that require host
> > volume
> > access.
> > For example, I have a container called "evdispatch-v1" that I'm
> > trying to
> > launch in a replication controller and get the below error.
> > Following an example from "Enable Dockerhub Images that Require Root"
> > in
> >
> > (https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile)
> > I tried:
> > oadm policy add-scc-to-user anyuid
> > system:serviceaccount:openshift-infra:replication-controller
> > But still get the error.
> > Do you know what I need to do?
> > Who knows more about this stuff?
> > Alan
> > ---
> > WARNING   evdispatch-v1   49e7ac4e-1bae-11e6-88c0-080027767789

Re: containers with host volumes from controllers

2016-05-18 Thread Alan Jones
I now reproduced the issue with OpenShift 3.2 on RHEL 7, as opposed to my
few-week-old Origin on CentOS.
Unfortunately, my magic command isn't working.
Here is my procedure:
1) Create node certs with `oadm create-node-config`
2) Use these certs from said node to create a replication controller for a
container that requires a host mount.
3) See event with 'hostPath volumes are not allowed to be used'
Note, this process works with standard Kubernetes, so navigating the
OpenShift authentication & permissions is what I'm trying to accomplish.
Also note that there is no *project* specified in this procedure; the node
being certified belongs to system:node; should I use that?
I feel like I'm flying blind because there is no feedback:
1) The command to add privileges doesn't verify that the project or user
exists.
2) The failure doesn't tell me which project/user was attempting to do the
unpermitted task.
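
One way to get some of that missing feedback (a sketch; the kubeconfig path is
a placeholder for whatever `oadm create-node-config` generated):

# print the user the node certificates authenticate as
oc whoami --config=/path/to/node.kubeconfig
# as cluster-admin, confirm which users/service accounts the SCC actually lists
oc describe scc hostmount-anyuid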
Alan
[root@alan-lnx ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[root@alan-lnx ~]# openshift version
openshift v3.2.0.20
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5


On Wed, May 18, 2016 at 3:08 PM, Alan Jones  wrote:

> I think I'm making progress:
> oadm policy add-scc-to-user hostmount-anyuid
> system:serviceaccount:openshift-infra:default
> Now when I submit the replica set I get a different mount error that I
> think I understand.
> Note, the context I'm submitting the request in is using the node host
> certs under /openshift.local/config/ to the API server.
> There is no specified project.
> Thank you!
> Alan
>
> On Wed, May 18, 2016 at 2:48 PM, Clayton Coleman 
> wrote:
>
>>
>>
>> On May 18, 2016, at 5:26 PM, Alan Jones  wrote:
>>
>> > oadm policy ... -z default
>> In the version of openshift origin I'm using the oadm command doesn't
>> take '-z'.
>> Can you fill in the dot, dot, dot for me?
>> I'm trying to grant permission for host volume access for a pod created
>> by the replication controller which was submitted with node credentials to
>> the API server.
>> Here is my latest failed attempt to try to follow your advice:
>> oadm policy add-scc-to-group hostmount-anyuid
>> system:serviceaccount:default
>> Again, this would be much easier if I could get logs for what group and
>> user it is evaluating when it fails.
>> Alan
>>
>>
>> system:serviceaccount:NAMESPACE:default
>>
>> Since policy is global, you have to identify which namespace/project
>> contains the "default" service account (service accounts are scoped to a
>> project).
>>
>>
>> On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman 
>> wrote:
>>
>>> You need to grant the permission to a service account for the pod (which
>>> is "default" if you don't fill in the field).  The replication controller's
>>> SA is not checked.
>>>
>>> oadm policy ... -z default
>>>
>>> On May 17, 2016, at 8:39 PM, Alan Jones  wrote:
>>>
>>> I tried that:
>>> oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:
>>> openshift-infra:replication-controller
>>> ... and I still get the error.
>>> Is there any way to get the user name/group that fails authentication?
>>> Alan
>>>
>>> On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman 
>>> wrote:
>>>
 anyuid doesn't grant hostPath, since that's a much more dangerous
 permission.  You want to grant hostmount-anyuid

 On Tue, May 17, 2016 at 11:44 AM, Alan Jones 
 wrote:
 > I have several containers that we run using K8 that require host
 volume
 > access.
 > For example, I have a container called "evdispatch-v1" that I'm
 trying to
 > launch in a replication controller and get the below error.
 > Following an example from "Enable Dockerhub Images that Require Root"
 in
 > (
 https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
 )
 > I tried:
 > oadm policy add-scc-to-user anyuid
 > system:serviceaccount:openshift-infra:replication-controller
 > But still get the error.
 > Do you know what I need to do?
 > Who knows more about this stuff?
 > Alan
 > ---
 > WARNING   evdispatch-v1   49e7ac4e-1bae-11e6-88c0-080027767789
 > ReplicationController   replication-controller   FailedCreate
 > Error creating: pods "evdispatch-v1-" is forbidden: unable to validate
 > against any security context constraint:
 > [spec.containers[0].securityContext.volumes[0]: Invalid value:
 "hostPath":
 > hostPath volumes are not allowed to be used
 > spec.containers[0].securityContext.volumes[0]: Invalid value:
 "hostPath":
 > hostPath volumes are not allowed to be used]
 >
 > ___
 > users mailing list
 > users@lists.openshift.redhat.com
 > 

Re: containers with host volumes from controllers

2016-05-18 Thread Alan Jones
I think I'm making progress:
oadm policy add-scc-to-user hostmount-anyuid
system:serviceaccount:openshift-infra:default
Now when I submit the replica set I get a different mount error that I
think I understand.
Note, the context I'm submitting the request in is using the node host
certs under /openshift.local/config/ to the API server.
There is no specified project.
Thank you!
Alan

On Wed, May 18, 2016 at 2:48 PM, Clayton Coleman 
wrote:

>
>
> On May 18, 2016, at 5:26 PM, Alan Jones  wrote:
>
> > oadm policy ... -z default
> In the version of openshift origin I'm using the oadm command doesn't take
> '-z'.
> Can you fill in the dot, dot, dot for me?
> I'm trying to grant permission for host volume access for a pod created by
> the replication controller which was submitted with node credentials to the
> API server.
> Here is my latest failed attempt to try to follow your advice:
> oadm policy add-scc-to-group hostmount-anyuid system:serviceaccount:default
> Again, this would be much easier if I could get logs for what group and
> user it is evaluating when it fails.
> Alan
>
>
> system:serviceaccount:NAMESPACE:default
>
> Since policy is global, you have to identify which namespace/project
> contains the "default" service account (service accounts are scoped to a
> project).
>
>
> On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman 
> wrote:
>
>> You need to grant the permission to a service account for the pod (which
>> is "default" if you don't fill in the field).  The replication controller's
>> SA is not checked.
>>
>> oadm policy ... -z default
>>
>> On May 17, 2016, at 8:39 PM, Alan Jones  wrote:
>>
>> I tried that:
>> oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:
>> openshift-infra:replication-controller
>> ... and I still get the error.
>> Is there any way to get the user name/group that fails authentication?
>> Alan
>>
>> On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman 
>> wrote:
>>
>>> anyuid doesn't grant hostPath, since that's a much more dangerous
>>> permission.  You want to grant hostmount-anyuid
>>>
>>> On Tue, May 17, 2016 at 11:44 AM, Alan Jones 
>>> wrote:
>>> > I have several containers that we run using K8 that require host volume
>>> > access.
>>> > For example, I have a container called "evdispatch-v1" that I'm trying
>>> to
>>> > launch in a replication controller and get the below error.
>>> > Following an example from "Enable Dockerhub Images that Require Root"
>>> in
>>> > (
>>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
>>> )
>>> > I tried:
>>> > oadm policy add-scc-to-user anyuid
>>> > system:serviceaccount:openshift-infra:replication-controller
>>> > But still get the error.
>>> > Do you know what I need to do?
>>> > Who knows more about this stuff?
>>> > Alan
>>> > ---
>>> > WARNING   evdispatch-v1   49e7ac4e-1bae-11e6-88c0-080027767789
>>> > ReplicationController   replication-controller   FailedCreate
>>> > Error creating: pods "evdispatch-v1-" is forbidden: unable to validate
>>> > against any security context constraint:
>>> > [spec.containers[0].securityContext.volumes[0]: Invalid value:
>>> "hostPath":
>>> > hostPath volumes are not allowed to be used
>>> > spec.containers[0].securityContext.volumes[0]: Invalid value:
>>> "hostPath":
>>> > hostPath volumes are not allowed to be used]
>>> >
>>> > ___
>>> > users mailing list
>>> > users@lists.openshift.redhat.com
>>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>> >
>>>
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: containers with host volumes from controllers

2016-05-18 Thread Clayton Coleman
On May 18, 2016, at 5:26 PM, Alan Jones  wrote:

> oadm policy ... -z default
In the version of openshift origin I'm using the oadm command doesn't take
'-z'.
Can you fill in the dot, dot, dot for me?
I'm trying to grant permission for host volume access for a pod created by
the replication controller which was submitted with node credentials to the
API server.
Here is my latest failed attempt to try to follow your advice:
oadm policy add-scc-to-group hostmount-anyuid system:serviceaccount:default
Again, this would be much easier if I could get logs for what group and
user it is evaluating when it fails.
Alan


system:serviceaccount:NAMESPACE:default

Since policy is global, you have to identify which namespace/project
contains the "default" service account (service accounts are scoped to a
project).
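
For example, if the RC lives in a project named "myproject" (a placeholder),
the fully qualified grant would be:

oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:myproject:default

which is equivalent to running "oadm policy add-scc-to-user hostmount-anyuid
-z default" while "myproject" is the current project.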


On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman 
wrote:

> You need to grant the permission to a service account for the pod (which
> is "default" if you don't fill in the field).  The replication controller's
> SA is not checked.
>
> oadm policy ... -z default
>
> On May 17, 2016, at 8:39 PM, Alan Jones  wrote:
>
> I tried that:
> oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:
> openshift-infra:replication-controller
> ... and I still get the error.
> Is there any way to get the user name/group that fails authentication?
> Alan
>
> On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman 
> wrote:
>
>> anyuid doesn't grant hostPath, since that's a much more dangerous
>> permission.  You want to grant hostmount-anyuid
>>
>> On Tue, May 17, 2016 at 11:44 AM, Alan Jones  wrote:
>> > I have several containers that we run using K8 that require host volume
>> > access.
>> > For example, I have a container called "evdispatch-v1" that I'm trying
>> to
>> > launch in a replication controller and get the below error.
>> > Following an example from "Enable Dockerhub Images that Require Root" in
>> > (
>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
>> )
>> > I tried:
>> > oadm policy add-scc-to-user anyuid
>> > system:serviceaccount:openshift-infra:replication-controller
>> > But still get the error.
>> > Do you know what I need to do?
>> > Who knows more about this stuff?
>> > Alan
>> > ---
>> > WARNING   evdispatch-v1   49e7ac4e-1bae-11e6-88c0-080027767789
>> > ReplicationController replication-controller   FailedCreate
>> > Error creating: pods "evdispatch-v1-" is forbidden: unable to validate
>> > against any security context constraint:
>> > [spec.containers[0].securityContext.volumes[0]: Invalid value:
>> "hostPath":
>> > hostPath volumes are not allowed to be used
>> > spec.containers[0].securityContext.volumes[0]: Invalid value:
>> "hostPath":
>> > hostPath volumes are not allowed to be used]
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Seems privileged mode cannot be set in a template

2016-05-18 Thread Luis Pabón
Yep, enabling 'default' to run privileged as you described worked!

Thanks Clayton,

- Luis

- Original Message -
From: "Luis Pabón" 
To: "Clayton Coleman" 
Cc: "users" , "Erin Boyd" , 
"Humble Chirammal" 
Sent: Wednesday, May 18, 2016 3:38:18 PM
Subject: Re: Seems privileged mode cannot be set in a template

I think I am getting it now.

So when I run:
$ oc get serviceaccounts
NAME       SECRETS   AGE
builder    2         4h
default    2         4h
deployer   2         4h

These accounts are the ones used for the replica deployment as shown in 
https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html.

What I would need to do is create/enable a service account to run privileged 
replication/deployment, right?

- Luis

- Original Message -
From: "Clayton Coleman" 
To: "Luis Pabón" 
Cc: "users" , "Erin Boyd" , 
"Humble Chirammal" 
Sent: Wednesday, May 18, 2016 3:20:28 PM
Subject: Re: Seems privileged mode cannot be set in a template

A service account is not a user.  A service account is its own
concept.  A service account already exists in each namespace - in this
case, if you run "oc get serviceaccounts" you'll see three (default,
builder, and deployer).  The pods that are created have a
spec.serviceAccountName field which defaults to "default", and that is
what is used to determine what the pod can do.

On Wed, May 18, 2016 at 3:09 PM, Luis Pabón  wrote:
> Thanks Clayton, but that did not work.  These are the steps I took:
>
> 1. Create a user called test-admin:
> oadm policy add-cluster-role-to-user cluster-admin test-admin \
> --config=openshift.local.config/master/admin.kubeconfig
>
> 2. Add privileged settings:
> oc edit scc privileged
>
> 3. Add test-admin
> users:
> - system:serviceaccount:openshift-infra:build-controller
> - test-admin
>
> 4. Create a pod with privileged mode -- Works
> 5. Add a template which looks similar to the pod definition
> 6. Deploy a container from the template -- Doesn't deploy
>
> 7. Run:
> oadm policy add-scc-to-user privileged -z test-admin
>
> 8. This added the line "- system:serviceaccount:test:test-admin" to scc 
> privileged
> 9. Deploy a container from the template -- Doesn't deploy
>
>
> Logs:
> $ oc get pods
> NAME              READY     STATUS    RESTARTS   AGE
> heketi-1-deploy   0/1       Error     0          8m
>
> $ oc logs heketi-1-deploy
> The output of the 'deploy' container is:
> I0518 18:59:49.026072   1 deployer.go:199] Deploying test/heketi-1 for 
> the first time (replicas: 1)
> I0518 18:59:49.029593   1 recreate.go:126] Scaling test/heketi-1 to 1 
> before performing acceptance check
> F0518 19:01:50.134899   1 deployer.go:69] couldn't scale test/heketi-1 to 
> 1: timed out waiting for the condition
>
>
> Seems that it is not working.  Maybe I have another configuration that I need
> to set up?
>
>
>
> - Original Message -
> From: "Clayton Coleman" 
> To: "Luis Pabón" 
> Cc: "users" , "Erin Boyd" 
> , "Humble Chirammal" 
> Sent: Wednesday, May 18, 2016 2:47:04 PM
> Subject: Re: Seems privileged mode cannot be set in a template
>
> You have to grant the privileged SCC to the service account in the
> namespace - if you're running as cluster-admin, you can create
> privileged pods, but a regular service account cannot unless you add it:
>
> oadm policy add-scc-to-user privileged -z default
>
> where "default" is the service account that is used if you don't specify one.
>
>
> On Wed, May 18, 2016 at 2:31 PM, Luis Pabón  wrote:
>>
>>
>> Hi all,
>>   I am able to easily deploy a POD with privileged mode enabled in my 
>> openshift cluster.  I am also able to deploy a non-privileged application 
>> from a service/deploymentConfig template.  But, I am unable to create a 
>> template which deploys a POD with privileged mode enabled.  Is this 
>> possible?  Here is a sample template:
>>
>> {
>>   "kind": "Template",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "heketi",
>> "annotations": {
>>   "description": "Heketi application",
>>   "tags": "glusterfs,heketi"
>> }
>>   },
>>   "labels": {
>> "template": "heketi"
>>   },
>>   "objects": [
>> {
>>   "kind": "Service",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}",
>> "annotations": {
>>   "description": "Exposes Heketi service"
>> }
>>   },
>>   "spec": {
>> "ports": [
>>   {
>> "name": "rest-api",
>> "port": 8080,
>> "targetPort": 8080
>>   }
>> ],
>> "selector": {
>>   "name": 

Re: Seems privileged mode cannot be set in a template

2016-05-18 Thread Luis Pabón
I think I am getting it now.

So when I run:
$ oc get serviceaccounts
NAME       SECRETS   AGE
builder    2         4h
default    2         4h
deployer   2         4h

These accounts are the ones used for the replica deployment as shown in 
https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html.

What I would need to do is create/enable a service account to run privileged 
replication/deployment, right?
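
A rough sketch of that approach (the account name "heketi-sa" is hypothetical):

# create a dedicated service account in the current project
oc create -f - <<EOF
{ "apiVersion": "v1", "kind": "ServiceAccount", "metadata": { "name": "heketi-sa" } }
EOF
# allow it to run privileged pods
oadm policy add-scc-to-user privileged -z heketi-sa
# then point the pod template at it, e.g. "serviceAccountName": "heketi-sa"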

- Luis

- Original Message -
From: "Clayton Coleman" 
To: "Luis Pabón" 
Cc: "users" , "Erin Boyd" , 
"Humble Chirammal" 
Sent: Wednesday, May 18, 2016 3:20:28 PM
Subject: Re: Seems privileged mode cannot be set in a template

A service account is not a user.  A service account is its own
concept.  A service account already exists in each namespace - in this
case, if you run "oc get serviceaccounts" you'll see three (default,
builder, and deployer).  The pods that are created have a
spec.serviceAccountName field which defaults to "default", and that is
what is used to determine what the pod can do.

On Wed, May 18, 2016 at 3:09 PM, Luis Pabón  wrote:
> Thanks Clayton, but that did not work.  These are the steps I took:
>
> 1. Create a user called test-admin:
> oadm policy add-cluster-role-to-user cluster-admin test-admin \
> --config=openshift.local.config/master/admin.kubeconfig
>
> 2. Add privileged settings:
> oc edit scc privileged
>
> 3. Add test-admin
> users:
> - system:serviceaccount:openshift-infra:build-controller
> - test-admin
>
> 4. Create a pod with privileged mode -- Works
> 5. Add a template which looks similar to the pod definition
> 6. Deploy a container from the template -- Doesn't deploy
>
> 7. Run:
> oadm policy add-scc-to-user privileged -z test-admin
>
> 8. This added the line "- system:serviceaccount:test:test-admin" to scc 
> privileged
> 9. Deploy a container from the template -- Doesn't deploy
>
>
> Logs:
> $ oc get pods
> NAME              READY     STATUS    RESTARTS   AGE
> heketi-1-deploy   0/1       Error     0          8m
>
> $ oc logs heketi-1-deploy
> The output of the 'deploy' container is:
> I0518 18:59:49.026072   1 deployer.go:199] Deploying test/heketi-1 for 
> the first time (replicas: 1)
> I0518 18:59:49.029593   1 recreate.go:126] Scaling test/heketi-1 to 1 
> before performing acceptance check
> F0518 19:01:50.134899   1 deployer.go:69] couldn't scale test/heketi-1 to 
> 1: timed out waiting for the condition
>
>
> Seems that it is not working.  Maybe I have another configuration that I need
> to set up?
>
>
>
> - Original Message -
> From: "Clayton Coleman" 
> To: "Luis Pabón" 
> Cc: "users" , "Erin Boyd" 
> , "Humble Chirammal" 
> Sent: Wednesday, May 18, 2016 2:47:04 PM
> Subject: Re: Seems privileged mode cannot be set in a template
>
> You have to grant the privileged SCC to the service account in the
> namespace - if you're running as cluster-admin, you can create
> privileged pods, but a regular service account cannot unless you add it:
>
> oadm policy add-scc-to-user privileged -z default
>
> where "default" is the service account that is used if you don't specify one.
>
>
> On Wed, May 18, 2016 at 2:31 PM, Luis Pabón  wrote:
>>
>>
>> Hi all,
>>   I am able to easily deploy a POD with privileged mode enabled in my 
>> openshift cluster.  I am also able to deploy a non-privileged application 
>> from a service/deploymentConfig template.  But, I am unable to create a 
>> template which deploys a POD with privileged mode enabled.  Is this 
>> possible?  Here is a sample template:
>>
>> {
>>   "kind": "Template",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "heketi",
>> "annotations": {
>>   "description": "Heketi application",
>>   "tags": "glusterfs,heketi"
>> }
>>   },
>>   "labels": {
>> "template": "heketi"
>>   },
>>   "objects": [
>> {
>>   "kind": "Service",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}",
>> "annotations": {
>>   "description": "Exposes Heketi service"
>> }
>>   },
>>   "spec": {
>> "ports": [
>>   {
>> "name": "rest-api",
>> "port": 8080,
>> "targetPort": 8080
>>   }
>> ],
>> "selector": {
>>   "name": "${NAME}"
>> }
>>   }
>> },
>> {
>>   "kind": "DeploymentConfig",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}",
>> "annotations": {
>>   "description": "Defines how to deploy Heketi"
>> }
>>   },
>>   "spec": {
>> "replicas": 1,
>> "selector": {
>>   "name": "${NAME}"
>> },
>> "template": {
>>   

Re: Seems privileged mode cannot be set in a template

2016-05-18 Thread Clayton Coleman
A service account is not a user.  A service account is its own
concept.  A service account already exists in each namespace - in this
case, if you run "oc get serviceaccounts" you'll see three (default,
builder, and deployer).  The pods that are created have a
spec.serviceAccountName field which defaults to "default", and that is
what is used to determine what the pod can do.
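
For instance, to check which service account a pod actually ended up with (a
sketch, using the deployer pod name from the logs quoted below):

oc get pod heketi-1-deploy -o yaml | grep serviceAccount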

On Wed, May 18, 2016 at 3:09 PM, Luis Pabón  wrote:
> Thanks Clayton, but that did not work.  These are the steps I took:
>
> 1. Create a user called test-admin:
> oadm policy add-cluster-role-to-user cluster-admin test-admin \
> --config=openshift.local.config/master/admin.kubeconfig
>
> 2. Add privileged settings:
> oc edit scc privileged
>
> 3. Add test-admin
> users:
> - system:serviceaccount:openshift-infra:build-controller
> - test-admin
>
> 4. Create a pod with privileged mode -- Works
> 5. Add a template which looks similar to the pod definition
> 6. Deploy a container from the template -- Doesn't deploy
>
> 7. Run:
> oadm policy add-scc-to-user privileged -z test-admin
>
> 8. This added the line "- system:serviceaccount:test:test-admin" to scc 
> privileged
> 9. Deploy a container from the template -- Doesn't deploy
>
>
> Logs:
> $ oc get pods
> NAME              READY     STATUS    RESTARTS   AGE
> heketi-1-deploy   0/1       Error     0          8m
>
> $ oc logs heketi-1-deploy
> The output of the 'deploy' container is:
> I0518 18:59:49.026072   1 deployer.go:199] Deploying test/heketi-1 for 
> the first time (replicas: 1)
> I0518 18:59:49.029593   1 recreate.go:126] Scaling test/heketi-1 to 1 
> before performing acceptance check
> F0518 19:01:50.134899   1 deployer.go:69] couldn't scale test/heketi-1 to 
> 1: timed out waiting for the condition
>
>
> Seems that it is not working.  Maybe I have another configuration that I need
> to set up?
>
>
>
> - Original Message -
> From: "Clayton Coleman" 
> To: "Luis Pabón" 
> Cc: "users" , "Erin Boyd" 
> , "Humble Chirammal" 
> Sent: Wednesday, May 18, 2016 2:47:04 PM
> Subject: Re: Seems privileged mode cannot be set in a template
>
> You have to grant the privileged SCC to the service account in the
> namespace - if you're running as cluster-admin, you can create
> privileged pods, but a regular service account cannot unless you add it:
>
> oadm policy add-scc-to-user privileged -z default
>
> where "default" is the service account that is used if you don't specify one.
>
>
> On Wed, May 18, 2016 at 2:31 PM, Luis Pabón  wrote:
>>
>>
>> Hi all,
>>   I am able to easily deploy a POD with privileged mode enabled in my 
>> openshift cluster.  I am also able to deploy a non-privileged application 
>> from a service/deploymentConfig template.  But, I am unable to create a 
>> template which deploys a POD with privileged mode enabled.  Is this 
>> possible?  Here is a sample template:
>>
>> {
>>   "kind": "Template",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "heketi",
>> "annotations": {
>>   "description": "Heketi application",
>>   "tags": "glusterfs,heketi"
>> }
>>   },
>>   "labels": {
>> "template": "heketi"
>>   },
>>   "objects": [
>> {
>>   "kind": "Service",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}",
>> "annotations": {
>>   "description": "Exposes Heketi service"
>> }
>>   },
>>   "spec": {
>> "ports": [
>>   {
>> "name": "rest-api",
>> "port": 8080,
>> "targetPort": 8080
>>   }
>> ],
>> "selector": {
>>   "name": "${NAME}"
>> }
>>   }
>> },
>> {
>>   "kind": "DeploymentConfig",
>>   "apiVersion": "v1",
>>   "metadata": {
>> "name": "${NAME}",
>> "annotations": {
>>   "description": "Defines how to deploy Heketi"
>> }
>>   },
>>   "spec": {
>> "replicas": 1,
>> "selector": {
>>   "name": "${NAME}"
>> },
>> "template": {
>>   "metadata": {
>> "name": "${NAME}",
>> "labels": {
>>   "name": "${NAME}"
>> }
>>   },
>>   "triggers": [
>> {
>>   "type": "ConfigChange"
>> }
>>   ],
>>   "strategy": {
>> "type": "Rolling"
>>   },
>>   "spec": {
>> "containers": [
>>   {
>> "securityContext" : {
>>   "capabilities" : {},
>>   "privileged" : true
>> },
>> "name": "heketi",
>> "image": "heketi/heketi:dev",
>> "ports": [
>>   {
>> "containerPort": 8080
>>   }
>>

Re: Seems privileged mode cannot be set in a template

2016-05-18 Thread Clayton Coleman
You have to grant the privileged SCC to the service account in the
namespace - if you're running as cluster-admin, you can create
privileged pods, but a regular service account cannot unless you add it:

oadm policy add-scc-to-user privileged -z default

where "default" is the service account that is used if you don't specify one.


On Wed, May 18, 2016 at 2:31 PM, Luis Pabón  wrote:
>
>
> Hi all,
>   I am able to easily deploy a POD with privileged mode enabled in my 
> openshift cluster.  I am also able to deploy a non-privileged application 
> from a service/deploymentConfig template.  But, I am unable to create a 
> template which deploys a POD with privileged mode enabled.  Is this possible? 
>  Here is a sample template:
>
> {
>   "kind": "Template",
>   "apiVersion": "v1",
>   "metadata": {
> "name": "heketi",
> "annotations": {
>   "description": "Heketi application",
>   "tags": "glusterfs,heketi"
> }
>   },
>   "labels": {
> "template": "heketi"
>   },
>   "objects": [
> {
>   "kind": "Service",
>   "apiVersion": "v1",
>   "metadata": {
> "name": "${NAME}",
> "annotations": {
>   "description": "Exposes Heketi service"
> }
>   },
>   "spec": {
> "ports": [
>   {
> "name": "rest-api",
> "port": 8080,
> "targetPort": 8080
>   }
> ],
> "selector": {
>   "name": "${NAME}"
> }
>   }
> },
> {
>   "kind": "DeploymentConfig",
>   "apiVersion": "v1",
>   "metadata": {
> "name": "${NAME}",
> "annotations": {
>   "description": "Defines how to deploy Heketi"
> }
>   },
>   "spec": {
> "replicas": 1,
> "selector": {
>   "name": "${NAME}"
> },
> "template": {
>   "metadata": {
> "name": "${NAME}",
> "labels": {
>   "name": "${NAME}"
> }
>   },
>   "triggers": [
> {
>   "type": "ConfigChange"
> }
>   ],
>   "strategy": {
> "type": "Rolling"
>   },
>   "spec": {
> "containers": [
>   {
> "securityContext" : {
>   "capabilities" : {},
>   "privileged" : true
> },
> "name": "heketi",
> "image": "heketi/heketi:dev",
> "ports": [
>   {
> "containerPort": 8080
>   }
> ],
> "volumeMounts": [
>   {
> "name": "db",
> "mountPath": "/var/lib/heketi"
>   }
> ],
> "readinessProbe": {
>   "timeoutSeconds": 3,
>   "initialDelaySeconds": 3,
>   "httpGet": {
> "path": "/hello",
> "port": 8080
>   }
> },
> "livenessProbe": {
>   "timeoutSeconds": 3,
>   "initialDelaySeconds": 30,
>   "httpGet": {
> "path": "/hello",
> "port": 8080
>   }
> }
>   }
> ],
> "volumes": [
>   {
> "name": "db"
>   }
> ]
>   }
> }
>   }
> }
>   ],
>   "parameters": [
> {
>   "name": "NAME",
>   "displayName": "Name",
>   "description": "The name assigned to all of the frontend objects 
> defined in this template.",
>   "required": true,
>   "value": "heketi"
> }
>   ]
> }
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Seems privileged mode cannot be set in a template

2016-05-18 Thread Luis Pabón


Hi all,
  I am able to easily deploy a POD with privileged mode enabled in my openshift 
cluster.  I am also able to deploy a non-privileged application from a 
service/deploymentConfig template.  But, I am unable to create a template which 
deploys a POD with privileged mode enabled.  Is this possible?  Here is a 
sample template:

{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
"name": "heketi",
"annotations": {
  "description": "Heketi application",
  "tags": "glusterfs,heketi"
}
  },
  "labels": {
"template": "heketi"
  },
  "objects": [
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
"name": "${NAME}",
"annotations": {
  "description": "Exposes Heketi service"
}
  },
  "spec": {
"ports": [
  {
"name": "rest-api",
"port": 8080,
"targetPort": 8080
  }
],
"selector": {
  "name": "${NAME}"
}
  }
},
{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": {
"name": "${NAME}",
"annotations": {
  "description": "Defines how to deploy Heketi"
}
  },
  "spec": {
"replicas": 1,
"selector": {
  "name": "${NAME}"
},
"template": {
  "metadata": {
"name": "${NAME}",
"labels": {
  "name": "${NAME}"
}
  },
  "triggers": [
{
  "type": "ConfigChange"
}
  ],
  "strategy": {
"type": "Rolling"
  },
  "spec": {
"containers": [
  {
"securityContext" : {
  "capabilities" : {},
  "privileged" : true
},
"name": "heketi",
"image": "heketi/heketi:dev",
"ports": [
  {
"containerPort": 8080
  }
],
"volumeMounts": [
  {
"name": "db",
"mountPath": "/var/lib/heketi"
  }
],
"readinessProbe": {
  "timeoutSeconds": 3,
  "initialDelaySeconds": 3,
  "httpGet": {
"path": "/hello",
"port": 8080
  }
},
"livenessProbe": {
  "timeoutSeconds": 3,
  "initialDelaySeconds": 30,
  "httpGet": {
"path": "/hello",
"port": 8080
  }
}
  }
],
"volumes": [
  {
"name": "db"
  }
]
  }
}
  }
}
  ],
  "parameters": [
{
  "name": "NAME",
  "displayName": "Name",
  "description": "The name assigned to all of the frontend objects defined 
in this template.",
  "required": true,
  "value": "heketi"
}
  ]
}
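
A sketch of how the template above would be instantiated once the "default"
service account has been granted the privileged SCC (the file name is a
placeholder):

oc process -f heketi-template.json | oc create -f -
oc get pods   # the heketi pod should come up once the SCC grant is in place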

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: dns failures from build pods

2016-05-18 Thread James Falkner

After further investigation (My search skills clearly failed me last night):

https://github.com/openshift/origin/issues/4303

So it's sti that's doing the magic to make dns work inside the build 
container. Non-sti builds don't.


-James


Ben Parees 
May 18, 2016 at 10:27 AM


On Wed, May 18, 2016 at 10:14 AM, James Falkner  wrote:


Is there any reason (other than bugs :) ) that pods instantiated
to execute builds of Docker-based apps wouldn't be able to resolve
*.svc.cluster.local names? I have two apps in a project, one built
using a Docker strategy, and one using s2i.. the s2i one is able
(at buildtime) to contact other services in the cluster using
foo.bar.svc.cluster.local but the Docker one cannot (it is trying
to use a local Maven mirror and failing to resolve the nexus
server hostname using the *.svc.cluster.local name). Using the
externally-exposed name works fine.


I'm actually surprised either of them works.  Both the assemble
script, and your dockerfile commands, are running inside a container 
that's been launched directly on your node host, meaning the 
/etc/resolv.conf that's available to the container is the host's 
/etc/resolv.conf and it doesn't have the openshift skydns ip 
injected.  The assemble container may have DNS access because it picks 
up the networking config from the actual build pod, which I guess may 
include DNS config, not sure.


the accepted workaround currently is to add the skydns ip to your 
host's /etc/resolv.conf.




Thanks!

-James

___
users mailing list
users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Ben Parees | OpenShift

James Falkner 
May 18, 2016 at 10:14 AM
Is there any reason (other than bugs :) ) that pods instantiated to 
execute builds of Docker-based apps wouldn't be able to resolve 
*.svc.cluster.local names? I have two apps in a project, one built 
using a Docker strategy, and one using s2i.. the s2i one is able (at 
buildtime) to contact other services in the cluster using 
foo.bar.svc.cluster.local but the Docker one cannot (it is trying to 
use a local Maven mirror and failing to resolve the nexus server 
hostname using the *.svc.cluster.local name). Using the 
externally-exposed name works fine.


Thanks!

-James


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Error updating deployment [deploy] status to Pending

2016-05-18 Thread Philippe Lafoucrière
I have this in the logs (with loglevel=4):
https://gist.github.com/gravis/7454a743cb988f6d192bf5a5c9890a82
So, nothing fancy :(

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: dns failures from build pods

2016-05-18 Thread Ben Parees
On Wed, May 18, 2016 at 10:14 AM, James Falkner  wrote:

> Is there any reason (other than bugs :) ) that pods instantiated to
> execute builds of Docker-based apps wouldn't be able to resolve
> *.svc.cluster.local names? I have two apps in a project, one built using a
> Docker strategy, and one using s2i.. the s2i one is able (at buildtime) to
> contact other services in the cluster using foo.bar.svc.cluster.local but
> the Docker one cannot (it is trying to use a local Maven mirror and failing
> to resolve the nexus server hostname using the *.svc.cluster.local name).
> Using the externally-exposed name works fine.
>

I'm actually surprised either of them works.  Both the assemble script,
and your dockerfile commands, are running inside a container that's been
launched directly on your node host, meaning the /etc/resolv.conf that's
available to the container is the host's /etc/resolv.conf and it doesn't
have the openshift skydns ip injected.  The assemble container may have DNS
access because it picks up the networking config from the actual build pod,
which I guess may include DNS config, not sure.

the accepted workaround currently is to add the skydns ip to your host's
/etc/resolv.conf.
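
As a sketch, on each node host (the address below is a placeholder; use the IP
your cluster's SkyDNS actually answers on, typically the master):

echo "nameserver 10.1.2.3" >> /etc/resolv.conf   # placeholder SkyDNS/master IP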



>
> Thanks!
>
> -James
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift Online v3

2016-05-18 Thread Jeremy .
Is there any timeframe for upgrading OpenShift Online to v3? I've been holding
off on a project because of the wait.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users