Re: containers with host volumes from controllers

2016-05-17 Thread Alan Jones
I tried that:
oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:openshift-infra:replication-controller
... and I still get the error.
Is there any way to get the user name/group that fails authentication?
Alan
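One way to check whether the grant actually landed (a sketch, assuming the stock oc client against the same cluster):

```shell
# The service account should appear under "Users" if the grant succeeded.
oc describe scc hostmount-anyuid

# Full SCC definition, including the users list, as YAML.
oc get scc hostmount-anyuid -o yaml
```

If the service account is missing from that list, the add-scc-to-user call was likely applied to a different user string than the one the controller actually runs as.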

On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman wrote:

> anyuid doesn't grant hostPath, since that's a much more dangerous
> permission.  You want to grant hostmount-anyuid.
>
> On Tue, May 17, 2016 at 11:44 AM, Alan Jones  wrote:
> > I have several containers that we run on Kubernetes that require host
> > volume access.
> > For example, I have a container called "evdispatch-v1" that I'm trying to
> > launch in a replication controller and get the below error.
> > Following an example from "Enable Dockerhub Images that Require Root" in
> > (https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile)
> > I tried:
> > oadm policy add-scc-to-user anyuid
> > system:serviceaccount:openshift-infra:replication-controller
> > But still get the error.
> > Do you know what I need to do?
> > Who knows more about this stuff?
> > Alan
> > ---
> > WARNING  evdispatch-v1  49e7ac4e-1bae-11e6-88c0-080027767789
> > ReplicationController  replication-controller  FailedCreate
> > Error creating: pods "evdispatch-v1-" is forbidden: unable to validate
> > against any security context constraint:
> > [spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath":
> > hostPath volumes are not allowed to be used
> > spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath":
> > hostPath volumes are not allowed to be used]
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>


Error updating deployment [deploy] status to Pending

2016-05-17 Thread Philippe Lafoucrière
Hello,

I can't deploy anything on OpenShift Origin 1.1.3: the deploy (and/or build)
is stuck in Pending, with the message:

Error updating deployment [project]/[deploy] status to Pending

in the events. Nothing specific in the logs.
Using oc describe doesn't give any reason, and I see a different event:

Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type    Reason     Message
  6m         6m        1      {default-scheduler }                 Normal  Scheduled  Successfully assigned slots-site-8-deploy to node-1

The deploy starts if I restart origin-node on node-1. If I deploy again, it
gets stuck at the same place, and only an origin-node restart can unlock it.

Any hints to fix that?

Thanks
Philippe
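(Not an answer, but a sketch of the usual places to look; assumes a systemd-managed origin-node, and the unit name may differ on your install:)

```shell
# Project events often name the underlying failure.
oc get events

# Node-side logs on node-1 around the time the deployer pod was scheduled.
journalctl -u origin-node --since "10 min ago"

# The workaround described above: restart the node service.
systemctl restart origin-node
```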


Re: Openshift Origin Ansible Install (change where /var/lib/origin goes)?

2016-05-17 Thread Scott Dodson
Setting openshift_data_dir will alter this path and many others. You
should set it before installing your node; note this isn't something
we've tested thoroughly.

There's also support for emptyDir quotas per project; see annotation #4 at
https://docs.openshift.org/latest/install_config/master_node_configuration.html#node-configuration-files
This limits the amount of storage each project can allocate in emptyDir
volumes, in order to push your applications towards using proper
persistent volumes.
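As a sketch, an inventory fragment for the relocation (the target path is illustrative):

```ini
# openshift-ansible inventory: relocate OpenShift's data directory
# (default /var/lib/origin) onto a larger volume.
[OSEv3:vars]
openshift_data_dir=/data/origin
```

And the per-project emptyDir quota lives in node-config.yaml (the 512Mi value is just an example):

```yaml
volumeConfig:
  localQuota:
    # Cap emptyDir usage per FSGroup (i.e. per project) on this node.
    perFSGroup: 512Mi
```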

On Tue, May 17, 2016 at 1:09 AM, Dean Peterson  wrote:
> When I installed Openshift Origin I first installed Docker and ran
> "docker-storage-setup" to make docker use a dedicated lvm volume group I
> created (4TB).  However, Openshift Origin seems to be storing everything on
> my root partition (rhel-root) that only has 50GB!
>
>
> To find this out I discovered a handy command: du -hx --max-depth=1 /
>
> That command lists all folders one level deep and shows how much space each
> folder is using.  That way, I could follow the storage gobblers all the way
> down to /var/lib../origin/kubernetes.io~empty-dir.
>
>
> I don't dare try to merge volume groups or modify the root partition.  I
> guess it is time to install Openshift Origin on another machine.  Is there a
> way to control where /var/lib/origin/kubernetes.io~empty-dir is stored
> using the Ansible advanced installation option?
>
>



Re: Openshift Origin Ansible Install (change where /var/lib/origin goes)?

2016-05-17 Thread Dean Peterson
Thank you for the info.  You know, what you said about using proper
persistent volumes rang a bell.  I never set up permanent persistent
volume storage for the OpenShift private registry.  That is why my images
were ending up there.  Thanks!
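For reference, a sketch of pointing the registry at a persistent volume claim (assumes a PVC, here called registry-claim, already exists in the registry's project):

```shell
# Swap the registry's ephemeral storage for an existing PVC.
oc volume deploymentconfigs/docker-registry --add --overwrite \
  --name=registry-storage -t pvc --claim-name=registry-claim
```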
On May 17, 2016 1:31 PM, "Scott Dodson"  wrote:

> Setting openshift_data_dir will alter this path and many others. You
> should set it before installing your node; note this isn't something
> we've tested thoroughly.
>
> There's also support for emptyDir quotas per project; see annotation #4 at
> https://docs.openshift.org/latest/install_config/master_node_configuration.html#node-configuration-files
> This limits the amount of storage each project can allocate in emptyDir
> volumes, in order to push your applications towards using proper
> persistent volumes.
>
> On Tue, May 17, 2016 at 1:09 AM, Dean Peterson 
> wrote:
> > When I installed Openshift Origin I first installed Docker and ran
> > "docker-storage-setup" to make docker use a dedicated lvm volume group I
> > created (4TB).  However, Openshift Origin seems to be storing everything on
> > my root partition (rhel-root) that only has 50GB!
> >
> >
> > To find this out I discovered a handy command: du -hx --max-depth=1 /
> >
> > That command lists all folders one level deep and shows how much space each
> > folder is using.  That way, I could follow the storage gobblers all the way
> > down to /var/lib../origin/kubernetes.io~empty-dir.
> >
> >
> > I don't dare try to merge volume groups or modify the root partition.  I
> > guess it is time to install Openshift Origin on another machine.  Is there a
> > way to control where /var/lib/origin/kubernetes.io~empty-dir is stored
> > using the Ansible advanced installation option?
> >
> >


Re: containers with host volumes from controllers

2016-05-17 Thread Clayton Coleman
anyuid doesn't grant hostPath, since that's a much more dangerous
permission.  You want to grant hostmount-anyuid.

On Tue, May 17, 2016 at 11:44 AM, Alan Jones  wrote:
> I have several containers that we run on Kubernetes that require host
> volume access.
> For example, I have a container called "evdispatch-v1" that I'm trying to
> launch in a replication controller and get the below error.
> Following an example from "Enable Dockerhub Images that Require Root" in
> (https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile)
> I tried:
> oadm policy add-scc-to-user anyuid
> system:serviceaccount:openshift-infra:replication-controller
> But still get the error.
> Do you know what I need to do?
> Who knows more about this stuff?
> Alan
> ---
> WARNING  evdispatch-v1  49e7ac4e-1bae-11e6-88c0-080027767789
> ReplicationController  replication-controller  FailedCreate
> Error creating: pods "evdispatch-v1-" is forbidden: unable to validate
> against any security context constraint:
> [spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath":
> hostPath volumes are not allowed to be used
> spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath":
> hostPath volumes are not allowed to be used]
>



Re: Is there a command to see on which node a pod is running?

2016-05-17 Thread Stéphane Klein
Yes:

```
$ oc describe pod docker-registry-3-4bizw
Name:docker-registry-3-4bizw
Namespace:default
Node:atomic-test-node-2.priv.tech-angels.net/172.29.20.211
```
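A shorter alternative, since `-o wide` adds a NODE column to the listing (standard oc/kubectl behavior; the template flag syntax may vary by client version):

```shell
# All pods with their node assignments.
oc get pods -o wide

# Just the node name for a single pod, via Go template output.
oc get pod docker-registry-3-4bizw -o template --template='{{.spec.nodeName}}'
```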


2016-05-17 10:11 GMT+02:00 Stéphane Klein :

> Hi,
>
> is there a command to see on which node a pod is running?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein 
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Is there a command to see on which node a pod is running?

2016-05-17 Thread Stéphane Klein
Hi,

is there a command to see on which node a pod is running?

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane