Re: docker registry image for openshift 3.2?

2016-06-17 Thread Alan Jones
Restarting docker cleared it up.
I think there is something about my install of OS; perhaps it reinstalls
docker.
Thank you!
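
For anyone hitting the same ErrImagePull: a quick way to rule out the registry
configuration is to check how the daemon is told about the Red Hat registry and
to try the pull by hand. This is only a sketch, assuming the RHEL docker
packaging (where --add-registry is set via /etc/sysconfig/docker) and that the
image tag matches the installed atomic-openshift version (v3.2.0.44 here):

    # Is the daemon configured to search the Red Hat registry?
    grep ADD_REGISTRY /etc/sysconfig/docker
    # expected, roughly: ADD_REGISTRY='--add-registry registry.access.redhat.com'

    # Try pulling the deployer image explicitly, bypassing the search order
    docker pull registry.access.redhat.com/openshift3/ose-deployer:v3.2.0.44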

On Fri, Jun 17, 2016 at 11:19 AM, Clayton Coleman <ccole...@redhat.com>
wrote:

> Are you trying to pull that from Docker Hub, or do you have an
> --add-registry flag on your daemon?  Because that image only exists on
> the Red Hat registry.
>
> On Fri, Jun 17, 2016 at 2:12 PM, Alan Jones <ajo...@diamanti.com> wrote:
> > Error syncing pod, skipping: failed to "StartContainer" for "deployment"
> > with ErrImagePull: "Error: image openshift3/ose-deployer:v3.2.0.44 not
> > found"
> >
> >
>


docker registry image for openshift 3.2?

2016-06-17 Thread Alan Jones
Error syncing pod, skipping: failed to "StartContainer" for "deployment"
with ErrImagePull: "Error: image openshift3/ose-deployer:v3.2.0.44 not
found"


Re: anyone seen this error from ansible install?

2016-06-08 Thread Alan Jones
See below.  I also have two issues that might be related:

1) This is a "reinstall": I've previously installed successfully using the
"simple" interactive installer, removed and re-added the RPMs, and run the
Ansible install.

2) There is some issue on this cluster that makes Ansible run slowly: it takes
a total of 18 minutes to fail here vs. under 8 minutes for a successful run on
a VM cluster with an identical config (a quick timing check is sketched after
the package listing below).

Thanks for any insights you can give me!

Alan


[root@pocsj41 ~]# rpm -qa | grep ansible
openshift-ansible-docs-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-lookup-plugins-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-filter-plugins-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-playbooks-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-roles-3.0.94-1.git.0.67a822a.el7.noarch
ansible-1.9.4-1.el7aos.noarch

[root@pocsj41 ~]# rpm -qa | grep openshift
openshift-ansible-docs-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-lookup-plugins-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-node-3.2.0.44-1.git.0.a4463d9.el7.x86_64
openshift-ansible-filter-plugins-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-sdn-ovs-3.2.0.44-1.git.0.a4463d9.el7.x86_64
openshift-ansible-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-playbooks-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-clients-3.2.0.44-1.git.0.a4463d9.el7.x86_64
openshift-ansible-roles-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-3.2.0.44-1.git.0.a4463d9.el7.x86_64
tuned-profiles-atomic-openshift-node-3.2.0.44-1.git.0.a4463d9.el7.x86_64
atomic-openshift-master-3.2.0.44-1.git.0.a4463d9.el7.x86_64
atomic-openshift-utils-3.0.94-1.git.0.67a822a.el7.noarch

[root@pocsj41 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
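
As a quick check on issue 2 above (the slow run), one way to see whether the
delay comes from per-host connection and fact-gathering overhead rather than
from the playbooks themselves is to time a trivial module against the same
inventory. A minimal sketch, assuming the /etc/ansible/hosts inventory shown
further down:

    # Time a no-op module against every host; if this alone is slow,
    # suspect SSH/DNS/connection setup rather than the OpenShift plays.
    time ansible all -i /etc/ansible/hosts -m ping

    # Fact gathering is another common source of slowness:
    time ansible all -i /etc/ansible/hosts -m setup > /dev/null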


On Wed, Jun 8, 2016 at 5:29 AM, Brenton Leanhardt <blean...@redhat.com>
wrote:

> Can you provide the version of ansible you are using as well as the
> RPM or git checkout ref of the playbooks you're using?
>
> Thanks,
> Brenton
>
> On Tue, Jun 7, 2016 at 9:01 PM, Alan Jones <ajo...@diamanti.com> wrote:
> > Error followed by /etc/ansible/hosts below.
> > Alan
> > ---
> > TASK: [openshift_facts | Verify Ansible version is greater than or equal to 1.9.4] ***
> > fatal: [pocsj41] => Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 586, in _executor
> >     exec_rc = self._executor_internal(host, new_stdin)
> >   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 789, in _executor_internal
> >     return self._executor_internal_inner(host, self.module_name, self.module_args, inject, port, complex_args=complex_args)
> >   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 869, in _executor_internal_inner
> >     if not utils.check_conditional(cond, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars):
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/__init__.py", line 269, in check_conditional
> >     conditional = template.template(basedir, presented, inject)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 124, in template
> >     varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 382, in template_from_string
> >     res = jinja2.utils.concat(rf)
> >   File "", line 6, in root
> >   File "/usr/lib/python2.7/site-packages/jinja2/runtime.py", line 153, in resolve
> >     return self.parent[key]
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 205, in __getitem__
> >     return template(self.basedir, var, self.vars, fail_on_undefined=self.fail_on_undefined)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 124, in template
> >     varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 382, in template_from_string
> >     res = jinja2.utils.concat(rf)
> >   File "", line 10, in root
> >   File "/usr/share/ansible_plugins/filter_plugins/oo_filters.py", line 742, in oo_persistent_volumes
> >     if len(groups['oo_nfs_to_config']) > 0:
> > KeyError: 'oo_nfs_to_config'
> >
> >
> > FATAL: all hosts have already failed -- aborting

anyone seen this error from ansible install?

2016-06-07 Thread Alan Jones
Error followed by /etc/ansible/hosts below.
Alan
---
TASK: [openshift_facts | Verify Ansible version is greater than or equal to 1.9.4] ***
fatal: [pocsj41] => Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 586, in _executor
    exec_rc = self._executor_internal(host, new_stdin)
  File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 789, in _executor_internal
    return self._executor_internal_inner(host, self.module_name, self.module_args, inject, port, complex_args=complex_args)
  File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line 869, in _executor_internal_inner
    if not utils.check_conditional(cond, self.basedir, inject, fail_on_undefined=self.error_on_undefined_vars):
  File "/usr/lib/python2.7/site-packages/ansible/utils/__init__.py", line 269, in check_conditional
    conditional = template.template(basedir, presented, inject)
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 124, in template
    varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 382, in template_from_string
    res = jinja2.utils.concat(rf)
  File "", line 6, in root
  File "/usr/lib/python2.7/site-packages/jinja2/runtime.py", line 153, in resolve
    return self.parent[key]
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 205, in __getitem__
    return template(self.basedir, var, self.vars, fail_on_undefined=self.fail_on_undefined)
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 124, in template
    varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
  File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line 382, in template_from_string
    res = jinja2.utils.concat(rf)
  File "", line 10, in root
  File "/usr/share/ansible_plugins/filter_plugins/oo_filters.py", line 742, in oo_persistent_volumes
    if len(groups['oo_nfs_to_config']) > 0:
KeyError: 'oo_nfs_to_config'


FATAL: all hosts have already failed -- aborting

--- /etc/ansible/hosts
[OSEv3:children]
masters
nodes
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
[masters]
pocsj41
[nodes]
pocsj41 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_hostname=pocsj41 openshift_public_hostname=pocsj41 openshift_ip=172.16.51.2 openshift_public_ip=172.16.51.2
pocsj42 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_hostname=pocsj42 openshift_public_hostname=pocsj42 openshift_ip=172.16.51.4 openshift_public_ip=172.16.51.4
pocsj43 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" openshift_hostname=pocsj43 openshift_public_hostname=pocsj43 openshift_ip=172.16.51.7 openshift_public_ip=172.16.51.7
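
For what it's worth, the KeyError above comes from oo_filters.py indexing
groups['oo_nfs_to_config'] without first checking that the group exists. One
workaround that has been suggested for this vintage of openshift-ansible is to
declare an nfs group in the inventory so the group is always defined, even if
it stays empty. This is a sketch only, not verified against this exact 3.0.94
build:

    # In [OSEv3:children], declare an nfs group alongside masters and nodes:
    [OSEv3:children]
    masters
    nodes
    nfs

    # ...existing [OSEv3:vars], [masters] and [nodes] sections unchanged...

    # An empty section is enough to make the group (and oo_nfs_to_config) exist:
    [nfs]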


Re: containers with host volumes from controllers

2016-05-18 Thread Alan Jones
I have now reproduced the issue with OpenShift 3.2 on RHEL 7, as opposed to my
few-week-old Origin build on CentOS.
Unfortunately, my magic command isn't working.
Here is my procedure:
1) Create node certs with `oadm create-node-config`
2) Use these certs from said node to create a replication set for a
container that requires a host mount.
3) See event with 'hostPath volumes are not allowed to be used'
Note, this process works with standard Kubernetes, so navigating the
OpenShift authentication and permissions is what I'm trying to accomplish.
Also note that there is no *project* specified in this procedure; the node
being certified belongs to system:node. Should I use that?
I feel like I'm flying blind because there is no feedback:
1) The command to add privileges doesn't verify that the project or user
exists.
2) The failure doesn't tell me which project/user was attempting to do the
unpermitted task.
Alan
[root@alan-lnx ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[root@alan-lnx ~]# openshift version
openshift v3.2.0.20
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5
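
Pulling together the advice in the replies below: the grant has to name the
service account the pod runs as (not the replication controller's), fully
qualified with the project the RC actually lands in. A rough sketch, assuming
the RC ends up in the default project and the pod spec does not set a
serviceAccountName (so it runs as that project's "default" service account):

    # Find out which project/namespace the replication controller landed in
    oc get rc --all-namespaces

    # Grant the hostmount-anyuid SCC to that project's default service account
    oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:default

    # Confirm the grant took effect
    oc describe scc hostmount-anyuid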


On Wed, May 18, 2016 at 3:08 PM, Alan Jones <ajo...@diamanti.com> wrote:

> I think I'm making progress:
> oadm policy add-scc-to-user hostmount-anyuid
> system:serviceaccount:openshift-infra:default
> Now when I submit the replica set I get a different mount error that I
> think I understand.
> Note, the context I'm submitting the request in is using the node host
> certs under /openshift.local/config/ to the API server.
> There is no specified project.
> Thank you!
> Alan
>
> On Wed, May 18, 2016 at 2:48 PM, Clayton Coleman <ccole...@redhat.com>
> wrote:
>
>>
>>
>> On May 18, 2016, at 5:26 PM, Alan Jones <ajo...@diamanti.com> wrote:
>>
>> > oadm policy ... -z default
>> In the version of openshift origin I'm using the oadm command doesn't
>> take '-z'.
>> Can you fill in the dot, dot, dot for me?
>> I'm trying to grant permission for host volume access for a pod created
>> by the replication controller which was submitted with node credentials to
>> the API server.
>> Here is my latest failed attempt to try to follow your advice:
>> oadm policy add-scc-to-group hostmount-anyuid
>> system:serviceaccount:default
>> Again, this would be much easier if I could get logs for what group and
>> user it is evaluating when it fails.
>> Alan
>>
>>
>> system:serviceaccount:NAMESPACE:default
>>
>> Since policy is global, you have to identify which namespace/project
>> contains the "default" service account (service accounts are scoped to a
>> project).
>>
>>
>> On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman <ccole...@redhat.com>
>> wrote:
>>
>>> You need to grant the permission to a service account for the pod (which
>>> is "default" if you don't fill in the field).  The replication controller's
>>> SA is not checked.
>>>
>>> oadm policy ... -z default
>>>
>>> On May 17, 2016, at 8:39 PM, Alan Jones <ajo...@diamanti.com> wrote:
>>>
>>> I tried that:
>>> oadm policy add-acc-to-user hostmount-anyuid system:serviceaccount:
>>> openshift-infra:replication-controller
>>> ... and I still get the error.
>>> Is there any way to get the user name/group that fails authentication?
>>> Alan
>>>
>>> On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman <ccole...@redhat.com>
>>> wrote:
>>>
>>>> anyuid doesn't grant hostPath, since that's a much more dangerous
>>>> permission.  You want to grant hostmount-anyuid
>>>>
>>>> On Tue, May 17, 2016 at 11:44 AM, Alan Jones <ajo...@diamanti.com>
>>>> wrote:
>>>> > I have several containers that we run using K8 that require host
>>>> volume
>>>> > access.
>>>> > For example, I have a container called "evdispatch-v1" that I'm
>>>> trying to
>>>> > launch in a replication controller and get the below error.
>>>> > Following an example from "Enable Dockerhub Images that Require Root"
>>>> in
>>>> > (
>>>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
>>>> )
>>>> > I tried:
>>>> > oadm policy add-scc-to-user anyuid
>>>> > system:serviceaccount:openshift-infra:replication-controller
>>>> > But still get the error.
>>>> > Do you know what I need to do?
>>>> > Who knows more about this stuff?
>

Re: containers with host volumes from controllers

2016-05-18 Thread Alan Jones
I think I'm making progress:
oadm policy add-scc-to-user hostmount-anyuid
system:serviceaccount:openshift-infra:default
Now when I submit the replica set I get a different mount error that I
think I understand.
Note, the context I'm submitting the request in uses the node host certs
under /openshift.local/config/ to talk to the API server.
There is no specified project.
Thank you!
Alan
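
Since the request is made with the node certs and no project, one way to stop
flying blind is to ask the API server who those credentials actually
authenticate as. The kubeconfig path below is a guess based on the
/openshift.local/config/ location mentioned above and may differ in your
layout:

    # Which user do the node credentials authenticate as?
    # (node certs normally come back as system:node:<hostname>)
    oc whoami --config=/openshift.local/config/node-<hostname>/node.kubeconfig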

On Wed, May 18, 2016 at 2:48 PM, Clayton Coleman <ccole...@redhat.com>
wrote:

>
>
> On May 18, 2016, at 5:26 PM, Alan Jones <ajo...@diamanti.com> wrote:
>
> > oadm policy ... -z default
> In the version of openshift origin I'm using the oadm command doesn't take
> '-z'.
> Can you fill in the dot, dot, dot for me?
> I'm trying to grant permission for host volume access for a pod created by
> the replication controller which was submitted with node credentials to the
> API server.
> Here is my latest failed attempt to try to follow your advice:
> oadm policy add-scc-to-group hostmount-anyuid system:serviceaccount:default
> Again, this would be much easier if I could get logs for what group and
> user it is evaluating when it fails.
> Alan
>
>
> system:serviceaccount:NAMESPACE:default
>
> Since policy is global, you have to identify which namespace/project
> contains the "default" service account (service accounts are scoped to a
> project).
>
>
> On Tue, May 17, 2016 at 5:46 PM, Clayton Coleman <ccole...@redhat.com>
> wrote:
>
>> You need to grant the permission to a service account for the pod (which
>> is "default" if you don't fill in the field).  The replication controller's
>> SA is not checked.
>>
>> oadm policy ... -z default
>>
>> On May 17, 2016, at 8:39 PM, Alan Jones <ajo...@diamanti.com> wrote:
>>
>> I tried that:
>> oadm policy add-acc-to-user hostmount-anyuid system:serviceaccount:
>> openshift-infra:replication-controller
>> ... and I still get the error.
>> Is there any way to get the user name/group that fails authentication?
>> Alan
>>
>> On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman <ccole...@redhat.com>
>> wrote:
>>
>>> anyuid doesn't grant hostPath, since that's a much more dangerous
>>> permission.  You want to grant hostmount-anyuid
>>>
>>> On Tue, May 17, 2016 at 11:44 AM, Alan Jones <ajo...@diamanti.com>
>>> wrote:
>>> > I have several containers that we run using K8 that require host volume
>>> > access.
>>> > For example, I have a container called "evdispatch-v1" that I'm trying
>>> to
>>> > launch in a replication controller and get the below error.
>>> > Following an example from "Enable Dockerhub Images that Require Root"
>>> in
>>> > (
>>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
>>> )
>>> > I tried:
>>> > oadm policy add-scc-to-user anyuid
>>> > system:serviceaccount:openshift-infra:replication-controller
>>> > But still get the error.
>>> > Do you know what I need to do?
>>> > Who knows more about this stuff?
>>> > Alan
>>> > ---
>>> > WARNING   evdispatch-v1   49e7ac4e-1bae-11e6-88c0-080027767789   ReplicationController   replication-controller   FailedCreate
>>> > Error creating: pods "evdispatch-v1-" is forbidden: unable to validate against any security context constraint:
>>> > [spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
>>> > spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used]
>>> >
>>> >
>>>
>>
>>
>


Re: containers with host volumes from controllers

2016-05-17 Thread Alan Jones
I tried that:
oadm policy add-acc-to-user hostmount-anyuid system:serviceaccount:
openshift-infra:replication-controller
... and I still get the error.
Is there any way to get the user name/group that fails authentication?
Alan
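
One way to see whether an add-scc-to-user edit actually stuck, and who the SCC
currently applies to, is to inspect the SCC object itself. It is also worth
double-checking the command above: it says add-acc-to-user, whereas the
subcommand oadm accepts is add-scc-to-user.

    # List the users and groups currently bound to the SCC
    oc describe scc hostmount-anyuid

    # Or dump the raw users/groups lists
    oc get scc hostmount-anyuid -o yaml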

On Tue, May 17, 2016 at 9:33 AM, Clayton Coleman <ccole...@redhat.com>
wrote:

> anyuid doesn't grant hostPath, since that's a much more dangerous
> permission.  You want to grant hostmount-anyuid
>
> On Tue, May 17, 2016 at 11:44 AM, Alan Jones <ajo...@diamanti.com> wrote:
> > I have several containers that we run using K8 that require host volume
> > access.
> > For example, I have a container called "evdispatch-v1" that I'm trying to
> > launch in a replication controller and get the below error.
> > Following an example from "Enable Dockerhub Images that Require Root" in
> > (
> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
> )
> > I tried:
> > oadm policy add-scc-to-user anyuid
> > system:serviceaccount:openshift-infra:replication-controller
> > But still get the error.
> > Do you know what I need to do?
> > Who knows more about this stuff?
> > Alan
> > ---
> > WARNING   evdispatch-v1   49e7ac4e-1bae-11e6-88c0-080027767789   ReplicationController   replication-controller   FailedCreate
> > Error creating: pods "evdispatch-v1-" is forbidden: unable to validate against any security context constraint:
> > [spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
> > spec.containers[0].securityContext.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used]
> >
> >
>