Re: Prune operations

2016-06-08 Thread Srinivas Naga Kotaru (skotaru)
Thanks Clayton for confirmation 

Srinivas Kotaru

Sent from my iPhone

> On Jun 8, 2016, at 9:56 PM, Clayton Coleman  wrote:
> 
> At the current time cron would be the recommended approach.
> 
> On Wed, Jun 8, 2016 at 11:56 PM, Srinivas Naga Kotaru (skotaru)
>  wrote:
>> Currently all prune operations are run manually with the oadm command. Is
>> there any way to automate and schedule them? Is our old friend cron the
>> best recommendation, or is there something else?
>> 
>> 
>> 
>> https://docs.openshift.com/enterprise/3.2/admin_guide/pruning_resources.html
>> 
>> 
>> 
>> 
>> 
>> Please advise.
>> 
>> 
>> 
>> --
>> 
>> Srinivas Kotaru
>> 
>> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
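
A minimal sketch of the cron approach recommended above, for anyone wanting
to automate this. The file path, schedule, retention flags, and kubeconfig
location below are illustrative assumptions, not recommendations from the
thread:

# /etc/cron.d/openshift-prune (illustrative sketch)
# Nightly pruning of old builds and deployments; adjust retention to taste.
# --confirm performs the deletion; without it oadm only reports what it would prune.
0 2 * * * root oadm prune builds --orphans --keep-complete=5 --keep-failed=1 --confirm --config=/etc/origin/master/admin.kubeconfig >> /var/log/openshift-prune.log 2>&1
10 2 * * * root oadm prune deployments --orphans --keep-complete=5 --keep-failed=1 --confirm --config=/etc/origin/master/admin.kubeconfig >> /var/log/openshift-prune.log 2>&1
# Image pruning additionally needs a token for a user with the
# system:image-pruner role (see the pruning docs linked above), so it may
# need its own setup:
# 20 2 * * * root oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm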


Prune operations

2016-06-08 Thread Srinivas Naga Kotaru (skotaru)
Currently all prune operations are run manually with the oadm command. Is there
any way to automate and schedule them? Is our old friend cron the best
recommendation, or is there something else?

https://docs.openshift.com/enterprise/3.2/admin_guide/pruning_resources.html


Please advise.

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: 503 - Maintenance page

2016-06-08 Thread Ram Ranganathan
So an alternative might be to use a temporary redirect on '/' - a 302 to
some site-under-maintenance page (which can return a 503 HTTP code with
whatever custom page content you want). And who knows, that might also make
an HTTP purist happier!! ;^)

On Wed, Jun 8, 2016 at 7:31 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

>
> On Tue, Jun 7, 2016 at 6:45 PM, Ram Ranganathan 
> wrote:
>
>> Is your server always returning a 503 - for example, for a GET/HEAD on /?
>> That could cause haproxy to mark it as down.
>>
>> You can also see the stats in haproxy to look at if the server has been
>> marked down:
>> cmd="echo 'show stat' | socat
>> unix-connect:/var/lib/haproxy/run/haproxy.sock stdio"
>> echo "$cmd"  | oc rsh #  replace with router pod
>> name.
>>
>
> Of course my server is returning a 503 for "/" :) (it's down for
> maintenance). HAProxy thinks no server is available, so it's not even
> trying to pass the request through. Makes sense.
> OK, so I guess I'll have to use a custom router then :(
>
> Thanks for your help.
> Philippe
>



-- 
Ram//
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
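
A quick way to sanity-check the redirect-to-maintenance pattern Ram describes
above; the hostnames are placeholders for the real route and maintenance-site
URLs:

# Illustrative check only; myapp.example.com and maintenance.example.com
# are made-up hostnames.
curl -sI http://myapp.example.com/ | head -n 1        # expect e.g.: HTTP/1.1 302 Found
curl -sI http://maintenance.example.com/ | head -n 1  # expect e.g.: HTTP/1.1 503 Service Unavailable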


Some s2i builds failing with "manifest unknown", also support for Origin for Docker versions

2016-06-08 Thread Clayton Coleman
Users may have noticed a brief window between yesterday and today
where builds of our S2I-based images began to fail with "manifest
unknown" if you were using Docker 1.9 and had pulled the latest
images.  We started building S2I images with Docker 1.10 and are now
in the process of reverting.

This is related to the change to the Docker Hub to begin accepting
schema2 images from Docker 1.10+ systems.  The consequence is that the
actual stored image is different, and you can no longer pull by digest
from a Docker 1.9 system.  We're going to continue building all Origin
s2i images using Docker 1.9.

An Origin 1.2 cluster or older can work with both Docker 1.9 and
Docker 1.10, although we recommend Docker 1.9.  If users wish to
upgrade to Docker 1.10, they should do that across their entire
cluster.

Origin 1.3 will drop support for Docker 1.9 and require Docker 1.10.
Once Origin 1.3 is released, we will begin building images with Docker
1.10, and those images will not be usable with older clusters.  If you'd
like to continue using Docker 1.9, you should build those images yourself:

oc new-build https://github.com/openshift/s2i-ruby.git
--context-dir=2.0 --to ruby:2.0

(which is also a great way to fork s2i and add your own customizations!)

Please respond with any questions.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
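
Once a build like the one above has populated a ruby:2.0 imagestream tag in
your project, it can be used as the builder image for application builds; a
minimal sketch, where the application repository URL is a placeholder:

# Build an app from the locally built ruby:2.0 imagestream tag
# (https://github.com/example/my-ruby-app.git is a placeholder repo).
oc new-app ruby:2.0~https://github.com/example/my-ruby-app.git
oc get builds   # watch the resulting application build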


Re: 503 - Maintenance page

2016-06-08 Thread Philippe Lafoucrière
On Tue, Jun 7, 2016 at 6:45 PM, Ram Ranganathan  wrote:

> Is your server always returning a 503 - for example, for a GET/HEAD on /?
> That could cause haproxy to mark it as down.
>
> You can also see the stats in haproxy to look at if the server has been
> marked down:
> cmd="echo 'show stat' | socat
> unix-connect:/var/lib/haproxy/run/haproxy.sock stdio"
> echo "$cmd"  | oc rsh #  replace with router pod
> name.
>

Of course my server is returning a 503 for "/" :) (it's down for
maintenance). HAProxy thinks no server is available, so it's not even
trying to pass the request through. Makes sense.
OK, so I guess I'll have to use a custom router then :(

Thanks for your help.
Philippe
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
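
For completeness, roughly what running the stats command quoted above against
a router pod might look like; router-1-abcde is a made-up pod name (use
oc get pods to find the real one):

# Show any haproxy backends/servers that the router has marked DOWN.
cmd="echo 'show stat' | socat unix-connect:/var/lib/haproxy/run/haproxy.sock stdio"
echo "$cmd" | oc rsh router-1-abcde | grep -i down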


Re: anyone seen this error from ansible install?

2016-06-08 Thread Alan Jones
See below.  I also have two issues that might be related:

1) This is a "reinstall": I had previously installed successfully with the
"simple" interactive installer, then removed and re-added the RPMs, and am
now running the ansible install.

2) Something on this cluster makes ansible run slowly: it takes about 18
minutes to fail, versus under 8 minutes for a successful run on a VM
cluster with an identical config.

Thanks for any insights you can give me!

Alan


[root@pocsj41 ~]# rpm -qa | grep ansible
openshift-ansible-docs-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-lookup-plugins-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-filter-plugins-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-playbooks-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-roles-3.0.94-1.git.0.67a822a.el7.noarch
ansible-1.9.4-1.el7aos.noarch

[root@pocsj41 ~]# rpm -qa | grep openshift
openshift-ansible-docs-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-lookup-plugins-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-node-3.2.0.44-1.git.0.a4463d9.el7.x86_64
openshift-ansible-filter-plugins-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-sdn-ovs-3.2.0.44-1.git.0.a4463d9.el7.x86_64
openshift-ansible-3.0.94-1.git.0.67a822a.el7.noarch
openshift-ansible-playbooks-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-clients-3.2.0.44-1.git.0.a4463d9.el7.x86_64
openshift-ansible-roles-3.0.94-1.git.0.67a822a.el7.noarch
atomic-openshift-3.2.0.44-1.git.0.a4463d9.el7.x86_64
tuned-profiles-atomic-openshift-node-3.2.0.44-1.git.0.a4463d9.el7.x86_64
atomic-openshift-master-3.2.0.44-1.git.0.a4463d9.el7.x86_64
atomic-openshift-utils-3.0.94-1.git.0.67a822a.el7.noarch

[root@pocsj41 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)


On Wed, Jun 8, 2016 at 5:29 AM, Brenton Leanhardt 
wrote:

> Can you provide the version of ansible you are using as well as the
> RPM or git checkout ref of the playbooks you're using?
>
> Thanks,
> Brenton
>
> On Tue, Jun 7, 2016 at 9:01 PM, Alan Jones  wrote:
> > Error followed by /etc/ansible/hosts below.
> > Alan
> > ---
> > TASK: [openshift_facts | Verify Ansible version is greater than or equal
> to
> > 1.9.4] ***
> > fatal: [pocsj41] => Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py",
> line
> > 586, in _executor
> > exec_rc = self._executor_internal(host, new_stdin)
> >   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py",
> line
> > 789, in _executor_internal
> > return self._executor_internal_inner(host, self.module_name,
> > self.module_args, inject, port, complex_args=complex_args)
> >   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py",
> line
> > 869, in _executor_internal_inner
> > if not utils.check_conditional(cond, self.basedir, inject,
> > fail_on_undefined=self.error_on_undefined_vars):
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/__init__.py", line
> > 269, in check_conditional
> > conditional = template.template(basedir, presented, inject)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> > 124, in template
> > varname = template_from_string(basedir, varname, templatevars,
> > fail_on_undefined)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> > 382, in template_from_string
> > res = jinja2.utils.concat(rf)
> >   File "", line 6, in root
> >   File "/usr/lib/python2.7/site-packages/jinja2/runtime.py", line 153, in
> > resolve
> > return self.parent[key]
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> > 205, in __getitem__
> > return template(self.basedir, var, self.vars,
> > fail_on_undefined=self.fail_on_undefined)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> > 124, in template
> > varname = template_from_string(basedir, varname, templatevars,
> > fail_on_undefined)
> >   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> > 382, in template_from_string
> > res = jinja2.utils.concat(rf)
> >   File "", line 10, in root
> >   File "/usr/share/ansible_plugins/filter_plugins/oo_filters.py", line
> 742,
> > in oo_persistent_volumes
> > if len(groups['oo_nfs_to_config']) > 0:
> > KeyError: 'oo_nfs_to_config'
> >
> >
> > FATAL: all hosts have already failed -- aborting
> >
> > --- /etc/ansible/hosts
> > [OSEv3:children]
> > masters
> > nodes
> > [OSEv3:vars]
> > ansible_ssh_user=root
> > deployment_type=openshift-enterprise
> > openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> > 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
> > 'filename': '/etc/origin/master/htpasswd'}]
> > [masters]
> > pocsj41
> > [nodes]
> > pocsj41 openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
> > openshift_hostname=pocsj41 

Re: anyone seen this error from ansible install?

2016-06-08 Thread Brenton Leanhardt
Can you provide the version of ansible you are using as well as the
RPM or git checkout ref of the playbooks you're using?

Thanks,
Brenton

On Tue, Jun 7, 2016 at 9:01 PM, Alan Jones  wrote:
> Error followed by /etc/ansible/hosts below.
> Alan
> ---
> TASK: [openshift_facts | Verify Ansible version is greater than or equal to
> 1.9.4] ***
> fatal: [pocsj41] => Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line
> 586, in _executor
> exec_rc = self._executor_internal(host, new_stdin)
>   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line
> 789, in _executor_internal
> return self._executor_internal_inner(host, self.module_name,
> self.module_args, inject, port, complex_args=complex_args)
>   File "/usr/lib/python2.7/site-packages/ansible/runner/__init__.py", line
> 869, in _executor_internal_inner
> if not utils.check_conditional(cond, self.basedir, inject,
> fail_on_undefined=self.error_on_undefined_vars):
>   File "/usr/lib/python2.7/site-packages/ansible/utils/__init__.py", line
> 269, in check_conditional
> conditional = template.template(basedir, presented, inject)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> 124, in template
> varname = template_from_string(basedir, varname, templatevars,
> fail_on_undefined)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> 382, in template_from_string
> res = jinja2.utils.concat(rf)
>   File "", line 6, in root
>   File "/usr/lib/python2.7/site-packages/jinja2/runtime.py", line 153, in
> resolve
> return self.parent[key]
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> 205, in __getitem__
> return template(self.basedir, var, self.vars,
> fail_on_undefined=self.fail_on_undefined)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> 124, in template
> varname = template_from_string(basedir, varname, templatevars,
> fail_on_undefined)
>   File "/usr/lib/python2.7/site-packages/ansible/utils/template.py", line
> 382, in template_from_string
> res = jinja2.utils.concat(rf)
>   File "", line 10, in root
>   File "/usr/share/ansible_plugins/filter_plugins/oo_filters.py", line 742,
> in oo_persistent_volumes
> if len(groups['oo_nfs_to_config']) > 0:
> KeyError: 'oo_nfs_to_config'
>
>
> FATAL: all hosts have already failed -- aborting
>
> --- /etc/ansible/hosts
> [OSEv3:children]
> masters
> nodes
> [OSEv3:vars]
> ansible_ssh_user=root
> deployment_type=openshift-enterprise
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
> 'filename': '/etc/origin/master/htpasswd'}]
> [masters]
> pocsj41
> [nodes]
> pocsj41 openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
> openshift_hostname=pocsj41 openshift_public_hostname=pocsj41
> openshift_ip=172.16.51.2 openshift_public_ip=172.16.51.2
> pocsj42 openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
> openshift_hostname=pocsj42 openshift_public_hostname=pocsj42
> openshift_ip=172.16.51.4 openshift_public_ip=172.16.51.4
> pocsj43 openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
> openshift_hostname=pocsj43 openshift_public_hostname=pocsj43
> openshift_ip=172.16.51.7 openshift_public_ip=172.16.51.7
>
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
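
The failure above comes from the oo_persistent_volumes filter expecting an
'oo_nfs_to_config' host group. One workaround that has been reported for this
class of error - an assumption here, not something confirmed in this thread -
is to declare an nfs child group in the inventory so the playbooks define
that group:

# /etc/ansible/hosts (illustrative excerpt; adding an nfs child group is an
# assumed workaround, not a fix confirmed in this thread)
[OSEv3:children]
masters
nodes
nfs

[nfs]
# left empty, or list the host that should export NFS-backed volumes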