Re: Failure when adding node - Approve node certificates when bootstrapping

2019-06-26 Thread Dan Pungă

Hi,

I've recently run the scaleup procedure on a 3.11 OKD cluster with the 
same result (failure) from the Ansible run.
However, when checking the node status and extra info, I found that 
the node was successfully added to the cluster and is in "Ready" state.


oc get nodes -o wide -> gives the status of the nodes, their roles, 
internal IPs etc.
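
To get more details about a specific node (conditions, allocated 
resources and so on), something like this also helps:

oc describe node <node-name>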


I had a similar CSR problem when initially installing the cluster and 
posted a question here some weeks ago. My problem was DNS related, 
but, while searching for a solution, I found that subsequent runs of the 
node playbook would generate OKD CSRs that would not be approved and 
would stay in Pending state.

You can see if there are any and what state they're in with:

oc get csr
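
If any show up as Pending, they can also be approved by hand; roughly 
(approving everything in one go should be done with care):

oc adm certificate approve <csr-name>
# or, for all currently listed CSRs at once:
oc get csr -o name | xargs oc adm certificate approve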

What I did was enable automatic certificate approval on the master by 
using the variable


openshift_master_bootstrap_auto_approve=true

This is documented as being used in the cluster auto-scaling procedure 
for AWS-deployed clusters. I honestly don't know if this 
change has side effects apart from eliminating the 
duplicate/invalid CSRs being created in subsequent runs of the same 
playbook. And, again, this was tried while trying to solve the initial 
problem and left like that for the following operations with the 
inventory file.
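
For reference, in my case the variable simply went into the cluster-wide 
section of the inventory, along these lines:

[OSEv3:vars]
...
openshift_master_bootstrap_auto_approve=true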


Going back to the scale-up problem, after checking the node state I also 
verified that the node gets Pods allocated (either by running 
repeated deployments of a test app, or by adding a label to the new node 
and specifying it as a selector inside the test DeploymentConfig).
In my case, again, the node addition seems to have been successful, 
despite the Ansible install error.
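
Roughly what I mean by the label/selector test (the label name is made 
up; the node name is the one from the inventory below):

oc label node os-node2.MYDOMAIN testnode=scaleup

and, in the test DeploymentConfig's pod template:

spec:
  template:
    spec:
      nodeSelector:
        testnode: scaleup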


Hope this is of some help,
Dan


On 25.06.2019 21:00, Robert Dahlem wrote:

Hi,

I tried adding a node by adding to /etc/ansible/hosts:
===
[OSEv3:children]
new_nodes

[new_nodes:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage

[new_nodes]
os-node2.MYDOMAIN openshift_node_group_name='node-config-compute'
===

and running:
# ansible-playbook
/usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml

Unfortunately this (repeatedly) ends in:

===
TASK [Approve node certificates when bootstrapping]
***
FAILED - RETRYING: Approve node certificates when bootstrapping (30
retries left).
...FAILED - RETRYING: Approve node certificates when bootstrapping (1
retries left).
...
 to retry, use: --limit
@/usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.retry
===

# uname -a
Linux os-master.MYDOMAIN 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18
16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

# oc version
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://os-master.openshift.rdahlem.de:8443
openshift v3.11.0+7f5d53b-195
kubernetes v1.11.0+d4cacc0


# ansible --version
ansible 2.6.14
   config file = /etc/ansible/ansible.cfg
   configured module search path = [u'/root/.ansible/plugins/modules',
u'/usr/share/ansible/plugins/modules']
   ansible python module location = /usr/lib/python2.7/site-packages/ansible
   executable location = /bin/ansible
   python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5
20150623 (Red Hat 4.8.5-36)]

What additional information would be needed?

Kind regards,
Robert



OKD 3.10 and Jenkins update

2019-01-10 Thread Dan Pungă

Hello all!

OKD 3.10 ships with Jenkins 2.107.3 (I guess the LTS version). Looking at 
the Dashboard, there are multiple security vulnerabilities reported for 
both the main version and for the plugins that are shipped with it.
I tried a simple upgrade of all listed plugins, but the result was that 
some failed to work.
I've also tried the docker.io/openshift/jenkins-2-centos7:v3.11 image, so 
the one for OKD 3.11, and I managed to update all plugins with no conflict 
or problem. The pipelines seemed to work fine until I discovered some 
inconsistencies in my tests.


I have no idea where the problem lies, but I would like to know if there 
is any reference matrix of versions for Jenkins and 
OpenShift-related plugins that both work well and are secure for OKD 3.10.


Thank you,

Dan



Re: ServiceAccount token for a build Pod (Dynamic resource creation )

2018-11-30 Thread Dan Pungă
I thought about it a bit and I guess I can create a fixed-name secret that 
holds the access token for the SA and use it as a usual build-time/mounted 
secret. Now my process works.
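
A rough sketch of what I mean, with names of my own choosing ("manager" 
being the SA referenced in the BuildConfig further down):

# store the SA token under a fixed, predictable secret name
oc create secret generic manager-token-fixed \
  --from-literal=token="$(oc sa get-token manager)"

and then reference it as a build input secret in the BuildConfig:

source:
  secrets:
  - secret:
      name: manager-token-fixed
    destinationDir: sa-token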


But to go back to my initial confusion, what is the use case of 
buildconfig.spec.serviceAccount? The API documentation states "(string) 
serviceAccount is the name of the ServiceAccount to use to run the pod 
created by this build. The pod will be allowed to use secrets referenced 
by the ServiceAccount"
If the build process is directly run by the Docker engine, then which 
Pod does the above doc refer to?


Thanks again for the info and help,

Dan Pungă


On 30.11.2018 02:49, Ben Parees wrote:



On Thu, Nov 29, 2018 at 6:53 PM Dan Pungă <dan.pu...@gmail.com> wrote:


Thanks for the reply!

My response is inline as well.

On 30.11.2018 00:51, Ben Parees wrote:



On Thu, Nov 29, 2018 at 5:34 PM Dan Pungă <dan.pu...@gmail.com> wrote:

Hello all,

The short version/question would be: How can I use a custom
ServiceAccount with a BuildConfig?


you can choose the SA used by the build via:
buildconfig.spec.serviceAccount

But I don't think this will help you.


It appears the build Pod doesn't have the serviceAccount's
token mounted at the location:

cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file
or directory


how are you running the cat command?

In general users cannot get into/manipulate the build pod.  If
you're executing that from within your build logic, then it's
going to run inside your build container (i.e. where your
application is constructed) which does not have the builder
service account available, it's not the same as the build pod
itself which would have the service account token mounted.

It sounds like you might want to use build secrets to make a
credential available to your build logic:

https://docs.okd.io/latest/dev_guide/builds/build_inputs.html#using-secrets-during-build



I'm running the command as a postCommit hook/script. So, if I
understand it right, it should be a temporary pod that runs the
image that was just built.


it's not run as the pod, that is the source of your confusion.  It is 
directly run by the container runtime engine, it is not managed by 
kubernetes/openshift, thus it does not have any "pod" content injected.


The actual BuildConfig holds:

spec:
  
  postCommit:
    command:
      - /bin/bash
      - '-c'
      - $HOME/scripts/checkAndCreateConf.sh
  serviceAccount: manager

I was expecting the same behaviour as with a container defined in
a DeploymentConfig/Job/CronJob where the serviceAccount's token is
mounted in /var/run/secrets/kubernetes.io/serviceaccount/token

So I don't use it during the actual build process and I can't
configure it as a build input because I can't reference the secret
by name in a consistent way. OKD creates the secrets for SAs with
5 random characters appended: manager-token-x


OK, if you can't define a consistently named secret yourself that the 
build can reference, I'm afraid I don't have another option for you 
that just uses the buildconfig.


You might be better served by using a Jenkins pipeline that executes 
the actions you want.






Thank you!

Longer version:

I'm trying to create Openshift resources from within a Pod.
The starting point is the app - that needs to be deployed -
which holds
an "unknown" number of configurations/customers that need to
run on
their own containers. So for each of them I need a set of
resources
created inside an Openshift/OKD project; mainly a
deploymentConfig and a
service that exposes the runtime ports.

I can build the application for all the customers and the
build is also
triggered by a repository hook. So each time a build is done,
it is
certain that the image pushed to the stream holds app-builds
for all
those customers.

What I've done so far is to make use of a custom
ServiceAccount with a
custom project role given to it and a Template that defines the
DeploymentConfig, Service, etc in parameterized form. The
idea being
that I would run a pod, using the ServiceAccount, on an image
that holds
the built application, authenticate via token to the OKD API
and, based
on some logic, it would discover the customers that don't
have the
needed resources and create those from the template with
specific
parameter values.

I've

Re: ServiceAccount token for a build Pod (Dynamic resource creation )

2018-11-29 Thread Dan Pungă

Thanks for the reply!

My response is inline as well.

On 30.11.2018 00:51, Ben Parees wrote:



On Thu, Nov 29, 2018 at 5:34 PM Dan Pungă <dan.pu...@gmail.com> wrote:


Hello all,

The short version/question would be: How can I use a custom
ServiceAccount with a BuildConfig?


you can choose the SA used by the build via: 
buildconfig.spec.serviceAccount


But I don't think this will help you.


It appears the build Pod doesn't have the serviceAccount's token
mounted at the location:

cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file
or directory


how are you running the cat command?

In general users cannot get into/manipulate the build pod.  If you're 
executing that from within your build logic, then it's going to run 
inside your build container (i.e. where your application is constructed) 
which does not have the builder service account available, it's not 
the same as the build pod itself which would have the service account 
token mounted.


It sounds like you might want to use build secrets to make a 
credential available to your build logic:

https://docs.okd.io/latest/dev_guide/builds/build_inputs.html#using-secrets-during-build


I'm running the command as a postCommit hook/script. So, if I understand 
it right, it should be a temporary pod that runs the image that was just 
built.


The actual BuildConfig holds:

spec:
  
  postCommit:
    command:
      - /bin/bash
      - '-c'
      - $HOME/scripts/checkAndCreateConf.sh
  serviceAccount: manager

I was expecting the same behaviour as with a container defined in a 
DeploymentConfig/Job/CronJob where the serviceAccount's token is mounted 
in /var/run/secrets/kubernetes.io/serviceaccount/token 


So I don't use it during the actual build process and I can't configure 
it as a build input because I can't reference the secret by name in a 
consistent way. OKD creates the secrets for SAs with 5 random 
characters appended: manager-token-x






Thank you!

Longer version:

I'm trying to create Openshift resources from within a Pod.
The starting point is the app - that needs to be deployed - which
holds
an "unknown" number of configurations/customers that need to run on
their own containers. So for each of them I need a set of resources
created inside an Openshift/OKD project; mainly a deploymentConfig
and a
service that exposes the runtime ports.

I can build the application for all the customers and the build is
also
triggered by a repository hook. So each time a build is done, it is
certain that the image pushed to the stream holds app-builds for all
those customers.

What I've done so far is to make use of a custom ServiceAccount
with a
custom project role given to it and a Template that defines the
DeploymentConfig, Service, etc in parameterized form. The idea being
that I would run a pod, using the ServiceAccount, on an image that
holds
the built application, authenticate via token to the OKD API and,
based
on some logic, it would discover the customers that don't have the
needed resources and create those from the template with specific
parameter values.

I've tried using a Job, only to realize that it has "run once"
behaviour. So I cannot use the triggering mechanism.

I've also tried using a CronJob, and I'll probably use it if
there's no
other way to achieve the goal. I'd rather have this work by way of
notification and not by "polling".

I've tried using the postCommit hook to call my scripted logic after
the build is done, but I get the error about the missing token. I
also
think I'll need to extend the custom role of the service account
so it
also has the rights of the builder SA.




--
Ben Parees | OpenShift



Re: OKD 3.9 to 3.10 upgrade failure on CentOS

2018-11-29 Thread Dan Pungă

Hi Dharmit,

What you're experiencing looks a lot like a problem I had with the 
upgrade. I ended up doing a fresh install.


I've tried fiddling around with the Ansible config and, as I was trying 
to get my head around what was happening, I discovered an issue about node 
names, with this reply from Michael Gugino shedding some light on the 
matter: 
https://github.com/openshift/openshift-ansible/issues/9935#issuecomment-423268110


Basically my problem was that the upgrade playbook of OKD 3.10 expected 
the node names from the previously installed version to be the short 
names and not the FQDNs.


I guess I was precisely in your position and I really didn't know what 
else to try except doing a fresh install. I have no idea if there is a 
way of changing node names of a running cluster. Maybe someone who knows 
more about the internals could be of help in this respect...


Since I see your installation is also a fresh one, maybe it would be worth 
uninstalling 3.9 and installing 3.10, or maybe trying the 
newest 3.11.


Hope it helps,

Dan

On 20.11.2018 04:38, Dharmit Shah wrote:

Hi,

I'm trying to upgrade my OKD 3.9 cluster to 3.10 using
openshift-ansible. I have already described the problem in detail and
provided logs on the GitHub issue [1].

I could really use some help on this issue!

Regards,
Dharmit

[1] https://github.com/openshift/openshift-ansible/issues/10690





ServiceAccount token for a build Pod (Dynamic resource creation )

2018-11-29 Thread Dan Pungă

Hello all,

The short version/question would be: How can I use a custom 
ServiceAccount with a BuildConfig?


It appears the build Pod doesn't have the serviceAccount's token mounted 
at the location:


cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file 
or directory


Thank you!

Longer version:

I'm trying to create Openshift resources from within a Pod.
The starting point is the app - that needs to be deployed - which holds 
an "unknown" number of configurations/customers that need to run on 
their own containers. So for each of them I need a set of resources 
created inside an Openshift/OKD project; mainly a deploymentConfig and a 
service that exposes the runtime ports.


I can build the application for all the customers and the build is also 
triggered by a repository hook. So each time a build is done, it is 
certain that the image pushed to the stream holds app-builds for all 
those customers.


What I've done so far is to make use of a custom ServiceAccount with a 
custom project role given to it and a Template that defines the 
DeploymentConfig, Service, etc in parameterized form. The idea being 
that I would run a pod, using the ServiceAccount, on an image that holds 
the built application, authenticate via token to the OKD API and, based 
on some logic, it would discover the customers that don't have the 
needed resources and create those from the template with specific 
parameter values.


I've tried using a Job, only to realize that it has "run once" 
behaviour. So I cannot use the triggering mechanism.


I've also tried using a CronJob, and I'll probably use it if there's no 
other way to achieve the goal. I'd rather have this work by way of 
notification and not by "polling".


I've tried using the postCommit hook to call my scripted logic after 
the build is done, but I get the error about the missing token. I also 
think I'll need to extend the custom role of the service account so it 
also has the rights of the builder SA.




Re: Node names as IPs not hostnames

2018-10-09 Thread Dan Pungă

Hi Rich!

What I understand from the description of your GitHub issue is that 
you're trying to have the node names set to their IP addresses when 
specifically integrating with OpenStack.


My problem is that I don't want to integrate with OpenStack, but the 
openshift_facts.py script from the 3.9 release would still discover it 
as a provider and disregard the host-level/VM configuration when it 
comes to node naming. So I think my problem is different than your 
github issue.


As Scott Dodson pointed out, this is addressed in the release-3.10 
version of the script, where it takes into account the provider 
configuration only if this is so marked in the inventory file. Haven't 
tested it, but I guess the fix has to do with the check around here: 
https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/openshift_facts/library/openshift_facts.py#L1033-L1036
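
If I read the inventory handling right, "so marked" means explicitly 
declaring the cloud provider, something along the lines of:

[OSEv3:vars]
openshift_cloudprovider_kind=openstack

otherwise the release-3.10 script should now skip the IaaS-specific 
naming.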




On 09.10.2018 20:45, Rich Megginson wrote:
Are you hitting 
https://github.com/openshift/openshift-ansible/pull/9598 ?


On 10/9/18 11:25 AM, Dan Pungă wrote:

Thanks for the reply Scott!

I've used the release branches for both 3.9 and 3.10 of the 
openshift-ansible project, yes.
I've initially checked the openshift_facts.py script flow in the 3.9 
branch; now looking at the 3.10 version, I do see the change that 
you're pointing to.


On 09.10.2018 05:40, Scott Dodson wrote:
Dan, are you using the latest from release-3.10 branch? I believe 
we've disabled the IaaS interrogation when you've not configured a 
cloud provider via openshift-ansible in the latest on that branch.


On Mon, Oct 8, 2018, 7:38 PM Dan Pungă <dan.pu...@gmail.com> wrote:


    I've done a bit of digging and apparently my problem is 
precisely connected to the fact that I'm running the cluster on the 
OpenStack provider.


    Briefly put, the openshift_facts playbook relies on the 
openshift-ansible/roles/openshift_facts/library/openshift_facts.py 
script. This script uses the ansible.module_utils tools to
    discover the underlying system, including any existing IaaS 
provider with its details. In my case it discovers the OpenStack 
provider and when setting the hostnames, the provider
    configuration takes precedence over whatever I've configured at 
the VM level.


    In my case, I haven't properly set up the FQDNs/hostnames at the 
OpenStack level. Instead, after I've created and launched the 
instances, I disabled at the VM level the ability of the
    cloud provider to reset my hostname definition/configuration and 
I thought this would be enough.


    I guess I'll try a reinstall on a lab environment with the 
openshift_facts.py script modified so that it passes over the 
Openstack check and hope it does what I'd expect, which is to be

    agnostic to the type of hosts on which I install.
    I actually thought that the only way the OpenShift/OKD installer 
would try to integrate with a provider was if I'd specifically set 
the openshift_cloudprovider_kind variable in the

    inventory file along with the rest of the specific variables.

    Regards,
    Dan Pungă

    On 08.10.2018 18:44, Dan Pungă wrote:

    Hello all,

    I'm trying to upgrade a working cluster from Openshift Origin 
3.9 to OKD 3.10 and the control plane update fails at one point 
with host not found.
I've looked a bit over the problem and found this issue on 
github: https://github.com/openshift/openshift-ansible/issues/9935 
where michaelgugino points out that "when upgrading from
    3.9, your hostnames match the node names in 'oc get nodes' 
otherwise, we won't be able to find the CSRs for your nodes."


    In fact my issue is precisely this: the node names are in fact 
their IPs and not the hostnames of the specific machines. It was 
something that I saw upon installation, but as the 3.9

    cluster was functioning all right, I let it be.
    The idea is that I (think) I have the DNS resolution set up 
properly, with all machines being able to resolve each-other by 
FQDNs, however the 3.9 installer configured the node names
    with their respective IP addresses and I don't know how to 
address this.
    I should mention that the cluster is deployed inside an 
Openstack project, but the install config doesn't use 
OpenShift-Openstack configuration. However when running the
    ~/openshift-ansible/playbooks/byo/openshift_facts.yml I get 
references to the underlying OpenStack (somehow the installer 
"figures out" the underlying OpenStack and treats it as a
    provider, the way I see it). I've pasted the output for one of 
the nodes below.


    Has any of you come across this node name config problem and 
were you able to solve it?
    Is there any procedure to change node names of a working 
cluster? I should say that the masters are also 
nodes (infrastructure), so I'm guessing the procedure, if there is 
one, would
    have to do with deprecating one master at a time, while for the 
nodes it would be a delete/change config/re-add procedure.

Re: Node names as IPs not hostnames

2018-10-09 Thread Dan Pungă

Thanks for the reply Scott!

I've used the release branches for both 3.9 and 3.10 of the 
openshift-ansible project, yes.
I've initially checked the openshift_facts.py script flow in the 3.9 
branch; now looking at the 3.10 version, I do see the change that you're 
pointing to.


On 09.10.2018 05:40, Scott Dodson wrote:
Dan, are you using the latest from release-3.10 branch? I believe 
we've disabled the IaaS interrogation when you've not configured a 
cloud provider via openshift-ansible in the latest on that branch.


On Mon, Oct 8, 2018, 7:38 PM Dan Pungă <dan.pu...@gmail.com> wrote:


I've done a bit of digging and apparently my problem is precisely
connected to the fact that I'm running the cluster on the
OpenStack provider.

Briefly put, the openshift_facts playbook relies on the
openshift-ansible/roles/openshift_facts/library/openshift_facts.py
script. This script uses the ansible.module_utils tools to
discover the underlying system, including any existing IaaS
provider with its details. In my case it discovers the OpenStack
provider and when setting the hostnames, the provider
configuration takes precedence over whatever I've configured at
the VM level.

In my case, I haven't properly set up the FQDNs/hostnames at the
OpenStack level. Instead, after I've created and launched the
instances, I disabled at the VM level the ability of the cloud
provider to reset my hostname definition/configuration and I
thought this would be enough.

I guess I'll try a reinstall on a lab environment with the
openshift_facts.py script modified so that it passes over the
Openstack check and hope it does what I'd expect, which is to be
agnostic to the type of hosts on which I install.
I actually thought that the only way the OpenShift/OKD installer
would try to integrate with a provider was if I'd specifically set
the openshift_cloudprovider_kind variable in the inventory file
along with the rest of the specific variables.

Regards,
Dan Pungă

On 08.10.2018 18:44, Dan Pungă wrote:

Hello all,

I'm trying to upgrade a working cluster from Openshift Origin 3.9
to OKD 3.10 and the control plane update fails at one point with
host not found.
I've looked a bit over the problem and found this issue on github:
https://github.com/openshift/openshift-ansible/issues/9935 where
michaelgugino points out that "when upgrading from 3.9, your
hostnames match the node names in 'oc get nodes' otherwise, we
won't be able to find the CSRs for your nodes."

In fact my issue is precisely this: the node names are in fact
their IPs and not the hostnames of the specific machines. It was
something that I saw upon installation, but as the 3.9 cluster
was functioning all right, I let it be.
The idea is that I (think) I have the DNS resolution set up
properly, with all machines being able to resolve each-other by
FQDNs, however the 3.9 installer configured the node names with
their respective IP addresses and I don't know how to address this.
I should mention that the cluster is deployed inside an Openstack
project, but the install config doesn't use OpenShift-Openstack
configuration. However when running the
~/openshift-ansible/playbooks/byo/openshift_facts.yml I get
references to the underlying OpenStack (somehow the installer
"figures out" the underlying OpenStack and treats it as a
provider, the way I see it). I've pasted the output for one of
the nodes below.

Has any of you come across this node name config problem and were
you able to solve it?
Is there any procedure to change node names of a working cluster?
I should say that the masters are also nodes (infrastructure), so
I'm guessing the procedure, if there is one, would have to do
with deprecating one master at a time, while for the nodes it would
be a delete/change config/re-add procedure.

Thank you!

Output from openshift_facts playbook:

ok: [node1.oshift-pinfold.intra] => {
    "result": {
    "ansible_facts": {
    "openshift": {
    "common": {
    "all_hostnames": [
    "node1.oshift-pinfold.intra",
    "192.168.150.22"
    ],
    "config_base": "/etc/origin",
    "deployment_subtype": "basic",
    "deployment_type": "origin",
    "dns_domain": "cluster.local",
    "examples_content_version": "v3.9",
    "generate_no_proxy_hosts": true,
    "hostname": 

Re: Node names as IPs not hostnames

2018-10-08 Thread Dan Pungă
I've done a bit of digging and apparently my problem is precisely 
connected to the fact that I'm running the cluster on the OpenStack 
provider.


Briefly put, the openshift_facts playbook relies on the 
openshift-ansible/roles/openshift_facts/library/openshift_facts.py 
script. This script uses the ansible.module_utils tools to discover the 
underlying system, including any existing IaaS provider with its 
details. In my case it discovers the OpenStack provider and when 
setting the hostnames, the provider configuration takes precedence over 
whatever I've configured at the VM level.


In my case, I haven't properly set up the FQDNs/hostnames at the 
OpenStack level. Instead, after I've created and launched the instances, 
I disabled at the VM level the ability of the cloud provider to reset my 
hostname definition/configuration and I thought this would be enough.


I guess I'll try a reinstall on a lab environment with the 
openshift_facts.py script modified so that it passes over the Openstack 
check and hope it does what I'd expect, which is to be agnostic to the 
type of hosts on which I install.
I actually thought that the only way the OpenShift/OKD installer would 
try to integrate with a provider was if I'd specifically set the 
openshift_cloudprovider_kind variable in the inventory file along with 
the rest of the specific variables.


Regards,
Dan Pungă

On 08.10.2018 18:44, Dan Pungă wrote:

Hello all,

I'm trying to upgrade a working cluster from Openshift Origin 3.9 to 
OKD 3.10 and the control plane update fails at one point with host not 
found.
I've looked a bit over the problem and found this issue on github: 
https://github.com/openshift/openshift-ansible/issues/9935 where 
michaelgugino points out that "when upgrading from 3.9, your hostnames 
match the node names in 'oc get nodes' otherwise, we won't be able to 
find the CSRs for your nodes."


In fact my issue is precisely this: the node names are in fact their 
IPs and not the hostnames of the specific machines. It was something 
that I saw upon installation, but as the 3.9 cluster was functioning 
all right, I let it be.
The idea is that I (think) I have the DNS resolution set up properly, 
with all machines being able to resolve each-other by FQDNs, however 
the 3.9 installer configured the node names with their respective IP 
addresses and I don't know how to address this.
I should mention that the cluster is deployed inside an Openstack 
project, but the install config doesn't use OpenShift-Openstack 
configuration. However when running the 
~/openshift-ansible/playbooks/byo/openshift_facts.yml I get references 
to the underlying OpenStack (somehow the installer "figures out" the 
underlying OpenStack and treats it as a provider, the way I see it). 
I've pasted the output for one of the nodes below.


Has any of you come across this node name config problem and were you 
able to solve it?
Is there any procedure to change node names of a working cluster? I 
should say that the masters are also nodes (infrastructure), so I'm 
guessing the procedure, if there is one, would have to do with 
deprecating one master at a time, while for the nodes it would be a 
delete/change config/re-add procedure.


Thank you!

Output from openshift_facts playbook:

ok: [node1.oshift-pinfold.intra] => {
    "result": {
    "ansible_facts": {
    "openshift": {
    "common": {
    "all_hostnames": [
    "node1.oshift-pinfold.intra",
    "192.168.150.22"
    ],
    "config_base": "/etc/origin",
    "deployment_subtype": "basic",
    "deployment_type": "origin",
    "dns_domain": "cluster.local",
    "examples_content_version": "v3.9",
    "generate_no_proxy_hosts": true,
    "hostname": "192.168.150.22",
    "internal_hostnames": [
    "192.168.150.22"
    ],
    "ip": "192.168.150.22",
    "kube_svc_ip": "172.30.0.1",
    "portal_net": "172.30.0.0/16",
    "public_hostname": "node1.oshift-pinfold.intra",
    "public_ip": "192.168.150.22",
    "short_version": "3.9",
    "version": "3.9.0",
    "version_gte_3_10": false,
    "version_gte_3_6": true,
    "version_gte_3_7": true,
    "version_gte_3_8": true,
  

Node names as IPs not hostnames

2018-10-08 Thread Dan Pungă

Hello all,

I'm trying to upgrade a working cluster from Openshift Origin 3.9 to OKD 
3.10 and the control plane update fails at one point with host not found.
I've looked a bit over the problem and found this issue on github: 
https://github.com/openshift/openshift-ansible/issues/9935 where 
michaelgugino points out that "when upgrading from 3.9, your hostnames 
match the node names in 'oc get nodes' otherwise, we won't be able to 
find the CSRs for your nodes."


In fact my issue is precisely this: the node names are in fact their IPs 
and not the hostnames of the specific machines. It was something that I 
saw upon installation, but as the 3.9 cluster was functioning all right, 
I let it be.
The idea is that I (think) I have the DNS resolution set up properly, 
with all machines being able to resolve each-other by FQDNs, however the 
3.9 installer configured the node names with their respective IP 
addresses and I don't know how to address this.
I should mention that the cluster is deployed inside an Openstack 
project, but the install config doesn't use OpenShift-Openstack 
configuration. However when running the 
~/openshift-ansible/playbooks/byo/openshift_facts.yml I get references 
to the underlying OpenStack (somehow the installer "figures out" the 
underlying OpenStack and treats it as a provider, the way I see it). I've 
pasted the output for one of the nodes below.


Has any of you come across this node name config problem and were you 
able to solve it?
Is there any procedure to change node names of a working cluster? I 
should say that the masters are also nodes (infrastructure), so I'm 
guessing the procedure, if there is one, would have to do with 
deprecating one master at a time, while for the nodes it would be a 
delete/change config/re-add procedure.


Thank you!

Output from openshift_facts playbook:

ok: [node1.oshift-pinfold.intra] => {
    "result": {
    "ansible_facts": {
    "openshift": {
    "common": {
    "all_hostnames": [
    "node1.oshift-pinfold.intra",
    "192.168.150.22"
    ],
    "config_base": "/etc/origin",
    "deployment_subtype": "basic",
    "deployment_type": "origin",
    "dns_domain": "cluster.local",
    "examples_content_version": "v3.9",
    "generate_no_proxy_hosts": true,
    "hostname": "192.168.150.22",
    "internal_hostnames": [
    "192.168.150.22"
    ],
    "ip": "192.168.150.22",
    "kube_svc_ip": "172.30.0.1",
    "portal_net": "172.30.0.0/16",
    "public_hostname": "node1.oshift-pinfold.intra",
    "public_ip": "192.168.150.22",
    "short_version": "3.9",
    "version": "3.9.0",
    "version_gte_3_10": false,
    "version_gte_3_6": true,
    "version_gte_3_7": true,
    "version_gte_3_8": true,
    "version_gte_3_9": true
    },
    "current_config": {
    "roles": [
    "node"
    ]
    },
    "node": {
    "bootstrapped": false,
    "nodename": "192.168.150.22",
    "sdn_mtu": "1408"
    },
    "provider": {
    "metadata": {
    "availability_zone": "nova",
    "ec2_compat": {
    "ami-id": "None",
    "ami-launch-index": "0",
    "ami-manifest-path": "FIXME",
    "block-device-mapping": {
    "ami": "vda",
    "ebs0": "/dev/vda",
    "ebs1": "/dev/vdb",
    "root": "/dev/vda"
    },
    "hostname": "node1.novalocal",
    "instance-action": "none",
    "instance-id": "i-0583",
    "instance-type": "1cpu-2ram-20disk",
    "local-hostname": "node1.novalocal",
    "local-ipv4": "192.168.150.22",
    "placement": {
    "availability-zone": "nova"
    },
    "public-hostname": "node1.novalocal",
    "public-ipv4": [],
    "public-keys/": "0=xxx",
    "reservation-id": "r-la13azpq",
    "security-groups": [
    "DefaultInternal",
    "oshift-node"
 

Openshift Origin 3.9 - Web-Console masterURL configMap and authentication problems - part 2

2018-06-04 Thread Dan Pungă

Hello all!

I'll have to resubmit this issue as it's still a problem for my 
installed cluster.


My environment consists of 2 masters and one load-balancer that has the 
default HAProxy installed by the automated install procedure via the 
openshift-ansible project. So it's basically the "default" version for a 
cluster with 2 masters.
What I find is that the config map for the web-console gets created by 
default with one of the masters as consolePublicURL and masterPublicURL 
and not the load-balancer entry-point, as I would have expected.


So if I try to simulate a failure of the master that's configured in 
the configMap, this effectively means that I cannot reach the 
web console, even though I do have the second master available.


I have tried editing the configMap for the webconsole and the 
oauthclient openshift-web-console, but this results in "invalid request" 
errors when trying to access the web console via a web browser.


The important edits are:
- in configmap webconsole-config:

  consolePublicURL: https://loadbalancer.my.net:8443/console/
  masterPublicURL: https://loadbalancer.my.net:8443

- in oauthclient openshift-web-console:

redirectURIs:
- https://loadbalancer.my.net:8443/
- https://master1.my.net:8443/console/
- https://master2.my.net:8443/console/
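
For completeness, the edits above were applied with the usual commands, 
roughly:

oc edit configmap webconsole-config -n openshift-web-console
oc edit oauthclient openshift-web-console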

I've already gone through some mail exchanges with Sam Padgett who has 
pointed out that it might be load-balancer config related, but, as I 
stated, the load-balancer is the default one configured by the 
installer. The HAProxy config file seems to be the default one provided 
in the openshift-ansible project.


I'd appreciate any ideas about where to look for this problem.

Thanks in advance!





Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Dan Pungă

Hi,

I've installed a similar configuration and it works. Origin 3.9 with 
metrics installed and ephemeral storage (emptyDir/default).

What I have specified in my inventory file is

openshift_metrics_image_prefix=docker.io/openshift/origin-
openshift_metrics_image_version=v3.9

so I also have the var for openshift_metrics_image_prefix, but I think 
the value there is actually the default one, so the config should be 
identical.


I've attached the replication controller for the hawkular-cassandra pod 
on my cluster (I've removed some annotations and state info). You could 
compare it to yours and see if there are differences. To see yours, run:

oc get rc/hawkular-cassandra-1 -n openshift-infra -o yaml

Hope it helps!

On 25.05.2018 13:29, Yu Wei wrote:

configuration as below:

openshift_metrics_install_metrics=true
openshift_metrics_image_version=v3.9
openshift_master_default_subdomain=paas-dev.dataos.io
#openshift_hosted_logging_deploy=true
openshift_logging_install_logging=true
openshift_logging_image_version=v3.9
openshift_disable_check=disk_availability,docker_image_availability,docker_storage
osm_etcd_image=registry.access.redhat.com/rhel7/etcd

openshift_enable_service_catalog=true
openshift_service_catalog_image_prefix=openshift/origin-
openshift_service_catalog_image_version=v3.9.0

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 6:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


So what was the configuration for metrics in the inventory file?


On 25/05/18 11:04, Yu Wei wrote:

Yes, I deployed that via ansible-playbooks.

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 5:51 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


How are you deploying this? Using the ansible playbooks?


On 25/05/18 10:25, Yu Wei wrote:

Hi,
I tried to deploy hawkular-cassandra on an OpenShift Origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - 
Directory /cassandra_data/data doesn't exist

ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has 
no permission to create directory /cassandra_data/data


I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux













hawk_cass.yaml
Description: application/yaml


Origin 3.9 master-api error

2018-05-25 Thread Dan Pungă

Hello all!

Yet another question/problem from yours truly ...:)

I'm trying to access the cluster with oc login which returns

Error from server (InternalError): Internal error occurred: unexpected 
response: 400


I've tried both the LB entry point and also connecting directly to a master.
Don't know if this is the reason, but the origin-master-api service 
shows some errors:



May 25 12:45:06 master1 atomic-openshift-master-api: E0525 
12:45:06.089926    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:08 master1 atomic-openshift-master-api: E0525 
12:45:08.093085    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:08 master1 atomic-openshift-master-api: E0525 
12:45:08.828681    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:10 master1 atomic-openshift-master-api: E0525 
12:45:10.184361    1418 osinserver.go:111] internal error: urls don't 
validate: https://master2.oshift-pinfold.intra:8443/oauth/token/implicit 
/ https://master1.oshift-pinfold.intra:8443/oauth/token/implicit
May 25 12:45:10 master1 atomic-openshift-master-api: E0525 
12:45:10.797415    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:24 master1 atomic-openshift-master-api: E0525 
12:45:24.120997    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:26 master1 atomic-openshift-master-api: E0525 
12:45:26.168915    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:26 master1 atomic-openshift-master-api: E0525 
12:45:26.625063    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:26 master1 atomic-openshift-master-api: E0525 
12:45:26.871406    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted


I've run into this issue before and a restart of the origin-master-api 
solved the connection problem, but this is not an option for long-term use.
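
For reference, the restart I mean is the master API unit on each master, 
roughly:

systemctl restart origin-master-api

(the unit shows up as atomic-openshift-master-api in the journal above, 
so the exact name may differ depending on the packaging).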


Re: Origin 3.9 Installation Issue

2018-05-25 Thread Dan Pungă

Hi,

Not sure about the error, but I've noticed a task that I haven't seen 
during my installation attempts (also Origin 3.9 on a cluster).
From what I see, "openshift_control_plane" is an Ansible role that's 
present on the master branch of the openshift-ansible repo, but not on 
the release-3.9 branch.
https://docs.openshift.org/latest/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin 
states that we should use the release-3.9 branch and that the master branch 
is intended for the currently developed version of OpenShift Origin.


 Hope it helps!

On 24.05.2018 21:52, Jason Marshall wrote:

Good afternoon,

I am attempting to do an advanced installation of Origin 3.9 in a 
cluster with a single master and 2 nodes, with a node role on the 
master server.


I am able to run the prerequisites.yml playbook with no issue. The 
deploy_cluster.yml  fails at the point where the origin.node service 
attempts to start on the master server. The error that comes up is:


TASK [openshift_control_plane : Start and enable self-hosting node] 

fatal: [openshift-master.expdev.local]: FAILED! => {"changed": false, 
"msg": "Unable to restart service origin-node: Job for 
origin-node.service failed because the control process exited with 
error code. See \"systemctl status origin-node.service\" and 
\"journalctl -xe\" for details.\n"}

...ignoring

    "May 24 14:40:53 cmhldshftlab01.expdev.local 
origin-node[2657]: /usr/local/bin/openshift-node: line 17: 
/usr/bin/openshift-node-config: No such file or directory",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: 
origin-node.service: main process exited, code=exited, status=1/FAILURE",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: 
Failed to start OpenShift Node.",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: Unit 
origin-node.service entered failed state.",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: 
origin-node.service failed."


INSTALLER STATUS 
***

Initialization : Complete (0:00:33)
Health Check   : Complete (0:00:24)
Node Preparation   : Complete (0:00:01)
etcd Install   : Complete (0:00:41)
Load Balancer Install  : Complete (0:00:18)
Master Install : In Progress (0:01:47)
    This phase can be restarted by running: 
playbooks/openshift-master/config.yml



Failure summary:


  1. Hosts:    openshift-master.expdev.local
 Play: Configure masters
 Task: openshift_control_plane : fail
 Message:  Node start failed.




I go looking for openshift-node-config, and can't find it anywhere. 
Nor can I find where this file comes from, even when using "yum 
whatprovides" or a find command in the openshift-ansible directory I 
am installing from.


Am I running into a potential configuration issue, or a bug with the 
version of origin I am running? My openshift-ansible folder was pulled 
down at around 2PM Eastern today, as I refreshed it to see if there 
was any difference in behavior.


Any suggestions or troubleshooting tips would be most appreciated.

Thank you,

Jason






Minimum hardware requirements regarding storage

2018-05-23 Thread Dan Pungă

Hello all!

With the installation of Openshift Origin 3.9 cluster I've followed the 
storage requirements described here: 
https://docs.openshift.org/latest/install_config/install/prerequisites.html#hardware


With the framework up and some test deployments run, I'm trying to 
centralize configuration for a later installation. I'm using CentOS 7 as 
the base operating system.


I wanted to ask about the storage requirements regarding masters and 
nodes. On my current framework I have 2 masters that are also 
infrastructure nodes and 2 compute nodes.


During install I could see there's heavy use of /tmp, so I understand 
why the 1GB requirement.
But looking at /usr/local/bin and especially /var, the space used 
by my current installation is far less than what is stated as a minimum.
/usr/local/bin is completely empty on all hosts, and for /var it's at 
most 3.6GB used on one of the masters (with 42GB allocated to meet the 
40GB free requirement/check); on the nodes the /var partition is below 
500MB used. The space is mostly used by system logging and OpenShift's 
audit logging (on the masters).
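
(The figures above are simply from checking the mount points on each 
host, e.g.:

df -h /tmp /usr/local/bin /var
)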


In what context do these partitions fill up to their stated 
requirements?


Thank you,
Dan Pungă




Re: Image pull error when using the integrated registry

2018-05-22 Thread Dan Pungă
46faf9c82c76f8513338c2a4b19b6f318b5" 
http.request.useragent="docker/1.13.1 go/go1.8.3 
kernel/3.10.0-862.2.3.el7.x86_64 os/linux arch/amd64 
UpstreamClient(go-dockerclient)" 
instance.id=be8746aa-b85c-40fb-978f-040a25b6c1d1 
vars.name=generic/ot-builder-npm-is 
vars.reference="sha256:9efdf954ec62e662e67d3f1c71f9d46faf9c82c76f8513338c2a4b19b6f318b5"







On 22.05.2018 19:49, Ben Parees wrote:



On Tue, May 22, 2018 at 11:46 AM, Dan Pungă <dan.pu...@gmail.com> wrote:


Hello all!

I'm experiencing a problem when trying to pull an image from
Openshift's container registry.
I've recently installed OpenshiftOrigin 3.9 with docker-registry
deployed.

I'm using 2 projects, one where "generic" images are built and one
for "applications". When running a build in the "application"
project that is based on an image from the "generic" project, the
build process fails at times with errors such as:

Pulling image 
docker-registry.default.svc:5000/generic/ot-builder-maven-is@sha256:ff3a7e558a44adc6212bd86dc3c0799537afd47f05be5678b0b986f7c7e3398c 
...
Checking for Docker config file for PULL_DOCKERCFG_PATH in path 
/var/run/secrets/openshift.io/pull
Using Docker config file /var/run/secrets/openshift.io/pull/.dockercfg
Step 1/11 : FROM 
docker-registry.default.svc:5000/generic/ot-builder-maven-is@sha256:ff3a7e558a44adc6212bd86dc3c0799537afd47f05be5678b0b986f7c7e3398c
Trying to pull repository 
docker-registry.default.svc:5000/generic/ot-builder-maven-is ...
error: build error: unauthorized: authentication required

The imagestream is there and the sha is the right one. This seems
to happen at random and it goes away if I pause between build
tries... so, random.


it might be enlightening to look at the logs from the registry pod (or 
pods, if you're running multiple replica instances) to see if it's 
getting errors talking to the API server.
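
A rough way to get those logs, assuming the default registry deployment 
name and project:

oc logs dc/docker-registry -n default
# or per pod, if several replicas are running:
oc get pods -n default -l deploymentconfig=docker-registry
oc logs <registry-pod-name> -n default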


I haven't done thorough tests to see if it's the same
behaviour for source imageStreams inside the same project...
Any idea what to try?

Not sure if this is related, but when trying to log in to the
registry from outside the cluster, I get
Error response from daemon: Get
https://docker-registry-default..:5000/v2/: net/http:
request canceled while waiting for connection (Client.Timeout
exceeded while awaiting headers)
This looks like timeout config/networking issues and I wonder if
it's what's causing the initial problem (even though the registry
storage node, the registry pod and the application node where the
build is executed are inside the same subnet).






--
Ben Parees | OpenShift





Image pull error when using the integrated registry

2018-05-22 Thread Dan Pungă

Hello all!

I'm experiencing a problem when trying to pull an image from Openshift's 
container registry.

I've recently installed OpenshiftOrigin 3.9 with docker-registry deployed.

I'm using 2 projects, one where "generic" images are built and one for 
"applications". When running a build in the "application" project that 
is based on an image from the "generic" project, the build process fails 
at times with errors such:


Pulling image 
docker-registry.default.svc:5000/generic/ot-builder-maven-is@sha256:ff3a7e558a44adc6212bd86dc3c0799537afd47f05be5678b0b986f7c7e3398c 
...
Checking for Docker config file for PULL_DOCKERCFG_PATH in path 
/var/run/secrets/openshift.io/pull

Using Docker config file /var/run/secrets/openshift.io/pull/.dockercfg
Step 1/11 : FROM 
docker-registry.default.svc:5000/generic/ot-builder-maven-is@sha256:ff3a7e558a44adc6212bd86dc3c0799537afd47f05be5678b0b986f7c7e3398c
Trying to pull repository 
docker-registry.default.svc:5000/generic/ot-builder-maven-is ...

error: build error: unauthorized: authentication required

The imagestream is there and the sha is the right one. This seems to 
happen at random and it goes away if I pause between build tries... so, 
random.
I haven't done thorough tests to see if it's the same behaviour for 
source imageStreams inside the same project...

Any idea what to try?

Not sure if this is related, but when trying to log in to the registry 
from outside the cluster, I get
Error response from daemon: Get 
https://docker-registry-default..:5000/v2/: net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)
This looks like timeout config/networking issues and I wonder if it's 
what's causing the initial problem (even though the registry storage node, 
the registry pod and the application node where the build is executed 
are inside the same subnet).




Re: Provisioning persistence for metrics with GlusterFS

2018-05-21 Thread Dan Pungă

Hello Rodrigo, I appreciate your answer!

In the meantime I had reached out to the heketi-cli related support (chat) 
and I got the same reference. There's a config map generated by the 
installer for the heketi-registry pod that has the default size for 
block-hosting volumes set at 100GB.
What I thought was that the "block hosting volume" would be the 
equivalent of a logical volume and that it (heketi-cli) tries to create an 
LV of size 100GB inside the already created 
vg_bd61a1e6f317bb9decade964449c12e8 (which has 26GB).


I've actually modified the encrypted JSON config and tried to restart 
the heketi-registry pod, which failed. So I ended up with some unmanaged 
GlusterFS storage, but since I'm on a test environment, it's fine. 
Otherwise, good to know for the future.


Now what I also don't understand is how the initial volume group for 
the registry got created with just 26GB of storage if the default is 
100GB. Is there a rule such as: "create a block-hosting volume of default 
size=100GB or max available"?
The integrated registry's persistence is set to 5GB. This is, I believe, 
a default value, as I haven't set anything related to it in my inventory 
file when installing Openshift Origin. How can I use the remaining 
storage in my vg with glusterFS and Openshift?
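
For reference, one way to see what heketi considers free space is roughly 
this, run from inside the heketi-registry pod (the admin key is in the 
pod's environment):

heketi-cli --server http://localhost:8080 --user admin --secret <admin-key> topology info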


Thank you!

On 19.05.2018 02:43, Rodrigo Bersa wrote:

Hi Dan,

The Gluster Block volumes work with the concept of a block-hosting 
volume, and these are created with 100GB by default.


To clarify, the block volumes will be provisioned over the block-hosting 
volumes.


Let's say you need a 10GB block volume: it will create a block-hosting 
volume with 100GB and then the 10GB block volume on top of it, as well as 
the next block volumes requested, until it reaches the 100GB. After that 
a new block-hosting volume will be created, and so on.


So, if you have just 26GB available in each server, it's not enough to 
create the block hosting volume. You may need to add more devices to 
your CNS Cluster to grow your free space.
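
(If I remember correctly, the default block-hosting volume size can also 
be tuned at install time through an openshift-ansible inventory variable, 
something along the lines of the following, though the exact name should 
be double-checked against the release branch in use:

[OSEv3:vars]
openshift_storage_glusterfs_registry_block_host_vol_size=50
)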



Kind regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil

rbe...@redhat.com  M: +55-11-99557-5841

TRIED. TESTED. TRUSTED.

Red Hat is recognized among the best companies to work for in 
Brazil by Great Place to Work.


On Wed, May 16, 2018 at 10:35 PM, Dan Pungă <dan.pu...@gmail.com> wrote:


Hello all!

I have setup a cluster with 3 glusterFS nodes for disk persistence
just as specified in the docs. I have configured the inventory
file to install the containerized version to be used by
Openshift's integrated registry. This works fine.

Now I wanted to install the metrics component and I followed the
procedure described here:

https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra


I end up with openshift-infra project set up, but with 3 pods
failing to start and I think this has to do with the PVC for
cassandra that fails to create.

oc get pvc metrics-cassandra-1 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandra-1","namespace":"openshift-infra"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"6Gi"}},"storageClassName":"glusterfs-registry-block"}}
    volume.beta.kubernetes.io/storage-provisioner: gluster.org/glusterblock
  creationTimestamp: 2018-05-17T00:38:34Z
  labels:

Re: Web-Console masterURL configMap and authentication problems

2018-05-17 Thread Dan Pungă

I'm using https://loadbalance.my.net:8443 to access the web console, yes.

I'm really stuck on this one and there doesn't seem to be much discussion 
on the topic; I can't even find previous bug reports of this sort.
I've tried different versions of the two configurations (the configMap 
and the oauthclient resource) with no result.



On 17.05.2018 16:46, Sam Padgett wrote:
Can you make sure when you first visit the console (before logging in) 
you use the public URL? One reason you'd see that error is if you 
visited https://master1.my.net:8443/console/ first instead of the 
public URL.


On Wed, May 16, 2018 at 7:34 PM, Dan Pungă <dan.pu...@gmail.com 
<mailto:dan.pu...@gmail.com>> wrote:


Thanks for the reply Sam!

Unfortunately with this setup I get only the "invalid request"
page that I've attached previously. But now the URL stays on
loadbalance.my.net:8443/console:


https://loadbalance.my.net:8443/console/error?error=invalid_request_description=Client%20state%20could%20not%20be%20verified_uri=


The new configMap looks like this:

apiVersion: v1
data:
  webconsole-config.yaml: |
    apiVersion: webconsole.config.openshift.io/v1
    clusterInfo:
  consolePublicURL: https://loadbalance.my.net:8443/console/
  loggingPublicURL: https://kibana.apps.my.net
  logoutPublicURL: ''
  masterPublicURL: https://loadbalance.my.net:8443
  metricsPublicURL: https://hawkular-metrics.apps.my.net/hawkular/metrics
    extensions:
  properties: {}
  scriptURLs: []
  stylesheetURLs: []
    features:
  clusterResourceOverridesEnabled: false
  inactivityTimeoutMinutes: 0
    kind: WebConsoleConfiguration
    servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: /var/serving-cert/tls.crt
  clientCA: ''
  keyFile: /var/serving-cert/tls.key
  maxRequestsInFlight: 0
  namedCertificates: null
  requestTimeoutSeconds: 0
kind: ConfigMap
metadata:
  creationTimestamp: 2018-05-16T23:11:11Z
  name: webconsole-config
  namespace: openshift-web-console
  resourceVersion: "1187596"
  selfLink:
/api/v1/namespaces/openshift-web-console/configmaps/webconsole-config
  uid: 6c33acdd-595e-11e8-8a63-fa163ed601cb

The new oauthclient/openshift-web-console is now:

apiVersion: v1
grantMethod: auto
kind: OAuthClient
metadata:
  creationTimestamp: 2018-05-16T23:20:11Z
  name: openshift-web-console
  resourceVersion: "1189032"
  selfLink: /oapi/v1/oauthclients/openshift-web-console
  uid: ae780fee-595f-11e8-8a63-fa163ed601cb
redirectURIs:
- https://loadbalance.my.net:8443/console
- https://master1.my.net:8443/console
- https://master2.my.net:8443/console

Anything else I need to check maybe?


On 17.05.2018 01:32, Sam Padgett wrote:

I'd make these updates to the config map:

consolePublicURL: https://loadbalance.my.net:8443/console/
masterPublicURL: https://loadbalance.my.net:8443

Then edit the OAuth client as cluster-admin to add the console
public URL to the allowed callbacks.

$ oc patch oauthclient/openshift-web-console -p
'{"redirectURIs":["https://loadbalance.my.net:8443/"]}'

Editing the OAuth client should fix the invalid request error on
login.
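
One caveat worth noting for the archive: redirectURIs is a plain list, so 
the patch above replaces the whole list. If the per-master console URLs 
are still needed, include them all in the same patch, for example:

$ oc patch oauthclient/openshift-web-console -p
'{"redirectURIs":["https://loadbalance.my.net:8443/console/","https://master1.my.net:8443/console/","https://master2.my.net:8443/console/"]}'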

Sam


On Wed, May 16, 2018 at 6:03 PM, Dan Pungă <dan.pu...@gmail.com
<mailto:dan.pu...@gmail.com>> wrote:

Hello all!

I'm setting up a recently installed Openshift Origin v3.9 and
I've discovered a problem with the web-console.
The environment has 2 masters: master1 and master2 and a
loadbalancer, all installed via openshift-ansible.
I'm accessing the web-console UI with
https://loadbalance.my.net:8443
I've noticed some problems with the login form in the
webconsole, where I got some error about invalid request
(attached image). On a second attempt I can log in successfully.

A second problem, maybe unrelated, is the content of the
webconsole-config configmap which has:
consolePublicURL: https://ma

Provisioning persistence for metrics with GlusterFS

2018-05-16 Thread Dan Pungă
Size:6 Clusters:[] Name: Hacount:3 Auth:true}
E0516 22:38:49.355122   1 glusterblock-provisioner.go:451] BLOCK 
VOLUME RESPONSE: 
E0516 22:38:49.355204   1 glusterblock-provisioner.go:453] [heketi] 
failed to create volume: Failed to allocate new block volume: No space
E0516 22:38:49.355262   1 controller.go:895] Failed to provision 
volume for claim "openshift-infra/metrics-cassandra-1" with StorageClass 
"glusterfs-registry-block": failed to create volume: [heketi] failed to 
create volume: Failed to allocate new block volume: No space
E0516 22:38:49.355365   1 goroutinemap.go:165] Operation for 
"provision-openshift-infra/metrics-cassandra-1[1191fb8d-5959-11e8-94c9-fa163e1cba7f]" 
failed. No retries permitted until 2018-05-16 22:40:51.355301022 + 
UTC m=+23465.283195247 (durationBeforeRetry 2m2s). Error: "failed to 
create volume: [heketi] failed to create volume: Failed to allocate new 
block volume: No space"
I0516 22:38:51.241605   1 leaderelection.go:198] stopped trying to 
renew lease to provision for pvc openshift-infra/metrics-cassandra-1, 
task failed


Regarding the no space message, I am certain that there is space on the 
device (if there isn't some glusterFS config that's done on the servers 
which prevents them from extending/creating the volumes). All disks have 
the same 26GB capacity and lvs on one of the machines shows:


  LV                                      VG                                  Attr       LSize  Pool                                Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool                             rootvg                              twi-aot--- <4,16g                                            52,37  2,62
  home                                    rootvg                              -wi-ao      1,00g
  root                                    rootvg                              -wi-ao      2,00g
  swap                                    rootvg                              -wi-a-      2,00g
  tmp                                     rootvg                              -wi-ao      1,17g
  usr                                     rootvg                              -wi-ao      4,00g
  var                                     rootvg                              -wi-ao      4,00g
  brick_7aa3a789badd1ae620a2bbefe51b8c73  vg_bd61a1e6f317bb9decade964449c12e8 Vwi-aotz--  2,00g tp_7aa3a789badd1ae620a2bbefe51b8c73        0,71
  brick_8818ffee7ab2244ca721b7d15ea1e514  vg_bd61a1e6f317bb9decade964449c12e8 Vwi-aotz--  5,00g tp_8818ffee7ab2244ca721b7d15ea1e514        7,57
  tp_7aa3a789badd1ae620a2bbefe51b8c73     vg_bd61a1e6f317bb9decade964449c12e8 twi-aotz--  2,00g                                            0,71   0,33
  tp_8818ffee7ab2244ca721b7d15ea1e514     vg_bd61a1e6f317bb9decade964449c12e8 twi-aotz--  5,00g                                            7,57   0,29
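
For completeness, the free space left in the gluster VG can be checked 
with vgs, and my understanding (not verified) is that extra capacity is 
supposed to be added through heketi rather than plain LVM, e.g.:

vgs vg_bd61a1e6f317bb9decade964449c12e8     # the VFree column
# device name and node id below are just placeholders
heketi-cli device add --name /dev/vdc --node <node-id>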


Any ideas where to look for misconfigurations?

Thank you,
Dan Pungă
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Web-Console masterURL configMap and authentication problems

2018-05-16 Thread Dan Pungă

Thanks for the reply Sam!

Unfortunately with this setup I get only the "invalid request" page that 
I've attached previously. But now the URL stays on 
loadbalance.my.net:8443/console:


https://loadbalance.my.net:8443/console/error?error=invalid_request_description=Client%20state%20could%20not%20be%20verified_uri=

The new configMap looks like this:

apiVersion: v1
data:
  webconsole-config.yaml: |
    apiVersion: webconsole.config.openshift.io/v1
    clusterInfo:
  consolePublicURL: https://loadbalance.my.net:8443/console/
  loggingPublicURL: https://kibana.apps.my.net
  logoutPublicURL: ''
  masterPublicURL: https://loadbalance.my.net:8443
  metricsPublicURL: 
https://hawkular-metrics.apps.my.net/hawkular/metrics

    extensions:
  properties: {}
  scriptURLs: []
  stylesheetURLs: []
    features:
  clusterResourceOverridesEnabled: false
  inactivityTimeoutMinutes: 0
    kind: WebConsoleConfiguration
    servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: /var/serving-cert/tls.crt
  clientCA: ''
  keyFile: /var/serving-cert/tls.key
  maxRequestsInFlight: 0
  namedCertificates: null
  requestTimeoutSeconds: 0
kind: ConfigMap
metadata:
  creationTimestamp: 2018-05-16T23:11:11Z
  name: webconsole-config
  namespace: openshift-web-console
  resourceVersion: "1187596"
  selfLink: 
/api/v1/namespaces/openshift-web-console/configmaps/webconsole-config

  uid: 6c33acdd-595e-11e8-8a63-fa163ed601cb

The new oauthclient/openshift-web-console is now:

apiVersion: v1
grantMethod: auto
kind: OAuthClient
metadata:
  creationTimestamp: 2018-05-16T23:20:11Z
  name: openshift-web-console
  resourceVersion: "1189032"
  selfLink: /oapi/v1/oauthclients/openshift-web-console
  uid: ae780fee-595f-11e8-8a63-fa163ed601cb
redirectURIs:
- https://loadbalance.my.net:8443/console
- https://master1.my.net:8443/console
- https://master2.my.net:8443/console

Anything else I need to check maybe?

On 17.05.2018 01:32, Sam Padgett wrote:

I'd make these updates to the config map:

consolePublicURL: https://loadbalance.my.net:8443/console/
masterPublicURL: https://loadbalance.my.net:8443

Then edit the OAuth client as cluster-admin to add the console public 
URL to the allowed callbacks.


$ oc patch oauthclient/openshift-web-console -p 
'{"redirectURIs":["https://loadbalance.my.net:8443/"]}'


Editing the OAuth client should fix the invalid request error on login.

Sam


On Wed, May 16, 2018 at 6:03 PM, Dan Pungă <dan.pu...@gmail.com 
<mailto:dan.pu...@gmail.com>> wrote:


Hello all!

I'm setting up a recently installed Openshift Origin v3.9 and I've
discovered a problem with the web-console.
The environment has 2 masters: master1 and master2 and a
loadbalancer, all installed via openshift-ansible.
I'm accessing the web-console UI with
https://loadbalance.my.net:8443
I've noticed some problems with the login form in the webconsole,
where I got some error about invalid request (attached image). On
a second attempt I can log in successfully.

A second problem, maybe unrelated, is the content of the
webconsole-config configmap which has:
consolePublicURL: https://master1.my.net:8443/console/
loggingPublicURL: https://
logoutPublicURL: ''
masterPublicURL: https://master1.my.net:8443

This looks like the configuration uses only the master1. I've
tried modifying the values for consolePublicURL and
masterPublicURL to point to loadbalance.my.net:8443, but after pod
restart I get a
json response with invalid request and the console doesn't load.
I've checked the master-config.yaml on both masters and it "looks"
fine to me:

masterPublicURL: https://master1.my.net:8443
  assetPublicURL: https://master1.my.net:8443/console/
  masterPublicURL: https://master1.my.net:8443
  masterURL: https://loadbalance.my.net:8443
  subdomain: my.net

and the equivalent for master2.

Also, I've read through the archives and I've checked the 
oauthclient/openshift-web-console resource which is

apiVersion: v1
grantMethod: auto
kind: OAuthClient
metadata:
  creationTimestamp: 2018-05-11T13:09:54Z
  name: openshift-web-console
  resourceVersion: "1123438"
  selfLink: /oapi/v1/oauthclients/openshift-web-console
  uid: 98c50270-551c-11e8-a51b-fa163ed601cb
redirectURIs:
- https://master1.my.net:8443/console/
- https://master2.my.net:8443/console/


Do you have any ideas about these 2 issues? Especially the second 

Web-Console masterURL configMap and authentication problems

2018-05-16 Thread Dan Pungă

Hello all!

I'm setting up a recently installed Openshift Origin v3.9 and I've 
discovered a problem with the web-console.
The environment has 2 masters: master1 and master2 and a loadbalancer, 
all installed via openshift-ansible.

I'm accessing the web-console UI with https://loadbalance.my.net:8443
I've noticed some problems with the login form in the webconsole, where 
I got some error about invalid request (attached image). On a second 
attempt I can log in successfully.


A second problem, maybe unrelated, is the content of the 
webconsole-config configmap which has:

consolePublicURL: https://master1.my.net:8443/console/
loggingPublicURL: https://
logoutPublicURL: ''
masterPublicURL: https://master1.my.net:8443

This looks like the configuration uses only the master1. I've tried 
modifying the values for consolePublicURL and masterPublicURL to point 
to loadbalance.my.net:8443, but after pod restart I get a json response 
with invalid request and the console doesn't load.
I've checked the master-config.yaml on both masters and it "looks" fine 
to me:


masterPublicURL: https://master1.my.net:8443
  assetPublicURL: https://master1.my.net:8443/console/
  masterPublicURL: https://master1.my.net:8443
  masterURL: https://loadbalance.my.net:8443
  subdomain: my.net

and the equivalent for master2.

Also, I've read through the archives and I've checked the 
oauthclient/openshift-web-console resource which is


apiVersion: v1
grantMethod: auto
kind: OAuthClient
metadata:
  creationTimestamp: 2018-05-11T13:09:54Z
  name: openshift-web-console
  resourceVersion: "1123438"
  selfLink: /oapi/v1/oauthclients/openshift-web-console
  uid: 98c50270-551c-11e8-a51b-fa163ed601cb
redirectURIs:
- https://master1.my.net:8443/console/
- https://master2.my.net:8443/console/


Do you have any ideas about these 2 issues? Especially the second one.

Thank you for any help in advance,
Dan Pungă

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OShift install - DNS resolution and NetworkManager issues

2018-05-11 Thread Dan Pungă
Managed to solve this by adding my DNS entry in the interface 
configuration file for NetworkManager on all DNS client hosts.


So, keep /etc/NetworkManager/NetworkManager.conf :
.
[main]
"dns=none"

and add DNS1 for my dns-server
/etc/sysconfig/network-scripts/ifcfg-
PEERDNS=YES #as required by the OShift install procedure for dnsmasq
DNS1=192.168.150.5 #my internal dns-server

This way /etc/resolv.conf remains untouched by NetworkManager, which 
still gets the DNS servers via DHCP but also adds mine.

nmcli conn show "System eth0" | grep IP4
shows the DNSs got via DHCP + my 192.168.150.5
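
A quick way to verify the whole chain after the change (assuming dig is 
installed; the IPs/names below are from my setup):

cat /etc/dnsmasq.d/origin-upstream-dns.conf   # should now also list 192.168.150.5
dig lb.oshift-pinfold.intra                   # goes through the node-local dnsmasq
dig @192.168.150.5 lb.oshift-pinfold.intra    # straight against the internal DNS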

On 09.05.2018 19:30, Dan Pungă wrote:

Hello all!

Let me first start by specifying that my problem isn't specifically 
OpenShift related, but I'm trying this mailing list in hopes that 
someone faced my problem before and somehow managed to solve it.


I'm running an OpenShift-Origin installation using the 
openshift-ansible playbooks and it all goes pretty fine until ansible 
is trying to install and configure the nodes.


Some background: all hosts are running updated CentOS 7 and are part 
of a private network with IPs allocated through DHCP. I'm running a 
separate host that is configured as a dns server (as required by the 
install procedure) and all the other hosts are configured to use this 
dns-server host for name resolution. In order to achieve this I had to 
disable the NetworkManager service's ability to configure DNS. This 
was done by specifying in /etc/NetworkManager/NetworkManager.conf 
"dns=none" under the main section. This option/configuration prevents 
the overwrite of /etc/resolv.conf by the NetworkManager service.


The OpenShift installation runs fine up to a task in the "Install 
nodes" playbook/batch where it tries starting and enabling the 
origin-node services. Curiously enough, this task fails for only 1 node, 
while the other 3 seem to pass it, but at a later point, where the 
task is to restart the origin-node service, the remaining 3 fail as well.


By inspecting the journalctl logs for origin-node, I've found that 
there was no connectivity to a host on the network
dial tcp: lookup lb.oshift-pinfold.intra on 192.168.150.16:53: no 
such host
In fact there's no connectivity to the entire network and 
/etc/resolv.conf has been rewritten.


By doing some research on what was going on, I've found out that 
there's a script copied and run by the OpenShift installer: 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh that overwrites the 
/etc/resolv.conf. I'm not really experienced in how this works, but 
I'm guessing that the behaviour would be to pass name-resolution to 
the dnsmasq service. I've found that the script also generates 
/etc/origin/node/resolv.conf and 
/etc/dnsmasq.d/origin-upstream-dns.conf which seems to copy the 
nameservers found in /etc/resolv.conf at first run.
However, editing /etc/resolv.conf by hand to restore the initial 
configuration and doing a systemctl restart NetworkManager disregards 
my internal nameserver.


I'm thinking that the NetworkManager service somehow overwrites the 
/etc/resolv.conf file prior to invoking the 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh script.
I've tried manually editing /etc/origin/node/resolv.conf and 
/etc/dnsmasq.d/origin-upstream-dns.conf and adding the dns server 
without restarting NetworkManager service. This way name resolution is 
functioning and I'm also able to start the origin-node service, but 
I'm afraid this is not suited for the automated installation process.


Any help/hints are much appreciated!

Dan Pungă

==

actual behaviour:
### the starting contents of /etc/resolv.conf (with my internal dns 
server configured as 192.168.150.5)

cat /etc/resolv.conf
search openstacklocal
search oshift-pinfold.intra
nameserver 192.168.150.5
nameserver 8.8.8.8
nameserver 8.8.4.4

###output of initial contents of configuration files
cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=8.8.8.8
server=8.8.4.4
cat /etc/origin/node/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

###empty configuration files written by 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh (just to prove the 
point)

 > /etc/dnsmasq.d/origin-upstream-dns.conf
> /etc/origin/node/resolv.conf

### restart NetworkManager
systemctl restart NetworkManager

###results...
cat /etc/resolv.conf
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
search cluster.local openstacklocal
search cluster.local oshift-pinfold.intra
nameserver 192.168.150.22

cat /etc/origin/node/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=8.8.8.8
server=8.8.4.4

So the /etc/NetworkManager/dispatcher.d/99-origin-dns.sh script should 
write to the config files, shown here, the nameservers found in 
/etc/resolv.conf (when it's not watermarked). But it doesn't write the 
nameserver for my internal dns. The 8.8.8.8 and 8.8.

OShift install - DNS resolution and NetworkManager issues

2018-05-09 Thread Dan Pungă

Hello all!

Let me first start by specifying that my problem isn't specifically 
OpenShift related, but I'm trying this mailing list in hopes that 
someone faced my problem before and somehow managed to solve it.


I'm running an OpenShift-Origin installation using the openshift-ansible 
playbooks and it all goes pretty fine until ansible is trying to install 
and configure the nodes.


Some background: all hosts are running updated CentOS 7 and are part of 
a private network with IPs allocated through DHCP. I'm running a 
separate host that is configured as a dns server (as required by the 
install procedure) and all the other hosts are configured to use this 
dns-server host for name resolution. In order to achieve this I had to 
disable the NetworkManager service's ability to configure DNS. This was 
done by specifying in /etc/NetworkManager/NetworkManager.conf "dns=none" 
under the main section. This option/configuration prevents the overwrite 
of /etc/resolv.conf by the NetworkManager service.


The OpenShift installation runs fine up to a task in the "Install nodes" 
playbook/batch where it tries starting and enabling the origin-node 
services. Curiously enough, this task fails for only 1 node, while the 
other 3 seem to pass it, but at a later point, where the task is to 
restart the origin-node service, the remaining 3 fail as well.


By inspecting the journalctl logs for origin-node, I've found that there 
was no connectivity to a host on the network
dial tcp: lookup lb.oshift-pinfold.intra on 192.168.150.16:53: no 
such host
In fact there's no connectivity to the entire network and 
/etc/resolv.conf has been rewritten.


By doing some research on what was going on, I've found out that there's 
a script copied and run by the OpenShift installer: 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh that overwrites the 
/etc/resolv.conf. I'm not really experienced in how this works, but I'm 
guessing that the behaviour would be to pass name-resolution to the 
dnsmasq service. I've found that the script also generates 
/etc/origin/node/resolv.conf and /etc/dnsmasq.d/origin-upstream-dns.conf 
which seems to copy the nameservers found in /etc/resolv.conf at first run.
However, editing /etc/resolv.conf by hand to restore the initial 
configuration and doing a systemctl restart NetworkManager disregards 
my internal nameserver.


I'm thinking that the NetworkManager service somehow overwrites the 
/etc/resolv.conf file prior to invoking the 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh script.
I've tried manually editing /etc/origin/node/resolv.conf and 
/etc/dnsmasq.d/origin-upstream-dns.conf and adding the dns server 
without restarting NetworkManager service. This way name resolution is 
functioning and I'm also able to start the origin-node service, but I'm 
afraid this is not suited for the automated installation process.


Any help/hints are much appreciated!

Dan Pungă

==

actual behaviour:
### the starting contents of /etc/resolv.conf (with my internal dns 
server configured as 192.168.150.5)

cat /etc/resolv.conf
search openstacklocal
search oshift-pinfold.intra
nameserver 192.168.150.5
nameserver 8.8.8.8
nameserver 8.8.4.4

###output of initial contents of configuration files
cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=8.8.8.8
server=8.8.4.4
cat /etc/origin/node/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

###empty configuration files written by 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh (just to prove the point)

 > /etc/dnsmasq.d/origin-upstream-dns.conf
> /etc/origin/node/resolv.conf

### restart NetworkManager
systemctl restart NetworkManager

###results...
cat /etc/resolv.conf
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
search cluster.local openstacklocal
search cluster.local oshift-pinfold.intra
nameserver 192.168.150.22

cat /etc/origin/node/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=8.8.8.8
server=8.8.4.4

So the /etc/NetworkManager/dispatcher.d/99-origin-dns.sh script should 
write to the config files, shown here, the nameservers found in 
/etc/resolv.conf (when it's not watermarked). But it doesn't write the 
nameserver for my internal dns. The 8.8.8.8 and 8.8.4.4 could be 
confusing, but if I make an /etc/resolv.conf with some bogus 
nameservers, the result is precisely the same. I don't know how it finds 
those 8.8 nameservers and my guess, as I mentioned in the first part of 
the message, is that there's some config "elsewhere" with the 8.8.. and 
it is used by the NetworkManager service to overwrite the 
/etc/resolv.conf file and it is this modified version that the 
/etc/NetworkManager/dispatcher.d/99-origin-dns.sh script finds and works 
with
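
If anyone wants to hunt down where such fallback nameservers come from on 
their own hosts, the two places I'd check first (just a guess on my part) 
are what NetworkManager reports per device and what the ifcfg files define:

nmcli dev show eth0 | grep IP4.DNS
grep -i "^DNS" /etc/sysconfig/network-scripts/ifcfg-*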
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-22 Thread Dan Pungă

Thank you all for your considerations and advice!

I just wanted to get some idea about hook-uses and how/if I should work 
with them at this point. I guess I first relied more on the naming of 
the option..."deployment lifecycle hook" and description "allow behavior 
to be injected into the deployment process".


Now, if you'd allow a newbie to make some considerations, this is a 
bit misleading. What I initially thought after reading this, is that 
these are running environments somewhat similar to what Tomas linked in 
the first reply with the Kubernetes initContainer.
In fact these are separate, (even more...) ephemeral pods that get 
instantiated from what the DeploymentConfig states. They're not "hooks" 
(which I interpreted as "an attachment to") for the deployment, but 
rather volatile replicas used to do some "things" outside the scope of 
the deployment itself, after which they're gone... blink pods :)
Now, for the standard examples that I see online with database 
provisioning/replication etc., not one of them explicitly underlined 
that, in order for this to work, you need to use persistent volumes, 
because that external resource is where whatever the pre/mid/post hook 
does gets persisted. Or maybe that's just standard knowledge that I 
didn't have..
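
Just to make the persistence point concrete, this is roughly the shape of 
a post hook as I understand it (a sketch only; "data" would have to be a 
volume in the same DeploymentConfig backed by a PVC, and init-db.sh is a 
made-up script name):

strategy:
  type: Rolling
  rollingParams:
    post:
      failurePolicy: Abort
      execNewPod:
        containerName: database
        command: ["/bin/sh", "-c", "/opt/init-db.sh"]
        volumes:
        - data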
(just as a side issue and coming from the recent exchange between Graham 
and Fernando: 
https://blog.openshift.com/using-post-hook-to-initialize-a-database/ at 
the very start of the post: "


You can solve this in multiple ways, such as:

 * You can create a custom database image and bake in the script into
   that image
 * You can do the DB initialization from the application container that
   runs the script on the database

"
Now I wonder how your colleague would implement the first option. I'm 
guessing more or less Graham's approach.)


Thank you Graham for your examples! I've actually tried changing the 
start command for the pod, more or less in the same way. Not through a 
mounted ConfigMap, but through a script that was doing my changes and 
then starting the pod (it was available to the image because I was not in 
your scenario with a standard image; I was/am using a custom one). However 
this failed. I haven't really checked to see the actual reason. Might be 
that the primary process was the script and at some point it 
exited (didn't include the actual start command), or the timeout for the 
readiness probe was exceeded.

The trick with the wrapper is greatly appreciated, thank you!

In the end I got it solved with Fernando's approach to push the 
configuration at build time. I was not bound to not being able to create 
an extra layer/custom image. In fact I was actually on the "extra" layer 
of composing the artifact image (built with S2I) with the Runtime 
Wildfly instance. My inline Dockerfile got a bit more content than a 
FROM, COPY and CMD.
Another advantage is that rolling out a new deployment is quicker, with 
the old pods being switched to the new ones faster. In a stateless 
environment, such as mine, this is nice.


Thanks again,
Dan Pungă

PS: I'm kind of interfering in an ongoing discussion. Please, don't let 
my message stop you; this is first-hand knowledge! :)


On 22.02.2018 14:42, Fernando Lozano wrote:

Hi Graham,

If the image was designed to be configured using environment 
variables or configuration files that can be provided as volumes, yes 
you don't need a custom image. But from Dan's message I expect more 
extensive customizations which would become cumbersome.
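
For the archive, the volume approach is usually just a couple of commands, 
roughly like this (untested here, the file and dc names are examples):

$ oc create configmap myapp-config --from-file=standalone.xml
$ oc set volume dc/myapp --add --type=configmap \
    --configmap-name=myapp-config --mount-path=/opt/app-config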


And the idea of forcing the image to run a different command than its 
entrypoint, then get more files from a volume, to customize the image 
or compensate for deficiencies in the original entrypoint command, 
also seems cumbersome to me. You are making extensive changes each 
time you start the container (to its ephemeral read/write layer). I 
don't see the advantage compared to just creating a child image with 
an extra layer that has the customizations.



[]s, Fernando Lozano



On Wed, Feb 21, 2018 at 7:40 PM, Graham Dumpleton 
<gdump...@redhat.com <mailto:gdump...@redhat.com>> wrote:


Another example of where this can be useful is where the primary
process in the container doesn't do what is required of process
ID 1. That is, reap zombie processes. If that becomes an issue
you can use a run script wrapper like:

#!/bin/sh

# forward TERM/INT to the wrapped process so it can shut down cleanly
trap 'kill -TERM $PID' TERM INT

# start the real entrypoint in the background and remember its PID
/usr/libexec/s2i/run &
PID=$!

# first wait returns when the process exits or a trapped signal arrives
wait $PID
trap - TERM INT
# second wait collects the final exit status after signal forwarding
wait $PID
STATUS=$?
exit $STATUS

This simple alternative to a mini init process manager such as
tini will work fine in many cases.

Replace /usr/libexec/s2i/run with actual program to run.

Graham


On 22 Feb 2018, at 9:33 am, Graham Dumpleton
<gdump...@redhat.com <mailto:gdump...@redhat.com>> wrote:

Badly worded perhaps.

Re: Parametrizing a BuildConfiguration with Docker ARGs

2018-02-08 Thread Dan Pungă
Yes, in fact your answer made me wonder about my config(and reference 
docs that I use, but that's my prob..:) ).


Not to add to the confusion, but I use minishift, so a local cluster 
environment. Current OC version is 3.6.0 and Docker is 1.12.
However I run my Docker tests on a separate installation which is the 
version that I've posted...17.12.



If OShift injects its "from" instruction in the "right" place, I'd 
guess the behaviour - when using ARGs as first instructions in the 
Dockerfile - would be something like:
- if the docker daemon is 17.06 or newer (if the top voted reply here is 
right 
https://stackoverflow.com/questions/40273070/docker-build-arg-in-source-file), 
the build process will work with OShift (I don't know starting with what 
version), but the FROM in the Dockerfile will be replaced with the "from" 
in OShift's BuildConfig; the very first ARG instructions will just be 
ignored(??)
- if the daemon is pre 17.06, OShift's build will crash as the Docker build 
will crash, when FROM is not the very first instruction in the 
Dockerfile


But now it's just guessing from my part. If/when I have the time I'll 
try to do some tests.

For the time being no ARGs prior to FROM for me. :)

Thanks again for the help, Ben!


On 08.02.2018 23:01, Ben Parees wrote:



On Thu, Feb 8, 2018 at 3:53 PM, Dan Pungă <dan.pu...@gmail.com 
<mailto:dan.pu...@gmail.com>> wrote:


I guess it does have to do with your Docker version.
I have:

Client:
 Version:    17.12.0-ce
 API version:    1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built:    Wed Dec 27 20:11:19 2017
 OS/Arch:    linux/amd64

Server:
 Engine:
  Version:    17.12.0-ce
  API version:    1.35 (minimum version 1.12)
  Go version:    go1.9.2
  Git commit:    c97c6d6
  Built:    Wed Dec 27 20:09:53 2017
  OS/Arch:    linux/amd64
  Experimental:    false

Yes, found this in the reference manual:
https://docs.docker.com/v1.13/engine/reference/builder/#/from
"/As such, a valid|Dockerfile|//must have//|FROM|as its
first instruction/"

while for 17.12, it appears they have relaxed this requirement.



Based on that, I would expect whatever behavior you're observing is 
driven by the docker daemon version on your openshift cluster nodes.






On 08.02.2018 17:19, Ben Parees wrote:



    On Wed, Feb 7, 2018 at 10:30 AM, Dan Pungă <dan.pu...@gmail.com
<mailto:dan.pu...@gmail.com>> wrote:

Thanks for your answers Ben!

And yes, apparently, I've skimmed over this bit of the
docs, which explains why the devs didn't have to implement
handling ARGs before the FROM instructions in Dockerfiles...
:) So I'll just have to point to different images in the yaml
config.

Regarding the first reply and example, the ARG instruction
has scope within Dockerfile. So, in your second example, the
OS_name is available just for the FROM instruction, after
which it loses scope. You have to redefine it to use it
after the FROM. However the --build-arg overwrites all references:

ARG OS_name="centos"
ARG OS_version="7"

FROM $OS_name:$OS_version

ARG OS_version="foobar"

RUN echo $OS_version
RUN exit 1
=

$ docker build .   ##so with default values for ARG taken
into consideration


This fails for me.  And not because of the exit 1(which is
intentional so we can see the echo output):

$ cat Dockerfile
ARG OS_name="centos"
ARG OS_version="7"

FROM $OS_name:$OS_version

ARG OS_version="foobar"

RUN echo $OS_version
RUN exit 1

$ docker build .
Sending build context to Docker daemon 3.072 kB
Step 1/6 : ARG OS_name="centos"
Please provide a source image with `from` prior to commit


$ docker version
Client:
 Version: 1.13.1
 API version: 1.26
 Package version: docker-1.13.1-44.git584d391.fc27.x86_64
 Go version:  go1.9.1
 Git commit:  caba767-unsupported
 Built:   Thu Nov 23 21:17:26 2017
 OS/Arch: linux/amd64

Server:
 Version: 1.13.1
 API version: 1.26 (minimum version 1.12)
 Package version: docker-1.13.1-44.git584d391.fc27.x86_64
 Go version:  go1.9.1
 Git commit:  caba767-unsupported
 Built:   Thu Nov 23 21:17:26 2017
 OS/Arch: linux/amd64
 Experimental:    false


Sending build context to Docker daemon  2.048kB
Step 1/6 : ARG OS_name="centos"
Step 2/6 : ARG OS_version="7"
Step 3/6 : FROM $OS_name:$OS_version
 ---> ff4

Re: Parametrizing a BuildConfiguration with Docker ARGs

2018-02-08 Thread Dan Pungă

I guess it does have to do with your Docker version.
I have:

Client:
 Version:    17.12.0-ce
 API version:    1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built:    Wed Dec 27 20:11:19 2017
 OS/Arch:    linux/amd64

Server:
 Engine:
  Version:    17.12.0-ce
  API version:    1.35 (minimum version 1.12)
  Go version:    go1.9.2
  Git commit:    c97c6d6
  Built:    Wed Dec 27 20:09:53 2017
  OS/Arch:    linux/amd64
  Experimental:    false

Yes, found this in the reference manual: 
https://docs.docker.com/v1.13/engine/reference/builder/#/from
"/As such, a valid|Dockerfile|//must have//|FROM|as its first 
instruction/"


while for 17.12, it appears they have relaxed this requirement.



On 08.02.2018 17:19, Ben Parees wrote:



On Wed, Feb 7, 2018 at 10:30 AM, Dan Pungă <dan.pu...@gmail.com 
<mailto:dan.pu...@gmail.com>> wrote:


Thanks for your answers Ben!

And yes, apparently, I've skimmed over this bit of the docs,
which explains why the devs didn't have to implement handling ARGs
before the FROM instructions in Dockerfiles... :) So I'll just
have to point to different images in the yaml config.

Regarding the first reply and example, the ARG instruction has
scope within Dockerfile. So, in your second example, the OS_name
is available just for the FROM instruction, after which it loses
scope. You have to redefine it to use it after the FROM. However
the --build-arg overwrites all references:

ARG OS_name="centos"
ARG OS_version="7"

FROM $OS_name:$OS_version

ARG OS_version="foobar"

RUN echo $OS_version
RUN exit 1
=

$ docker build .   ##so with default values for ARG taken into
consideration


This fails for me.  And not because of the exit 1(which is intentional 
so we can see the echo output):


$ cat Dockerfile
ARG OS_name="centos"
ARG OS_version="7"

FROM $OS_name:$OS_version

ARG OS_version="foobar"

RUN echo $OS_version
RUN exit 1

$ docker build .
Sending build context to Docker daemon 3.072 kB
Step 1/6 : ARG OS_name="centos"
Please provide a source image with `from` prior to commit


$ docker version
Client:
 Version: 1.13.1
 API version: 1.26
 Package version: docker-1.13.1-44.git584d391.fc27.x86_64
 Go version:  go1.9.1
 Git commit:  caba767-unsupported
 Built:   Thu Nov 23 21:17:26 2017
 OS/Arch: linux/amd64

Server:
 Version: 1.13.1
 API version: 1.26 (minimum version 1.12)
 Package version: docker-1.13.1-44.git584d391.fc27.x86_64
 Go version:  go1.9.1
 Git commit:  caba767-unsupported
 Built:   Thu Nov 23 21:17:26 2017
 OS/Arch: linux/amd64
 Experimental:    false


Sending build context to Docker daemon  2.048kB
Step 1/6 : ARG OS_name="centos"
Step 2/6 : ARG OS_version="7"
Step 3/6 : FROM $OS_name:$OS_version
 ---> ff426288ea90
Step 4/6 : ARG OS_version="foobar"
 ---> Running in b5ac67ae7fc5
Removing intermediate container b5ac67ae7fc5
 ---> 753bc14d3a4b
Step 5/6 : RUN echo $OS_version
 ---> Running in 15c759544a4b
_*foobar*_
Removing intermediate container 15c759544a4b
 ---> 0e1d41c4ddda
Step 6/6 : RUN exit 1
 ---> Running in 9dfc7176d6b9
The command '/bin/sh -c exit 1' returned a non-zero code: 1

=

$ docker build -t tst --build-arg OS_version=6.9 .  ##the
OS_version passed as cmd option is taken into account in all scopes
Sending build context to Docker daemon  2.048kB
Step 1/6 : ARG OS_name="centos"
Step 2/6 : ARG OS_version="7"
Step 3/6 : FROM $OS_name:$OS_version
6.9: Pulling from library/centos
993c50d47469: Pull complete
Digest:
sha256:5cf988fbf143af398f879bd626ee677da3f8d229049b7210790928a02613ab26
Status: Downloaded newer image for _*centos:6.9*_
 ---> fca4c61d0fa7
Step 4/6 : ARG OS_version="foobar"
 ---> Running in d58a5321aa65
Removing intermediate container d58a5321aa65
 ---> d345fcd2fe46
Step 5/6 : RUN echo $OS_version
 ---> Running in a408a3cd16ee
_*6.9*_
Removing intermediate container a408a3cd16ee
 ---> 2d8e5ee7cc03
Step 6/6 : RUN exit 1
 ---> Running in 61b8011e52dd
The command '/bin/sh -c exit 1' returned a non-zero code: 1



On 07.02.2018 16:50, Ben Parees wrote:

btw, openshift will happily substitute your FROM statement w/ an
image referenced by your BuildConfig, so if that's your goal,
perhaps that is a way to accomplish it.


https://docs.openshift.org/latest/dev_guide/builds/build_strategies.html#docker-strategy-from


On Wed, Feb 7, 2018 at 9:48 AM, Ben Parees <bpa

Re: Parametrizing a BuildConfiguration with Docker ARGs

2018-02-07 Thread Dan Pungă

Thanks for your answers Ben!

And yes, apparently, I've skimmed over this bit of the docs, which 
explains why the devs didn't have to implement handling ARGs before the 
FROM instructions in Dockerfiles... :) So I'll just have to point to 
different images in the yaml config.


Regarding the first reply and example, the ARG instruction has scope 
within Dockerfile. So, in your second example, the OS_name is available 
just for the FROM instruction, after which it loses scope. You have to 
redefine it to use it after the FROM. However the --build-arg overwrites 
all references:


ARG OS_name="centos"
ARG OS_version="7"

FROM $OS_name:$OS_version

ARG OS_version="foobar"

RUN echo $OS_version
RUN exit 1
=

$ docker build .   ##so with default values for ARG taken into 
consideration

Sending build context to Docker daemon  2.048kB
Step 1/6 : ARG OS_name="centos"
Step 2/6 : ARG OS_version="7"
Step 3/6 : FROM $OS_name:$OS_version
 ---> ff426288ea90
Step 4/6 : ARG OS_version="foobar"
 ---> Running in b5ac67ae7fc5
Removing intermediate container b5ac67ae7fc5
 ---> 753bc14d3a4b
Step 5/6 : RUN echo $OS_version
 ---> Running in 15c759544a4b
_*foobar*_
Removing intermediate container 15c759544a4b
 ---> 0e1d41c4ddda
Step 6/6 : RUN exit 1
 ---> Running in 9dfc7176d6b9
The command '/bin/sh -c exit 1' returned a non-zero code: 1

=

$ docker build -t tst --build-arg OS_version=6.9 .  ##the OS_version 
passed as cmd option is taken into account in all scopes

Sending build context to Docker daemon  2.048kB
Step 1/6 : ARG OS_name="centos"
Step 2/6 : ARG OS_version="7"
Step 3/6 : FROM $OS_name:$OS_version
6.9: Pulling from library/centos
993c50d47469: Pull complete
Digest: 
sha256:5cf988fbf143af398f879bd626ee677da3f8d229049b7210790928a02613ab26

Status: Downloaded newer image for _*centos:6.9*_
 ---> fca4c61d0fa7
Step 4/6 : ARG OS_version="foobar"
 ---> Running in d58a5321aa65
Removing intermediate container d58a5321aa65
 ---> d345fcd2fe46
Step 5/6 : RUN echo $OS_version
 ---> Running in a408a3cd16ee
_*6.9*_
Removing intermediate container a408a3cd16ee
 ---> 2d8e5ee7cc03
Step 6/6 : RUN exit 1
 ---> Running in 61b8011e52dd
The command '/bin/sh -c exit 1' returned a non-zero code: 1



On 07.02.2018 16:50, Ben Parees wrote:
btw, openshift will happily substitute your FROM statement w/ an image 
referenced by your BuildConfig, so if that's your goal, perhaps that 
is a way to accomplish it.


https://docs.openshift.org/latest/dev_guide/builds/build_strategies.html#docker-strategy-from

On Wed, Feb 7, 2018 at 9:48 AM, Ben Parees <bpar...@redhat.com 
<mailto:bpar...@redhat.com>> wrote:




On Wed, Feb 7, 2018 at 6:59 AM, Dan Pungă <dan.pu...@gmail.com
<mailto:dan.pu...@gmail.com>> wrote:

Hello all!

I've recently discovered and joined this mailing list; hope I'm
in the right place.
I'm new to the OShift ecosystem, currently trying to develop a
configuration to containerize some apps. I'm using minishift
local cluster on a Ubuntu 16.04 machine (details below).

I want to write a parametrized yaml template to configure the
build process for my layers (those with a dockerStrategy) by
using (or, better said, connecting to) the arguments defined in
my Dockerfiles. I have found that OShift doesn't support ARG
instructions prior to the FROM one when it reads the Dockerfile.


you sure even docker supports that?  It's not working for me:

this works (just using an arg generically and echoing it out):

$ cat Dockerfile
FROM centos
ARG OS_name="centos"

RUN echo $OS_name
RUN exit 1

$ docker build --build-arg OS_name=centos .
Sending build context to Docker daemon 2.048 kB
Step 1/4 : FROM centos
 ---> ff426288ea90
Step 2/4 : ARG OS_name="centos"
 ---> Using cache
 ---> 59f6494cb002
Step 3/4 : RUN echo $OS_name
 ---> Running in 092e2600490e
centos
 ---> 8a3f570a033c
Removing intermediate container 092e2600490e
Step 4/4 : RUN exit 1
 ---> Running in 543cefc9eab8
The command '/bin/sh -c exit 1' returned a non-zero code: 1

This does not (not even referencing the arg in my FROM, just
putting the ARG before FROM):
$ cat Dockerfile
ARG OS_name="centos"
FROM centos

RUN echo $OS_name
RUN exit 1

$ docker build --build-arg OS_name=centos .
Sending build context to Docker daemon 2.048 kB
Step 1/4 : ARG OS_name="centos"
Please provide a source image with `from` prior to commit



So i think this is a docker restriction, not an openshift one.


So, even if a docker build would run successfully with
something like:

ARG OS_name="

Parametrizing a BuildConfiguration with Docker ARGs

2018-02-07 Thread Dan Pungă

Hello all!

I've recently discovered and joined this mailing list; hope I'm in the 
right place.
I'm new to the OShift ecosystem, currently trying to develop a 
configuration to containerize some apps. I'm using minishift local 
cluster on a Ubuntu 16.04 machine (details below).


I want to write a parametrized yaml template to configure the build 
process for my layers (those with a dockerStrategy) by using (or, 
better said, connecting to) the arguments defined in my Dockerfiles. I 
have found that OShift doesn't support ARG instructions prior to the 
FROM one when it reads the Dockerfile.

So, even if a docker build would run successfully with something like:

ARG OS_name="centos"
ARG OS_version="6.8"

FROM ${OS_name}:${OS_version}
#rest of Dockerfile instructions...

if I try to define in my yaml config

strategy:
  dockerStrategy:
    buildArgs:
    - name: OS_name
  value: "7"

the build process does not work.

Has anyone else come across this issue and how did you get around it? 
What I'm trying to achieve is single configuration structure for 
multiple versions, so I wouldn't have to write separate Docker configs 
for different app versions. For example building a Java JRE layer on top 
of different OSs with one file.
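
For reference, the kind of template I'd like to end up with would have the 
base image itself as a parameter instead of a build ARG (just a sketch of 
the idea, not something I have working):

strategy:
  dockerStrategy:
    from:
      kind: DockerImage
      name: ${BASE_IMAGE}   # e.g. centos:7 or centos:6.9
...
parameters:
- name: BASE_IMAGE
  value: centos:7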


Thank you,
Dan

PS: The closest thread regarding this issue that I've found in the 
archive is 
https://lists.openshift.redhat.com/openshift-archives/users/2017-January/msg00104.html 



Running env details:

oc version
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.99.100:8443
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
=

docker@minishift:~$ docker version
Client:
 Version:  1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:    Wed Oct 26 23:26:11 2016
 OS/Arch:  linux/amd64

Server:
 Version:  1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:    Wed Oct 26 23:26:11 2016
 OS/Arch:  linux/amd64

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users