Re: Issue by installing OKD OpenShift 3.11

2019-06-09 Thread Samuel Martín Moro
Indeed, avoiding multiple network interfaces is preferable, if that's an option.

To stop/start OpenShift, you can simply stop/start (using systemctl)
origin-node and your container runtime (docker and/or crio).
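
For instance, a minimal sketch (assuming the stock origin-node and docker/crio
systemd units; drop whichever runtime you don't use):

# stop OpenShift on this node
systemctl stop origin-node
systemctl stop docker      # and/or: systemctl stop crio

# start it again; on masters, origin-node re-creates the control-plane static pods
systemctl start docker     # and/or: systemctl start crio
systemctl start origin-node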
Now, for any stateful workloads you may be hosting, you will want to come up
with your own process (usually: shut down frontends, make sure backends have
synced everything to disk, then shut them down, ...)

Regards.

On Fri, Jun 7, 2019 at 4:53 PM Jérôme Meyer 
wrote:

> Finally, I've configured my systems with only one network; it is the
> easiest way with KVM on a laptop.
>
> Then I've used your config example and the deployment completed
> successfully… 😉
>
>
>
> Is there a possibility to stop/start okd openshift?
>
>
>
> Thank you very much for your help and support.
>
>
>
> Best regards, J.
>
>
>
>
>
>
>
>
> *From:* users-boun...@lists.openshift.redhat.com <
> users-boun...@lists.openshift.redhat.com> *On Behalf Of *Samuel Martín
> Moro
> *Sent:* Friday, 31 May 2019 20:43
> *To:* bjoern.baertsc...@swisscom.com
> *Cc:* OpenShift Users List 
> *Subject:* Re: Issue by installing OKD OpenShift 3.11
>
>
>
> Yup, and that was my point in setting the node-ip kubelet argument, which
> more or less replaces that variable, now that node-config has moved to ConfigMaps.
>
>
>
> @jerome: I should have told you: once you'd fixed your inventory, setting
> your own node group configurations, you should re-deploy from
> scratch.
>
> Make sure to drop everything, especially certificates and configurations
> that might still mention your former IP addresses.
>
> OpenShift has an uninstall playbook. Once applied, make sure all
> containers are down, /var/lib/etcd should be empty, ... If you can
> re-deploy all nodes, that's even better (the uninstall tends to leave stuff,
> ...)
>
> Also: if using crio, make sure to check the default value for
> openshift_node_groups, as it includes a few additional edits you'd need
> ...
>
>
>
>
>
> Let us know if you have any questions
>
>
>
> Apologies for the late reply
>
>
>
> Regards.
>
>
>
>
>
>
>
> On Fri, May 31, 2019, 8:57 AM  wrote:
>
> Hi
>
>
>
> This is the first time I have written to this mailing list, so I'd like to
> say hello to everyone.
>
>
>
> I once had a similar issue when installing openshift on my notebook using
> VirtualBox. I had 2 network interfaces per host (one NATed with internet
> access and one internal only) and openshift took the "wrong" one. Then I had
> to set the host variable 'openshift_ip' to explicitly set my ip address to
> that of the "correct" device.
>
>
>
> I cannot find it in the 3.11 documentation, but it is in the 3.9 docs.
>
>
>
>
> https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html#configuring-host-variables
>
>
>
> regards, Björn
>
>
>
>
>
> *From:* users-boun...@lists.openshift.redhat.com <
> users-boun...@lists.openshift.redhat.com> *On Behalf Of *Jérôme Meyer
> *Sent:* Wednesday, 29 May 2019 17:19
> *To:* Samuel Martín Moro 
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* RE: Issue by installing OKD OpenShift 3.11
>
>
>
>
>
> Thanks for your help and advice.
>
> Unfortunately it doesn't work yet, but perhaps it is a network issue...
>
>
>
> So, I'll explain my network architecture in more detail...
>
>
>
> All my VMs are on the default network 192.168.122.0/24, with NAT
> forwarding to reach the Internet. My laptop is 192.168.122.1 and it is the
> default gateway for all systems too (the only default gateway). This
> network uses DHCP.
>
>
>
> Then, I've defined a separate internal subnet for the container
> network: 192.168.100.0/24, an isolated network with internal routing only.
> This network uses static IP addresses, and the addresses are defined in DNS.
>
>
>
> Here're details:
>
>
>
> node1
> ens10: 192.168.100.101/24
>
> eth1: 192.168.122.193/24
>
> docker0: 172.17.0.1/16
>
>
>
> node2
>
> ens10: 192.168.100.102/24
>
> eth1: 192.168.122.240/24
>
> docker0: 172.17.0.1/16
>
>
>
> master
>
> ens10: 192.168.100.100/24
>
> eth1: 192.168.122.54/24
>
> docker0: 172.17.0.1/16
>
>
>
> services
>
> ens10: 192.168.100.103/24
>
> eth1: 192.168.122.234/24
> docker0: 172.17.0.1/16
>
>
>
> I connect and start the Ansible job from my workstation VM,
> 192.168.100.50.
>
>
>
> Now, if I've understood correctly, the openshift service will 

Re: Issue by installing OKD OpenShift 3.11

2019-05-31 Thread Samuel Martín Moro
Yup, and that was my point in setting the node-ip kubelet argument, which
more or less replaces that variable, now that node-config has moved to ConfigMaps.
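
(For what it's worth: the rendered node configs end up in ConfigMaps, so a
quick sanity check could be something like "oc get configmaps -n
openshift-node"; the namespace here is an assumption based on 3.11 defaults.)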

@jerome: I should have told you: once you'd fixed your inventory, setting
your own node group configurations, you should re-deploy from
scratch.
Make sure to drop everything, especially certificates and configurations
that might still mention your former IP addresses.
OpenShift has an uninstall playbook. Once applied, make sure all containers
are down, /var/lib/etcd should be empty, ... If you can re-deploy all nodes,
that's even better (the uninstall tends to leave stuff, ...)
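
As a sketch, assuming a checkout of openshift-ansible with your inventory at
inventory/hosts, the uninstall run would look like:

ansible-playbook -i inventory/hosts playbooks/adhoc/uninstall.yml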
Also: if using crio, make sure to check the default value for
openshift_node_groups, as it includes a few additional edits you'd need ...
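
One way to read those defaults straight from the openshift-ansible checkout
(the grep window is arbitrary):

grep -A 30 'openshift_node_groups' roles/openshift_facts/defaults/main.yml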


Let us know if you have any questions

Apologies for the late reply

Regards.



On Fri, May 31, 2019, 8:57 AM  wrote:

> Hi
>
>
>
> This is the first time I have written to this mailing list, so I'd like to
> say hello to everyone.
>
>
>
> I once had a similar issue when installing openshift on my notebook using
> VirtualBox. I had 2 network interfaces per host (one NATed with internet
> access and one internal only) and openshift took the "wrong" one. Then I had
> to set the host variable 'openshift_ip' to explicitly set my ip address to
> that of the "correct" device.
>
>
>
> I cannot find it in the 3.11 documentation, but it is in the 3.9 docs.
>
>
>
>
> https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html#configuring-host-variables
>
>
>
> regards, Björn
>
>
>
>
>
> *From:* users-boun...@lists.openshift.redhat.com <
> users-boun...@lists.openshift.redhat.com> *On Behalf Of *Jérôme Meyer
> *Sent:* Wednesday, 29 May 2019 17:19
> *To:* Samuel Martín Moro 
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* RE: Issue by installing OKD OpenShift 3.11
>
>
>
>
>
> Thanks for your help and advice.
>
> Unfortunately it doesn't work yet, but perhaps it is a network issue...
>
>
>
> So, I'll explain my network architecture in more detail...
>
>
>
> All my VMs are on the default network 192.168.122.0/24, with NAT
> forwarding to reach the Internet. My laptop is 192.168.122.1 and it is the
> default gateway for all systems too (the only default gateway). This
> network uses DHCP.
>
>
>
> Then, I've defined a separate internal subnet for the container
> network: 192.168.100.0/24, an isolated network with internal routing only.
> This network uses static IP addresses, and the addresses are defined in DNS.
>
>
>
> Here're details:
>
>
>
> node1
> ens10: 192.168.100.101/24
>
> eth1: 192.168.122.193/24
>
> docker0: 172.17.0.1/16
>
>
>
> node2
>
> ens10: 192.168.100.102/24
>
> eth1: 192.168.122.240/24
>
> docker0: 172.17.0.1/16
>
>
>
> master
>
> ens10: 192.168.100.100/24
>
> eth1: 192.168.122.54/24
>
> docker0: 172.17.0.1/16
>
>
>
> services
>
> ens10: 192.168.100.103/24
>
> eth1: 192.168.122.234/24
> docker0: 172.17.0.1/16
>
>
>
> I connect and start the Ansible job from my workstation VM,
> 192.168.100.50.
>
>
>
> Now, if I've understood correctly, the openshift service will bind the http
> port on the same subnet as the default gateway? In my case, it will be the
> subnet 192.168.122... ? Right?
>
> Could that be the problem?
>
>
>
> I've defined all the IP addresses for my systems in OpenShift on the
> 192.168.100 subnet. Is that correct?
>
> Is it possible to use 2 networks as in my case?
>
>
>
> It's not yet very clear how the network should be configured for openshift
> hosts. I thought about defining a network for external connections
> (internet) and a network for internal connections specific to openshift, but
> I'm not sure that's OK...
>
>
>
> Regards, J
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
>
> *From:* Samuel Martín Moro [faus...@gmail.com]
> *Sent:* Friday, 24 May 2019 21:45
> *To:* Jérôme Meyer
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* Re: Issue by installing OKD OpenShift 3.11
>
> Oh, that makes perfect sense.
>
> I would assume that your default gateway points to your workstation, in
> 192.168.100.0/24?
>
>
>
> -- although lately, I've seen some inconsistencies: usually, OpenShift
> services would bind on the address assigned to whichever interface routes
> to your default gateway.
>
>
>
> Assuming that switching your default gateway is not an option, then you
>

Re: Issue by installing OKD OpenShift 3.11

2019-05-30 Thread Bjoern.Baertschi1
Hi

This is the first time I have written to this mailing list, so I'd like to say
hello to everyone.

I once had a similar issue when installing openshift on my notebook using
VirtualBox. I had 2 network interfaces per host (one NATed with internet access
and one internal only) and openshift took the "wrong" one. Then I had to set the
host variable 'openshift_ip' to explicitly set my ip address to that of the
"correct" device.

I cannot find it in the 3.11 documentation, but it is in the 3.9 docs.

https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html#configuring-host-variables
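
As an illustration only (hostname, address and node group borrowed from this
thread), the host variable goes directly on the inventory line:

[nodes]
master.lab.oshift.edu openshift_ip=192.168.100.100 openshift_node_group_name="node-config-master"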

regards, Björn


From: users-boun...@lists.openshift.redhat.com 
 On Behalf Of Jérôme Meyer
Sent: Wednesday, 29 May 2019 17:19
To: Samuel Martín Moro 
Cc: users@lists.openshift.redhat.com
Subject: RE: Issue by installing OKD OpenShift 3.11


Thanks for your help and advice.
Unfortunately it doesn't work yet, but perhaps it is a network issue...

So, I'll explain my network architecture in more detail...

All my VMs are on the default network 192.168.122.0/24, with NAT forwarding
to reach the Internet. My laptop is 192.168.122.1 and it is the default
gateway for all systems too (the only default gateway). This network uses
DHCP.

Then, I've defined a separate internal subnet for the container network:
192.168.100.0/24, an isolated network with internal routing only. This network
uses static IP addresses, and the addresses are defined in DNS.

Here're details:

node1
ens10: 192.168.100.101/24
eth1: 192.168.122.193/24
docker0: 172.17.0.1/16

node2
ens10: 192.168.100.102/24
eth1: 192.168.122.240/24
docker0: 172.17.0.1/16

master
ens10: 192.168.100.100/24
eth1: 192.168.122.54/24
docker0: 172.17.0.1/16

services
ens10: 192.168.100.103/24
eth1: 192.168.122.234/24
docker0: 172.17.0.1/16

I connect and start the Ansible job from my workstation VM,
192.168.100.50.

Now, if I've understood correctly, the openshift service will bind the http
port on the same subnet as the default gateway? In my case, it will be the
subnet 192.168.122... ? Right?
Could that be the problem?

I've defined all the IP addresses for my systems in OpenShift on the
192.168.100 subnet. Is that correct?
Is it possible to use 2 networks as in my case?

It's not yet very clear how the network should be configured for openshift
hosts. I thought about defining a network for external connections (internet)
and a network for internal connections specific to openshift, but I'm not sure
that's OK...

Regards, J








From: Samuel Martín Moro [faus...@gmail.com]
Sent: Friday, 24 May 2019 21:45
To: Jérôme Meyer
Cc: users@lists.openshift.redhat.com
Subject: Re: Issue by installing OKD OpenShift 3.11
Oh, that makes perfect sense.
I would assume that your default gateway points to your workstation, in
192.168.100.0/24?

-- although lately, I've seen some inconsistencies: usually, OpenShift services 
would bind on the address assigned to whichever interface routes to your 
default gateway.

Assuming that switching your default gateway is not an option, you may
force the OpenShift bind address from your openshift_node_groups definition.
Dealing with that variable in INI format is quite painful and usually leads to
syntax errors, ... First, we'll create a "group_vars" sub-folder alongside our
inventory.


mkdir -p /group_vars



In that folder, we would create a file OSEv3.yml, with the following content:


openshift_node_groups:
- name: node-config-master-infra
  labels:
  - 'node-role.kubernetes.io/master=true'
  - 'node-role.kubernetes.io/infra=true'
  edits:
  - key: kubeletArguments.node-ip
    value: [ 192.168.122.54 ]
- name: node-config-node1
  labels:
  - 'node-role.kubernetes.io/compute=true'
  edits:
  - key: kubeletArguments.node-ip
    value: [ 192.168.122.193 ]
- name: node-config-node2
  labels:
  - 'node-role.kubernetes.io/compute=true'
  edits:
  - key: kubeletArguments.node-ip
    value: [ 192.168.122.240 ]



see ./roles/openshift_facts/defaults/main.yml for the default 
openshift_node_groups definition, if you're curious.

Also make sure that each node in your cluster loads its own
configuration:



[masters]
master.olab.oshift.edu openshift_node_group_name=node-config-master-infra

[etcd:children]
masters

[compute]
node1.olab.oshift.edu openshift_node_group_name=node-config-node1
node2.olab.oshift.edu openshift_node_group_name=node-config-node2

[nodes:children]
masters
compute

Re: Issue by installing OKD OpenShift 3.11

2019-05-24 Thread Samuel Martín Moro
> {"log":"I0524 14:19:13.745323   1 reflector.go:133] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747735832Z"}
> {"log":"I0524 14:19:13.745340   1 reflector.go:171] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747740084Z"}
> {"log":"I0524 14:19:13.745907   1 reflector.go:133] Starting reflector *v1beta1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747744229Z"}
> {"log":"I0524 14:19:13.745925   1 reflector.go:171] Listing and watching *v1beta1.ReplicaSet from k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747748717Z"}
> {"log":"I0524 14:19:13.746647   1 controllermanager.go:128] Version: v1.11.0+d4cacc0\n","stream":"stderr","time":"2019-05-24T12:19:13.747753221Z"}
> {"log":"I0524 14:19:13.746697   1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-controller-manager...\n","stream":"stderr","time":"2019-05-24T12:19:13.747757701Z"}
> {"log":"I0524 14:19:13.746889   1 standalone_apiserver.go:101] Started health checks at 0.0.0.0:8444\n","stream":"stderr","time":"2019-05-24T12:19:13.747761834Z"}
> {"log":"F0524 14:19:13.747339   1 standalone_apiserver.go:117] listen tcp4 0.0.0.0:8444: bind: address already in use\n","stream":"stderr","time":"2019-05-24T12:19:13.747765655Z"}
>
> Best regards, J
>
>
>
>
>
> *From:* Samuel Martín Moro 
> *Sent:* Thursday, 23 May 2019 23:53
> *To:* Jérôme Meyer 
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* Re: Issue by installing OKD OpenShift 3.11
>
>
>
> Hi,
>
>
>
>
>
> As a general rule, you may want to check for the corresponding container
> health and logs.
>
>
>
> You won't find any apache or nginx listening. The process serving on
> :8443 is openshift; it should be started in a container.
>
> Note that the master-api container, in charge of that service, closely
> relies on another container: etcd. Which is what ansible is waiting for, in
> your logs.
>
>
>
> On the master node, use "docker ps" (worst case scenario, "docker ps -a").
>
> Locate your etcd and master-api containers ID (first column).
>
> Then use "docker logs [-f] <container-id>", and search for errors.
>
>
>
> You may find file copies of these logs in /var/log/containers (and
> /var/log/pods).
>
>
>
> Let us know how that goes.
>
>
>
> And try to avoid mailing your htpasswd entries ;)
>
>
>
>
>
> Regards.
>
>
>
>
>
>
>
> On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer 
> wrote:
>
> Dear Team,
>
> I've encountered some issues installing openshift (okd 3.11) on 3 VMs
> (1 master and 2 nodes).
> I followed the recommendations and procedure as described in the docs.
> Then I launched the ansible prerequisite playbook without issue; all was
> fine. But unfortunately the deploy_cluster playbook didn't finish.
> Some errors appear when it starts the pods.
>
> 2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (2 retries left).
> 2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (1 retries left).
> 2019-05-17 16:59:02,998 p=6592 u=root |  failed: [master.lab.oshift.edu]
> (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg":
> {"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n
> kube-system", "results": [{}], "returncode": 1, "stderr": "The connection
> to the server master:8443 was refused - did you specify the right host or
> port?\n", "stdout": ""}}
> 2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appea

RE: Issue by installing OKD OpenShift 3.11

2019-05-24 Thread Jérôme Meyer
Hi Andrej,

Thanks for this useful information.
I've changed my config and restarted the playbook. Unfortunately, the problem
still occurs, but I think the issue is elsewhere, as indicated in the logs.

Best regards, J

-Original Message-
From: Andrej Golis  
Sent: Thursday, 23 May 2019 18:15
To: Jérôme Meyer 
Cc: users@lists.openshift.redhat.com
Subject: Re: Issue by installing OKD OpenShift 3.11

Hi,

if you have master and etcd colocated on the same node, you should use 
'node-config-master-infra' node group instead of 'node-config-master'.

Check the last 2 paragraphs of [1].

Andrej

[1] 
https://docs.openshift.com/container-platform/3.11/install/configuring_inventory_file.html#configuring-dedicated-infrastructure-nodes

On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer  wrote:
>
> Dear Team,
>
> I've encountered some issues installing openshift (okd 3.11) on 3 VMs (1
> master and 2 nodes).
> I followed the recommendations and procedure as described in the docs.
> Then I launched the ansible prerequisite playbook without issue; all was fine.
> But unfortunately the deploy_cluster playbook didn't finish.
> Some errors appear when it starts the pods.
>
> 2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (2 retries left).
> 2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (1 retries left).
> 2019-05-17 16:59:02,998 p=6592 u=root |  failed: 
> [master.lab.oshift.edu] (item=etcd) => {"attempts": 60, "changed": 
> false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod 
> master-etcd-master.lab.oshift.edu -o json -n kube-system", "results": 
> [{}], "returncode": 1, "stderr": "The connection to the server 
> master:8443 was refused - did you specify the right host or port?\n", 
> "stdout": ""}}
> 2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (60 retries left).
> 2019-05-17 16:59:08,980 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (59 retries left).
>
> Regarding this issue, I've checked the master server and I didn't see
> HTTP port 8443 open, nor any http/nginx or other service running;
> strange.
>
>
> A DNS server was installed on a VM called services and the dig command was OK.
>
>
>
> Please let me know if I failed to install something, or if the inventory
> config is wrong. What should I do to troubleshoot this problem?
>
> Thanks and best regards, J.
>
>
>
>
>
> Here's the inventory file:
>
>
>
> # cat inventory/hosts
> #
> #
> # HOSTS configuration for our labs
> #
> # 2019-05-17
> #
> #
>
> [workstation]
> workstation.lab.oshift.edu
>
> [masters]
> master.lab.oshift.edu
>
> [etcd]
> master.lab.oshift.edu
>
> [nodes]
> master.lab.oshift.edu openshift_node_group_name="node-config-master"
> node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
> node2.lab.oshift.edu openshift_node_group_name="node-config-compute"
>
> [nfs]
> services.lab.oshift.edu
>
> # Create an OSEv3 group that contains the masters and nodes groups
> [OSEv3:children]
> masters
> nodes
> etcd
> nfs
>
> [OSEv3:vars]
> ###
> # Common/ Required configuration variables follow #
> ###
> # How ansible access hosts
> ansible_user=root
> ansible_become=true
>
> openshift_deployment_type=origin
>
> openshift_release="3.11"
>
> openshift_master_default_subdomain=apps.lab.oshift.edu
>
> ###
> # Additional configuration variables follow #
> ###
>
> # DEBUG
> debug_level=4
>
> # DISABLE SOME CHECKS
> openshift_disable_check=disk_availability,memory_availability,docker_storage
>
> # Enable etcd debug logging, defaults to false
> etcd_debug=true
> # Set etcd log levels by package
> etcd_log_package_levels="etcdserver=WARNING,security=INFO"
>
> # htpasswd auth
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 
> 'l

RE: Issue by installing OKD OpenShift 3.11

2019-05-24 Thread Jérôme Meyer
...plugins.go:84] Registered admission plugin "PodSecurityPolicy"\n","stream":"stderr","time":"2019-05-24T12:18:50.016622959Z"}
{"log":"I0524 14:18:50.016622   1 plugins.go:84] Registered admission plugin "Priority"\n","stream":"stderr","time":"2019-05-24T12:18:50.016659601Z"}
{"log":"I0524 14:18:50.016662   1 plugins.go:84] Registered admission plugin "SecurityContextDeny"\n","stream":"stderr","time":"2019-05-24T12:18:50.01670916Z"}
{"log":"I0524 14:18:50.016713   1 plugins.go:84] Registered admission plugin "ServiceAccount"\n","stream":"stderr","time":"2019-05-24T12:18:50.01678609Z"}
{"log":"I0524 14:18:50.016753   1 plugins.go:84] Registered admission plugin "DefaultStorageClass"\n","stream":"stderr","time":"2019-05-24T12:18:50.016856209Z"}
{"log":"I0524 14:18:50.016784   1 plugins.go:84] Registered admission plugin "PersistentVolumeClaimResize"\n","stream":"stderr","time":"2019-05-24T12:18:50.01686304Z"}
{"log":"I0524 14:18:50.016801   1 plugins.go:84] Registered admission plugin "StorageObjectInUseProtection"\n","stream":"stderr","time":"2019-05-24T12:18:50.016865753Z"}
{"log":"F0524 14:19:20.021832   1 start_api.go:68] dial tcp 192.168.100.100:2379: connect: connection refused\n","stream":"stderr","time":"2019-05-24T12:19:20.02217046Z"}

Container log

[root@master controllers]# tail -f 7.log
{"log":"I0524 14:19:13.744728   1 reflector.go:133] Starting reflector 
*v1.PersistentVolumeClaim (0s) from 
k8s.io/client-go/informers/factory.go:130\n<http://k8s.io/client-go/informers/factory.go:130/n>","stream":"stderr","time":"2019-05-24T12:19:13.747727009Z"}
{"log":"I0524 14:19:13.744754   1 reflector.go:171] Listing and watching 
*v1.PersistentVolumeClaim from 
k8s.io/client-go/informers/factory.go:130\n<http://k8s.io/client-go/informers/factory.go:130/n>","stream":"stderr","time":"2019-05-24T12:19:13.74773138Z"}
{"log":"I0524 14:19:13.745323   1 reflector.go:133] Starting reflector 
*v1.ReplicationController (0s) from 
k8s.io/client-go/informers/factory.go:130\n<http://k8s.io/client-go/informers/factory.go:130/n>","stream":"stderr","time":"2019-05-24T12:19:13.747735832Z"}
{"log":"I0524 14:19:13.745340   1 reflector.go:171] Listing and watching 
*v1.ReplicationController from 
k8s.io/client-go/informers/factory.go:130\n<http://k8s.io/client-go/informers/factory.go:130/n>","stream":"stderr","time":"2019-05-24T12:19:13.747740084Z"}
{"log":"I0524 14:19:13.745907   1 reflector.go:133] Starting reflector 
*v1beta1.ReplicaSet (0s) from 
k8s.io/client-go/informers/factory.go:130\n<http://k8s.io/client-go/informers/factory.go:130/n>","stream":"stderr","time":"2019-05-24T12:19:13.747744229Z"}
{"log":"I0524 14:19:13.745925   1 reflector.go:171] Listing and watching 
*v1beta1.ReplicaSet from 
k8s.io/client-go/informers/factory.go:130\n<http://k8s.io/client-go/informers/factory.go:130/n>","stream":"stderr","time":"2019-05-24T12:19:13.747748717Z"}
{"log":"I0524 14:19:13.746647   1 controllermanager.go:128] Version: 
v1.11.0+d4cacc0\n","stream":"stderr","time":"2019-05-24T12:19:13.747753221Z"}
{"log":"I0524 14:19:13.746697   1 leaderelection.go:185] attempting to 
acquire leader lease  
kube-system/kube-controller-manager...\n","stream":"stderr","time":"2019-05-24T12:19:13.747757701Z"}
{"log":"I0524 14:19:13.746889   1 standalone_apiserver.go:101] Started 
health checks at 
0.0.0.0:8444<http://0.0.0.0:8444>\n","stream":"stderr","time":"2019-05-24T12:19:13.747761834Z"}
{"log":"F0524 14:19:13.747339   1 standalone_apiserver.go:117] listen tcp4 
0.0.0.0:8444<http://0.0.0.0:8444>: bind: address already in 
use\n","stream":"stderr","time":"2019-05-24T12:19:13.747765655Z"}
Best regards, J


From: Samuel Martín Moro 
Sent: Thursday, 23 

Re: Issue by installing OKD OpenShift 3.11

2019-05-23 Thread Samuel Martín Moro
Hi,


As a general rule, you may want to check for the corresponding container
health and logs.

You won't find any apache or nginx listening. The process serving on :8443
is openshift; it should be started in a container.
Note that the master-api container, in charge of that service, closely relies
on another container: etcd. Which is what ansible is waiting for, in your
logs.

On the master node, use "docker ps" (worst case scenario, "docker ps -a").
Locate your etcd and master-api containers ID (first column).
Then use "docker logs [-f] <container-id>", and search for errors.

You may find file copies of these logs in /var/log/containers (and
/var/log/pods).
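
For example, a rough pass (the container-id is a placeholder; the k8s_ name
prefix is the usual docker-shim convention, assumed here):

docker ps -a | grep -E 'k8s_etcd|k8s_api'
docker logs --tail 100 <container-id> 2>&1 | grep -iE 'error|fatal'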

Let us know how that goes.

And try to avoid mailing your htpasswd entries ;)


Regards.



On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer 
wrote:

> Dear Team,
>
> I've encountered some issues installing openshift (okd 3.11) on 3 VMs
> (1 master and 2 nodes).
> I followed the recommendations and procedure as described in the docs.
> Then I launched the ansible prerequisite playbook without issue; all was
> fine. But unfortunately the deploy_cluster playbook didn't finish.
> Some errors appear when it starts the pods.
>
> 2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (2 retries left).
> 2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (1 retries left).
> 2019-05-17 16:59:02,998 p=6592 u=root |  failed: [master.lab.oshift.edu]
> (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg":
> {"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n
> kube-system", "results": [{}], "returncode": 1, "stderr": "The connection
> to the server master:8443 was refused - did you specify the right host or
> port?\n", "stdout": ""}}
> 2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (60 retries left).
> 2019-05-17 16:59:08,980 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (59 retries left).
>
> Regarding this issue, I've checked the master server and I didn't see
> HTTP port 8443 open, nor any http/nginx or other service running;
> strange.
>
>
> A DNS server was installed on a VM called services and the dig command was
> OK.
>
>
>
> Please let me know if I failed to install something, or if the inventory
> config is wrong. What should I do to troubleshoot this problem?
>
> Thanks and best regards, J.
>
>
>
>
>
> *Here's the inventory file:*
>
>
>
> # cat inventory/hosts
> #
> #
> # HOSTS configuration for our labs
> #
> # 2019-05-17
> #
> #
>
> [workstation]
> workstation.lab.oshift.edu
>
> [masters]
> master.lab.oshift.edu
>
> [etcd]
> master.lab.oshift.edu
>
> [nodes]
> master.lab.oshift.edu openshift_node_group_name="node-config-master"
> node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
> node2.lab.oshift.edu openshift_node_group_name="node-config-compute"
>
> [nfs]
> services.lab.oshift.edu
>
> # Create an OSEv3 group that contains the masters and nodes groups
> [OSEv3:children]
> masters
> nodes
> etcd
> nfs
>
> [OSEv3:vars]
>
> ###
> # Common/ Required configuration variables follow #
> ###
> # How ansible access hosts
> ansible_user=root
> ansible_become=true
>
> openshift_deployment_type=origin
>
> openshift_release="3.11"
>
> openshift_master_default_subdomain=apps.lab.oshift.edu
>
>
> ###
> # Additional configuration variables follow #
> ###
>
> # DEBUG
> debug_level=4
>
> # DISABLE SOME CHECKS
>
> openshift_disable_check=disk_availability,memory_availability,docker_storage
>
> # Enable etcd debug logging, defaults to false
> etcd_debug=true
> # Set etcd log levels by package
> etcd_log_package_levels="etcdserver=WARNING,security=INFO"
>
> # htpasswd auth
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
> # Defining htpasswd users
> openshift_master_htpasswd_users={'admin':
> '$apr1$Ky/ZY39n$Z8/t3xJsnxGANzypVTtmD0', 'developer':
> '$apr1$MdVAOTmy$8nB.ANU4OeciLjDeU68w/1'}
>
> # Option B - External NFS Host
> openshift_hosted_registry_storage_kind=nfs
> openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
> openshift_hosted_registry_storage_nfs_directory=/openshift_storage
> openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
> openshift_hosted_registry_storage_volume_name=registry
> openshift_hosted_registry_sto

Re: Issue by installing OKD OpenShift 3.11

2019-05-23 Thread Andrej Golis
Hi,

if you have master and etcd colocated on the same node, you should use
'node-config-master-infra' node group instead of 'node-config-master'.

Check the last 2 paragraphs of [1].

Andrej

[1] 
https://docs.openshift.com/container-platform/3.11/install/configuring_inventory_file.html#configuring-dedicated-infrastructure-nodes
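
Applied to the inventory quoted below, that would mean changing just the
master's line in [nodes], e.g.:

master.lab.oshift.edu openshift_node_group_name="node-config-master-infra"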

On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer  wrote:
>
> Dear Team,
>
> I've encountered some issues installing openshift (okd 3.11) on 3 VMs (1
> master and 2 nodes).
> I followed the recommendations and procedure as described in the docs.
> Then I launched the ansible prerequisite playbook without issue; all was fine.
> But unfortunately the deploy_cluster playbook didn't finish.
> Some errors appear when it starts the pods.
>
> 2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (2 retries left).
> 2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (1 retries left).
> 2019-05-17 16:59:02,998 p=6592 u=root |  failed: [master.lab.oshift.edu] 
> (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": 
> {"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n 
> kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to 
> the server master:8443 was refused - did you specify the right host or 
> port?\n", "stdout": ""}}
> 2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (60 retries left).
> 2019-05-17 16:59:08,980 p=6592 u=root |  FAILED - RETRYING: Wait for control 
> plane pods to appear (59 retries left).
>
> Regarding this issue, I've checked the master server and I didn't see
> HTTP port 8443 open, nor any http/nginx or other service running;
> strange.
>
>
> A DNS server was installed on a VM called services and the dig command was OK.
>
>
>
> Please let me know if I failed to install something, or if the inventory
> config is wrong. What should I do to troubleshoot this problem?
>
> Thanks and best regards, J.
>
>
>
>
>
> Here's the inventory file:
>
>
>
> # cat inventory/hosts
> #
> #
> # HOSTS configuration for our labs
> #
> # 2019-05-17
> #
> #
>
> [workstation]
> workstation.lab.oshift.edu
>
> [masters]
> master.lab.oshift.edu
>
> [etcd]
> master.lab.oshift.edu
>
> [nodes]
> master.lab.oshift.edu openshift_node_group_name="node-config-master"
> node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
> node2.lab.oshift.edu openshift_node_group_name="node-config-compute"
>
> [nfs]
> services.lab.oshift.edu
>
> # Create an OSEv3 group that contains the masters and nodes groups
> [OSEv3:children]
> masters
> nodes
> etcd
> nfs
>
> [OSEv3:vars]
> ###
> # Common/ Required configuration variables follow #
> ###
> # How ansible access hosts
> ansible_user=root
> ansible_become=true
>
> openshift_deployment_type=origin
>
> openshift_release="3.11"
>
> openshift_master_default_subdomain=apps.lab.oshift.edu
>
> ###
> # Additional configuration variables follow #
> ###
>
> # DEBUG
> debug_level=4
>
> # DISABLE SOME CHECKS
> openshift_disable_check=disk_availability,memory_availability,docker_storage
>
> # Enable etcd debug logging, defaults to false
> etcd_debug=true
> # Set etcd log levels by package
> etcd_log_package_levels="etcdserver=WARNING,security=INFO"
>
> # htpasswd auth
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
> # Defining htpasswd users
> openshift_master_htpasswd_users={'admin': 
> '$apr1$Ky/ZY39n$Z8/t3xJsnxGANzypVTtmD0', 'developer': 
> '$apr1$MdVAOTmy$8nB.ANU4OeciLjDeU68w/1'}
>
> # Option B - External NFS Host
> openshift_hosted_registry_storage_kind=nfs
> openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
> openshift_hosted_registry_storage_nfs_directory=/openshift_storage
> openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
> openshift_hosted_registry_storage_volume_name=registry
> openshift_hosted_registry_storage_volume_size=10Gi
>
> # ENABLE FIREWALLD
> os_firewall_use_firewalld=true
> [root@workstation openshift-ansible]#
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Issue by installing OKD OpenShift 3.11

2019-05-23 Thread Jérôme Meyer
Dear Team,

I've encountered some issues installing openshift (okd 3.11) on 3 VMs (1
master and 2 nodes).
I followed the recommendations and procedure as described in the docs.
Then I launched the ansible prerequisite playbook without issue; all was fine.
But unfortunately the deploy_cluster playbook didn't finish.
Some errors appear when it starts the pods.

2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for control 
plane pods to appear (2 retries left).
2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for control 
plane pods to appear (1 retries left).
2019-05-17 16:59:02,998 p=6592 u=root |  failed: [master.lab.oshift.edu] 
(item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": 
{"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n 
kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to 
the server master:8443 was refused - did you specify the right host or 
port?\n", "stdout": ""}}
2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for control 
plane pods to appear (60 retries left).
2019-05-17 16:59:08,980 p=6592 u=root |  FAILED - RETRYING: Wait for control 
plane pods to appear (59 retries left).
Regarding this issue, I've checked the master server and I didn't see HTTP
port 8443 open, nor any http/nginx or other service running; strange.
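(A quick way to verify that, as a sketch: "ss -tlnp | grep 8443", or
"netstat -tlnp" on hosts without iproute2.)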

A DNS server was installed on a VM called services and the dig command was OK.

Please let me know if I failed to install something, or if the inventory
config is wrong. What should I do to troubleshoot this problem?
Thanks and best regards, J.


Here's the inventory file:


# cat inventory/hosts
#
#
# HOSTS configuration for our labs
#
# 2019-05-17
#
#

[workstation]
workstation.lab.oshift.edu

[masters]
master.lab.oshift.edu

[etcd]
master.lab.oshift.edu

[nodes]
master.lab.oshift.edu openshift_node_group_name="node-config-master"
node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
node2.lab.oshift.edu openshift_node_group_name="node-config-compute"

[nfs]
services.lab.oshift.edu

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
nfs

[OSEv3:vars]
###
# Common/ Required configuration variables follow #
###
# How ansible access hosts
ansible_user=root
ansible_become=true

openshift_deployment_type=origin

openshift_release="3.11"

openshift_master_default_subdomain=apps.lab.oshift.edu

###
# Additional configuration variables follow   #
###

# DEBUG
debug_level=4

# DISABLE SOME CHECKS
openshift_disable_check=disk_availability,memory_availability,docker_storage

# Enable etcd debug logging, defaults to false
etcd_debug=true
# Set etcd log levels by package
etcd_log_package_levels="etcdserver=WARNING,security=INFO"

# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
openshift_master_htpasswd_users={'admin': 
'$apr1$Ky/ZY39n$Z8/t3xJsnxGANzypVTtmD0', 'developer': 
'$apr1$MdVAOTmy$8nB.ANU4OeciLjDeU68w/1'}

# Option B - External NFS Host
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/openshift_storage
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi

# ENABLE FIREWALLD
os_firewall_use_firewalld=true
[root@workstation openshift-ansible]#



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users