Indeed, avoiding multiple network interfaces is preferable, if that's an option.

To stop/start OpenShift, you can just stop/start (using systemctl)
origin-node and your container runtime (docker and/or crio).
Now, for any stateful workload you may be hosting, you will want to come
up with your own process (usually: shut down the frontends, make sure the
backends have synced everything to disk, then shut them down, ...)
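
For instance, on each node, a minimal sketch (assuming docker as your
container runtime; use "crio" instead if that's what you run):

# stop the kubelet first, then the container runtime
systemctl stop origin-node
systemctl stop docker
# bring everything back up in the reverse order
systemctl start docker
systemctl start origin-node

On masters, the control plane (etcd, api, controllers) runs as static
pods, so it should come back on its own once origin-node is up again.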

Regards.

On Fri, Jun 7, 2019 at 4:53 PM Jérôme Meyer <jerome.me...@lcsystems.ch>
wrote:

> Finally, I've configured my systems with only one network; it is the
> easiest way with KVM on a laptop.
>
> Then I used your config example and the deployment completed
> successfully… 😉
>
>
>
> Is there a way to stop/start OKD OpenShift?
>
>
>
> Thank you very much for your help and support.
>
>
>
> Best regards, J.
>
>
>
>
>
>
>
>
> *From:* users-boun...@lists.openshift.redhat.com <
> users-boun...@lists.openshift.redhat.com> *On Behalf Of *Samuel Martín
> Moro
> *Sent:* Friday, 31 May 2019 20:43
> *To:* bjoern.baertsc...@swisscom.com
> *Cc:* OpenShift Users List <users@lists.openshift.redhat.com>
> *Subject:* Re: Issue by installing OKD OpenShift 3.11
>
>
>
> Yup, and that was my point in setting the node-ip kubelet argument, which
> more or less replaces that variable now that node configs have moved to
> ConfigMaps.
>
>
>
> @jerome: I should have told you: once you've fixed your inventory,
> setting your own node group configurations, you should re-deploy from
> scratch.
>
> Make sure to drop everything, especially certificates and configurations
> that might still mention your former IP addresses.
>
> OpenShift has an uninstall playbook. Once it has run, make sure all
> containers are down, /var/lib/etcd should be empty, ... If you can
> re-deploy all nodes, that's even better (the uninstall tends to leave
> stuff behind, ...)
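>
> For instance, a rough sketch (assuming a standard openshift-ansible
> checkout and your inventory at inventory/hosts):
>
> # wipe the existing deployment
> ansible-playbook -i inventory/hosts playbooks/adhoc/uninstall.yml
> # then, on every node, double-check nothing was left behind
> docker ps -a          # no leftover openshift containers
> ls /var/lib/etcd      # should be empty on former etcd members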
>
> Also: if using crio, make sure to check the default value for
> openshift_node_groups, as it includes a few additional edits you'd need
> ...
>
>
>
>
>
> Let us know if you have any questions
>
>
>
> Apologies for the late reply
>
>
>
> Regards.
>
>
>
>
>
>
>
> On Fri, May 31, 2019, 8:57 AM <bjoern.baertsc...@swisscom.com> wrote:
>
> Hi
>
>
>
> This is the first time I'm writing to this mailing list, so I'd like to
> say hello to everyone.
>
>
>
> I once had a similar issue when installing OpenShift on my notebook using
> VirtualBox: I had 2 network interfaces per host (one NATed with internet
> access and an internal-only one) and OpenShift took the "wrong" one. I
> then had to set the host variable 'openshift_ip' to explicitly set my IP
> address to that of the "correct" device.
>
>
>
> I cannot find it in the 3.11 documentation, but it is in the 3.9 docs.
>
>
>
>
> https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html#configuring-host-variables
>
>
>
> regards, Björn
>
>
>
>
>
> *From:* users-boun...@lists.openshift.redhat.com <
> users-boun...@lists.openshift.redhat.com> *On Behalf Of *Jérôme Meyer
> *Sent:* Wednesday, 29 May 2019 17:19
> *To:* Samuel Martín Moro <faus...@gmail.com>
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* RE: Issue by installing OKD OpenShift 3.11
>
>
>
>
>
> Thanks for your help and advice.
>
> Unfortunately it doesn't work yet, but perhaps it is a network issue...
>
>
>
> So, I'll explain my network architecture in more detail...
>
>
>
> All my VMs are using the default network 192.168.122.0/24 with forwarding
> NAT to reach the Internet. My laptop is 192.168.122.1 and it is also the
> default gateway for all systems (only one default gateway). This network
> works with DHCP.
>
>
>
> Then I've defined a separate internal subnet for the container network:
> 192.168.100.0/24, as an isolated network with internal routing only. This
> network uses static IP addresses, and the addresses are defined in DNS.
>
>
>
> Here are the details:
>
>
>
> node1
> ens10: 192.168.100.101/24
>
> eth1: 192.168.122.193/24
>
> docker0: 172.17.0.1/16
>
>
>
> node2
>
> ens10: 192.168.100.102/24
>
> eth1: 192.168.122.240/24
>
> docker0: 172.17.0.1/16
>
>
>
> master
>
> ens10: 192.168.100.100/24
>
> eth1: 192.168.122.54/24
>
> docker0: 172.17.0.1/16
>
>
>
> services
>
> ens10: 192.168.100.103/24
>
> eth1: 192.168.122.234/24
> docker0: 172.17.0.1/16
>
>
>
> I'm connecting and starting the Ansible job from my workstation VM,
> 192.168.100.50.
>
>
>
> Now, if I've understood correctly, the OpenShift service will bind its
> HTTP port on the same subnet as the default gateway? In my case, that
> would be the 192.168.122... subnet, right?
>
> Could that be the problem?
>
>
>
> I've defined all the IP addresses for my systems in OpenShift on the
> 192.168.100 subnet. Is that correct?
>
> Is it possible to use 2 networks, as in my case?
>
>
>
> It's not yet very clear to me how the network should be configured for
> OpenShift hosts. I thought about defining one network for external
> connectivity (internet) and one network for internal connections specific
> to OpenShift, but I'm not sure whether that is OK...
>
>
>
> Regards, J
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> ------------------------------
>
> *From:* Samuel Martín Moro [faus...@gmail.com]
> *Sent:* Friday, 24 May 2019 21:45
> *To:* Jérôme Meyer
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* Re: Issue by installing OKD OpenShift 3.11
>
> Oh, that makes perfect sense
>
> I would assume that your default gateway points to your workstation, in
> 192.168.100.0/24?
>
>
>
> -- although lately, I've seen some inconsistencies: usually, OpenShift
> services would bind to the address assigned to whichever interface routes
> to your default gateway.
>
>
>
> Assuming that switching your default gateway is not an option, you may
> force the OpenShift bind address from your openshift_node_groups
> definition.
>
> Dealing with that variable in INI format is quite painful and usually
> leads to syntax errors, ... First, we'll create a "group_vars" sub-folder
> alongside our inventory.
>
>
>
>
>
> mkdir -p <path-to-inventory-base-directory>/group_vars
>
>
>
>
>
>
>
> In that folder, we would create a file OSEv3.yml, with the following
> content:
>
>
>
>
>
> openshift_node_groups:
>
> - name: node-config-master-infra
>   labels:
>     - 'node-role.kubernetes.io/master=true'
>     - 'node-role.kubernetes.io/infra=true'
>   edits:
>   - key: kubeletArguments.node-ip
>     value: [ 192.168.122.54 ]
>
> - name: node-config-node1
>
>   labels:
>     - 'node-role.kubernetes.io/compute=true'
>   edits:
>   - key: kubeletArguments.node-ip
>     value: [ <insert-node1-ip-address> ]
>
> - name: node-config-node2
>
>   labels:
>     - 'node-role.kubernetes.io/compute=true'
>   edits:
>   - key: kubeletArguments.node-ip
>     value: [ <insert-node2-ip-address> ]
>
>
>
>
>
>
>
> see ./roles/openshift_facts/defaults/main.yml for the default
> openshift_node_groups definition, if you're curious.
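>
> For instance, something like this should print them (assuming you run it
> from the root of your openshift-ansible checkout):
>
> # show the shipped defaults for openshift_node_groups
> grep -n -A 40 'openshift_node_groups' roles/openshift_facts/defaults/main.yml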
>
>
>
> Also make sure that each node in your cluster loads its own
> configuration:
>
>
>
>
>
>
>
> [masters]
>
> master.olab.oshift.edu openshift_node_group_name=node-config-master-infra
>
>
>
> [etcd:children]
> masters
>
>
>
> [compute]
>
> node1.olab.oshift.edu openshift_node_group_name=node-config-node1
>
> node2.olab.oshift.edu openshift_node_group_name=node-config-node2
>
>
>
> [nodes:children]
>
> masters
>
> compute
>
>
>
> [nfs]
>
> ...
>
>
>
> [OSEv3:children]
>
> nodes
>
> nfs
>
>
>
> ...
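>
> Once the group_vars file and the inventory are in place, re-running the
> deployment would look roughly like this (paths assuming a standard
> openshift-ansible checkout; adjust to your layout):
>
> ansible-playbook -i inventory/hosts playbooks/prerequisites.yml
> ansible-playbook -i inventory/hosts playbooks/deploy_cluster.yml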
>
>
>
>
>
> Let us know how that goes.
>
>
>
>
>
> Regards
>
>
>
>
>
>
>
> On Fri, May 24, 2019 at 3:05 PM Jérôme Meyer <jerome.me...@lcsystems.ch>
> wrote:
>
> Hi,
>
>
>
> Thanks for your help and tips. Yeah, I forgot this time to remove the
> htpasswd entries.. ;(
>
>
>
> After changing the master definition to 'node-config-master-infra' in the
> inventory, I restarted the deploy_cluster playbook.
>
> As you suggested, I got the master-api and etcd information from docker
> and checked the logs.
>
>
>
> So, some questions arise:
>
>
>
>    1. Why is the following address used: *192.168.122.54*? It
>    corresponds to the master interface. It's a NAT address obtained via
>    DHCP to connect to my PC.
>    2. Apparently there is an issue with etcd access on the master:
>    *connection refused on 2379*.
>    3. In the last log, it appears that the request is made on the IP
>    address *0.0.0.0:8444*; is something wrong in my config?
>
>
>
> Here is the master's list of IP interfaces, where 192.168.100.100 is on
> the communication network for OpenShift, as defined in the hostname and
> DNS.
>
>
>
> *Interface list*
>
>
>
> [root@master ~]# ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: ens10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP group default qlen 1000
>     link/ether 52:54:00:ca:44:c8 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.100.100/24 brd 192.168.100.255 scope global noprefixroute
> ens10
>        valid_lft forever preferred_lft forever
>     inet6 fe80::5054:ff:feca:44c8/64 scope link
>        valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP group default qlen 1000
>     link/ether 52:54:00:a8:8b:00 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.54/24 brd 192.168.122.255 scope global noprefixroute
> dynamic eth1
>        valid_lft 3090sec preferred_lft 3090sec
>     inet6 fe80::c138:7cb0:f8af:7cba/64 scope link noprefixroute
>        valid_lft forever preferred_lft forever
> 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN group default
>     link/ether 02:42:a9:c9:8d:d3 brd ff:ff:ff:ff:ff:ff
>     inet 172.17.0.1/16 scope global docker0
>        valid_lft forever preferred_lft forever
>
>
>
> *Log from etcd-master*
>
>
>
>
>
> {"log":"2019-05-24 14:19:57.592591 D | etcdserver/api/v2http: [GET]
> /health remote:192.168.122.54:44748
> \n","stream":"stderr","time":"2019-05-24T12:19:57.592680803Z"}
> {"log":"2019-05-24 14:20:07.580420 D | etcdserver/api/v2http: [GET]
> /v2/members remote:192.168.122.54:45038
> \n","stream":"stderr","time":"2019-05-24T12:20:07.580688397Z"}
> {"log":"2019-05-24 14:20:07.590218 D | etcdserver/api/v2http: [GET]
> /health remote:192.168.122.54:45040
> \n","stream":"stderr","time":"2019-05-24T12:20:07.590356315Z"}
> {"log":"2019-05-24 14:20:17.582661 D | etcdserver/api/v2http: [GET]
> /v2/members remote:192.168.122.54:45336
> \n","stream":"stderr","time":"2019-05-24T12:20:17.582774753Z"}
> {"log":"2019-05-24 14:20:17.595674 D | etcdserver/api/v2http: [GET]
> /health remote:192.168.122.54:45338
> \n","stream":"stderr","time":"2019-05-24T12:20:17.595844742Z"}
> {"log":"2019-05-24 14:20:27.581915 D | etcdserver/api/v2http: [GET]
> /v2/members remote:192.168.122.54:45638
> \n","stream":"stderr","time":"2019-05-24T12:20:27.582036442Z"}
> {"log":"2019-05-24 14:20:27.592091 D | etcdserver/api/v2http: [GET]
> /health remote:192.168.122.54:45640
> \n","stream":"stderr","time":"2019-05-24T12:20:27.59225275Z"}
> {"log":"2019-05-24 14:20:37.584090 D | etcdserver/api/v2http: [GET]
> /v2/members remote:192.168.122.54:45932
> \n","stream":"stderr","time":"2019-05-24T12:20:37.584291782Z"}
> {"log":"2019-05-24 14:20:37.593862 D | etcdserver/api/v2http: [GET]
> /health remote:192.168.122.54:45934
> \n","stream":"stderr","time":"2019-05-24T12:20:37.593980682Z"}
>
>
>
> *Log from api-master*
>
>
>
> {"log":"I0524 14:18:50.016547       1 plugins.go:84] Registered admission
> plugin
> \"ResourceQuota\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016617699Z"}
> {"log":"I0524 14:18:50.016581       1 plugins.go:84] Registered admission
> plugin
> \"PodSecurityPolicy\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016622959Z"}
> {"log":"I0524 14:18:50.016622       1 plugins.go:84] Registered admission
> plugin
> \"Priority\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016659601Z"}
> {"log":"I0524 14:18:50.016662       1 plugins.go:84] Registered admission
> plugin
> \"SecurityContextDeny\"\n","stream":"stderr","time":"2019-05-24T12:18:50.01670916Z"}
> {"log":"I0524 14:18:50.016713       1 plugins.go:84] Registered admission
> plugin
> \"ServiceAccount\"\n","stream":"stderr","time":"2019-05-24T12:18:50.01678609Z"}
> {"log":"I0524 14:18:50.016753       1 plugins.go:84] Registered admission
> plugin
> \"DefaultStorageClass\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016856209Z"}
> {"log":"I0524 14:18:50.016784       1 plugins.go:84] Registered admission
> plugin
> \"PersistentVolumeClaimResize\"\n","stream":"stderr","time":"2019-05-24T12:18:50.01686304Z"}
> {"log":"I0524 14:18:50.016801       1 plugins.go:84] Registered admission
> plugin
> \"StorageObjectInUseProtection\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016865753Z"}
> {"log":"F0524 14:19:20.021832       1 start_api.go:68] dial tcp
> 192.168.100.100:2379: connect: connection
> refused\n","stream":"stderr","time":"2019-05-24T12:19:20.02217046Z"}
>
>
>
> *Container log*
>
>
>
> [root@master controllers]# tail -f 7.log
> {"log":"I0524 14:19:13.744728       1 reflector.go:133] Starting reflector
> *v1.PersistentVolumeClaim (0s) from
> k8s.io/client-go/informers/factory.go:130\n
> ","stream":"stderr","time":"2019-05-24T12:19:13.747727009Z"}
> {"log":"I0524 14:19:13.744754       1 reflector.go:171] Listing and
> watching *v1.PersistentVolumeClaim from
> k8s.io/client-go/informers/factory.go:130\n
> ","stream":"stderr","time":"2019-05-24T12:19:13.74773138Z"}
> {"log":"I0524 14:19:13.745323       1 reflector.go:133] Starting reflector
> *v1.ReplicationController (0s) from
> k8s.io/client-go/informers/factory.go:130\n
> ","stream":"stderr","time":"2019-05-24T12:19:13.747735832Z"}
> {"log":"I0524 14:19:13.745340       1 reflector.go:171] Listing and
> watching *v1.ReplicationController from
> k8s.io/client-go/informers/factory.go:130\n
> ","stream":"stderr","time":"2019-05-24T12:19:13.747740084Z"}
> {"log":"I0524 14:19:13.745907       1 reflector.go:133] Starting reflector
> *v1beta1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:130\n
> ","stream":"stderr","time":"2019-05-24T12:19:13.747744229Z"}
> {"log":"I0524 14:19:13.745925       1 reflector.go:171] Listing and
> watching *v1beta1.ReplicaSet from
> k8s.io/client-go/informers/factory.go:130\n
> ","stream":"stderr","time":"2019-05-24T12:19:13.747748717Z"}
> {"log":"I0524 14:19:13.746647       1 controllermanager.go:128] Version:
> v1.11.0+d4cacc0\n","stream":"stderr","time":"2019-05-24T12:19:13.747753221Z"}
> {"log":"I0524 14:19:13.746697       1 leaderelection.go:185] attempting to
> acquire leader lease  kube
> -system/kube-controller-manager...\n","stream":"stderr","time":"2019-05-24T12:19:13.747757701Z"}
> {"log":"I0524 14:19:13.746889       1 standalone_apiserver.go:101] Started
> health checks at 0.0.0.0:8444
> \n","stream":"stderr","time":"2019-05-24T12:19:13.747761834Z"}
> {"log":"F0524 14:19:13.747339       1 standalone_apiserver.go:117] listen
> tcp4 0.0.0.0:8444: bind: address already in
> use\n","stream":"stderr","time":"2019-05-24T12:19:13.747765655Z"}
>
> Best regards, J
>
>
>
>
>
> *From:* Samuel Martín Moro <faus...@gmail.com>
> *Sent:* Thursday, 23 May 2019 23:53
> *To:* Jérôme Meyer <jerome.me...@lcsystems.ch>
> *Cc:* users@lists.openshift.redhat.com
> *Subject:* Re: Issue by installing OKD OpenShift 3.11
>
>
>
> Hi,
>
>
>
>
>
> As a general rule, you may want to check the corresponding containers'
> health and logs.
>
>
>
> You won't find any apache or nginx listening. The process serving on
> :8443 is openshift itself, and it should be started in a container.
>
> Note that the master-api container, in charge of that service, closely
> relies on another container: etcd. That is what Ansible is waiting for in
> your logs.
>
>
>
> On the master node, use "docker ps" (worst case scenario, "docker ps -a").
>
> Locate your etcd and master-api container IDs (first column).
>
> Then use "docker logs [-f] <container-id>", search for errors.
>
>
>
> You may find file copies of these logs in /var/log/containers (and
> /var/log/pods).
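>
> For example (the names below are just illustrative, yours may differ):
>
> # list all containers, running or not, and spot etcd / the master api
> docker ps -a | grep -E 'etcd|api'
> # follow the logs of one of them
> docker logs -f <container-id>
> # or read the on-disk copies
> ls /var/log/containers/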
>
>
>
> Let us know how that goes.
>
>
>
> And try to avoid mailing your htpasswd entries ;)
>
>
>
>
>
> Regards.
>
>
>
>
>
>
>
> On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer <jerome.me...@lcsystems.ch>
> wrote:
>
> Dear Team,
>
> I've encountered some issues installing OpenShift (OKD 3.11) on 3 VMs
> (1 master and 2 nodes).
> I followed the recommendations and procedure as described in the docs.
> I launched the Ansible prerequisites playbook without issue, all was
> fine. But unfortunately the deploy_cluster playbook didn't finish:
> some errors appear when it starts the pods.
>
> 2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (2 retries left).
> 2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (1 retries left).
> 2019-05-17 16:59:02,998 p=6592 u=root |  failed: [master.lab.oshift.edu]
> (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg":
> {"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n
> kube-system", "results": [{}], "returncode": 1, "stderr": "The connection
> to the server master:8443 was refused - did you specify the right host or
> port?\n", "stdout": ""}}
> 2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (60 retries left).
> 2019-05-17 16:59:08,980 p=6592 u=root |  FAILED - RETRYING: Wait for
> control plane pods to appear (59 retries left).
>
> Regarding this issue, I've checked the master server and I didn't see
> HTTP port 8443 open, nor any http/nginx/whatever service running,
> strange.....
>
>
> The DNS server was installed on a VM called services, and the dig command
> was OK.
>
>
>
> Please let me know if I failed to install something or if the inventory
> config is wrong. What should I do to troubleshoot this problem?
>
> Thanks and best regards, J.
>
>
>
>
>
> *Here's the inventory file:*
>
>
>
> # cat inventory/hosts
> #####################################################################
> #
> # HOSTS configuration for our labs
> #
> # 2019-05-17
> #
> #####################################################################
>
> [workstation]
> workstation.lab.oshift.edu
>
> [masters]
> master.lab.oshift.edu
>
> [etcd]
> master.lab.oshift.edu
>
> [nodes]
> master.lab.oshift.edu openshift_node_group_name="node-config-master"
> node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
> node2.lab.oshift.edu openshift_node_group_name="node-config-compute"
>
> [nfs]
> services.lab.oshift.edu
>
> # Create an OSEv3 group that contains the masters and nodes groups
> [OSEv3:children]
> masters
> nodes
> etcd
> nfs
>
> [OSEv3:vars]
>
> ###############################################################################
> # Common/ Required configuration variables follow                           #
> ###############################################################################
> # How Ansible accesses hosts
> ansible_user=root
> ansible_become=true
>
> openshift_deployment_type=origin
>
> openshift_release="3.11"
>
> openshift_master_default_subdomain=apps.lab.oshift.edu
>
>
> ###############################################################################
> # Additional configuration variables follow                                 #
> ###############################################################################
>
> # DEBUG
> debug_level=4
>
> # DISABLE SOME CHECKS
>
> openshift_disable_check=disk_availability,memory_availability,docker_storage
>
> # Enable etcd debug logging, defaults to false
> etcd_debug=true
> # Set etcd log levels by package
> etcd_log_package_levels="etcdserver=WARNING,security=INFO"
>
> # htpasswd auth
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
> # Defining htpasswd users
> openshift_master_htpasswd_users={'admin':
> '$apr1$Ky/ZY39n$Z8/t3xJsnxGANzypVTtmD0', 'developer':
> '$apr1$MdVAOTmy$8nB.ANU4OeciLjDeU68w/1'}
>
> # Option B - External NFS Host
> openshift_hosted_registry_storage_kind=nfs
> openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
> openshift_hosted_registry_storage_nfs_directory=/openshift_storage
> openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
> openshift_hosted_registry_storage_volume_name=registry
> openshift_hosted_registry_storage_volume_size=10Gi
>
> # ENABLE FIREWALLD
> os_firewall_use_firewalld=true
> [root@workstation openshift-ansible]#
>
>
>
> _______________________________________________
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
> --
>
> Samuel Martín Moro
> {EPITECH.} 2011
>
> "Nobody wants to say how this works.
>  Maybe nobody knows ..."
>                       Xorg.conf(5)
>
>
>
>
> --
>
> Samuel Martín Moro
> {EPITECH.} 2011
>
> "Nobody wants to say how this works.
>  Maybe nobody knows ..."
>                       Xorg.conf(5)
>
>

-- 
Samuel Martín Moro
{EPITECH.} 2011

"Nobody wants to say how this works.
 Maybe nobody knows ..."
                      Xorg.conf(5)
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
