Sounds like maybe your master node is not schedulable. Can you run:

$ oc get nodes

$ oc describe node master

?
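If the master does turn out to be unschedulable (the usual default when a master is also registered as a node), the registry and router deployers will never land anywhere, since master-home is the only host carrying the region=infra label in your inventory. A minimal sketch of how I'd check and, if that is the case, flip it (node name taken from your inventory, adjust as needed):

$ oc get nodes --show-labels

$ oc adm manage-node master-home.hetzlabs.pro --schedulable=true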


On Tue, May 23, 2017 at 9:42 AM, Hetz Ben Hamo <h...@hetz.biz> wrote:

> Sure, here it is:
>
> # oc describe pod docker-registry-2-deploy
> Name:                   docker-registry-2-deploy
> Namespace:              default
> Security Policy:        restricted
> Node:                   /
> Labels:                 openshift.io/deployer-pod-for.name=docker-registry-2
> Status:                 Pending
> IP:
> Controllers:            <none>
> Containers:
>   deployment:
>     Image:      openshift/origin-deployer:v1.4.1
>     Port:
>     Volume Mounts:
>       /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-sbvm4 (ro)
>     Environment Variables:
>       KUBERNETES_MASTER:        https://master-home:8443
>       OPENSHIFT_MASTER:         https://master-home:8443
>       BEARER_TOKEN_FILE:        /var/run/secrets/kubernetes.io/serviceaccount/token
>       OPENSHIFT_CA_DATA:        -----BEGIN CERTIFICATE-----
> MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
> c2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEwHhcNMTcwNTIzMTIxMzMwWhcNMjIwNTIy
> MTIxMzMxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEw
> ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9imtjAe8JBjpD99nt3D4h
> VCwlWKCMugpIGWYdnHaICBS71KuIim8pWaOWYPUb73QhoUUZhZ80MYOzlB7lk/xK
> NWUnQBDFYc9zKqXkxjiWlTXHv1UCyB56mxFdfxPTHN61JbE8dD9jbiBLRudgb1cq
> Vhff4CRXqkdDURk8KjpnGkWW57Ky0Icp0rbOrRT/OhYv5CB8sqJedSC2VKfe9qtz
> +L4ykOOa4Q1qfqD7YqPDAqnUEJFXEbqjFCdLe6q2TS0vscx/rRJcANmzApgw4BRd
> OxEHH1KX6ariXSNkSWxhQIBa8qDukDrGc2dvAoLHi8ALBbnpGLE0zwtf087zdyF/
> AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
> SIb3DQEBCwUAA4IBAQA7Nn3iGUVH0HJN6WxR6oirpIv9VdqRgugqOoBM8O5GlV7D
> 7kd4VGFSzFtXKr0kHCgA+/6sEiu0ZlQZT7IvwDWgiY/bXOY/gT8whMWVLXbXBGGT
> 4brdqSRQVdgjv56kBG/cqIWedwNzItGFb+eye+AjHi20fUuVKW49Z7lvStHcvOHK
> c4XyP+e3S/wg6VEMT64kAuZUvvRLhUvJK9ZfxlEGZnjQ+qrYCEpGGDjeDTeXOxMi
> 6NL7Rh09p/yjemw3u+EZkfNlBMgBsA2+zEOxKbAGmENjjctFGRTJVGKq+FWR2HMi
> P2pHCOPEcn2on3GAyTncdp1ANcBNTjb8gTnsoPbc
> -----END CERTIFICATE-----
>
>       OPENSHIFT_DEPLOYMENT_NAME:        docker-registry-2
>       OPENSHIFT_DEPLOYMENT_NAMESPACE:   default
> Conditions:
>   Type          Status
>   PodScheduled  False
> Volumes:
>   deployer-token-sbvm4:
>     Type:       Secret (a volume populated by a Secret)
>     SecretName: deployer-token-sbvm4
> QoS Class:      BestEffort
> Tolerations:    <none>
> Events:
>   FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
>   ---------     --------        -----   ----                    -------------   --------        ------                  -------
>   11m           7m              4       {default-scheduler }                    Warning         FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any node
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>
>   4m    3m      2       {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>
>   11m   2m      3       {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>
>   13m   2m      13      {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>
>   11m   1m      7       {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>
>   13m   1m      4       {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>
>   13m   37s     10      {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>
>   13m   5s      7       {default-scheduler }            Warning
> FailedScheduling        pod (docker-registry-2-deploy) failed to fit in any
> node
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>
>
> # oc describe pod router-2-deploy
> Name:                   router-2-deploy
> Namespace:              default
> Security Policy:        restricted
> Node:                   /
> Labels:                 openshift.io/deployer-pod-for.name=router-2
> Status:                 Pending
> IP:
> Controllers:            <none>
> Containers:
>   deployment:
>     Image:      openshift/origin-deployer:v1.4.1
>     Port:
>     Volume Mounts:
>       /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-sbvm4 (ro)
>     Environment Variables:
>       KUBERNETES_MASTER:        https://master-home:8443
>       OPENSHIFT_MASTER:         https://master-home:8443
>       BEARER_TOKEN_FILE:        /var/run/secrets/kubernetes.io/serviceaccount/token
>       OPENSHIFT_CA_DATA:        -----BEGIN CERTIFICATE-----
> MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
> c2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEwHhcNMTcwNTIzMTIxMzMwWhcNMjIwNTIy
> MTIxMzMxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEw
> ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9imtjAe8JBjpD99nt3D4h
> VCwlWKCMugpIGWYdnHaICBS71KuIim8pWaOWYPUb73QhoUUZhZ80MYOzlB7lk/xK
> NWUnQBDFYc9zKqXkxjiWlTXHv1UCyB56mxFdfxPTHN61JbE8dD9jbiBLRudgb1cq
> Vhff4CRXqkdDURk8KjpnGkWW57Ky0Icp0rbOrRT/OhYv5CB8sqJedSC2VKfe9qtz
> +L4ykOOa4Q1qfqD7YqPDAqnUEJFXEbqjFCdLe6q2TS0vscx/rRJcANmzApgw4BRd
> OxEHH1KX6ariXSNkSWxhQIBa8qDukDrGc2dvAoLHi8ALBbnpGLE0zwtf087zdyF/
> AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
> SIb3DQEBCwUAA4IBAQA7Nn3iGUVH0HJN6WxR6oirpIv9VdqRgugqOoBM8O5GlV7D
> 7kd4VGFSzFtXKr0kHCgA+/6sEiu0ZlQZT7IvwDWgiY/bXOY/gT8whMWVLXbXBGGT
> 4brdqSRQVdgjv56kBG/cqIWedwNzItGFb+eye+AjHi20fUuVKW49Z7lvStHcvOHK
> c4XyP+e3S/wg6VEMT64kAuZUvvRLhUvJK9ZfxlEGZnjQ+qrYCEpGGDjeDTeXOxMi
> 6NL7Rh09p/yjemw3u+EZkfNlBMgBsA2+zEOxKbAGmENjjctFGRTJVGKq+FWR2HMi
> P2pHCOPEcn2on3GAyTncdp1ANcBNTjb8gTnsoPbc
> -----END CERTIFICATE-----
>
>       OPENSHIFT_DEPLOYMENT_NAME:        router-2
>       OPENSHIFT_DEPLOYMENT_NAMESPACE:   default
> Conditions:
>   Type          Status
>   PodScheduled  False
> Volumes:
>   deployer-token-sbvm4:
>     Type:       Secret (a volume populated by a Secret)
>     SecretName: deployer-token-sbvm4
> QoS Class:      BestEffort
> Tolerations:    <none>
> Events:
>   FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
>   ---------     --------        -----   ----                    -------------   --------        ------                  -------
>   13m           13m             1       {default-scheduler }                    Warning         FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>
>   14m   10m     2       {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>
>   15m   5m      12      {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>
>   11m   4m      3       {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
>
>   14m   1m      10      {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node1-home): CheckServiceAffinity, MatchNodeSelector
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>
>   15m   54s     12      {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
>
>   15m   46s     11      {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node2-home): CheckServiceAffinity, MatchNodeSelector
>
>   15m   30s     5       {default-scheduler }            Warning
> FailedScheduling        pod (router-2-deploy) failed to fit in any node
> fit failure on node (node2-home): MatchNodeSelector, CheckServiceAffinity
> fit failure on node (node1-home): MatchNodeSelector, CheckServiceAffinity
>
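> I guess the next thing to compare is the node labels versus the node selector
> the deployer pods carry, something along these lines (not sure these are
> exactly the right commands):
>
> # oc get nodes --show-labels
> # oc get pod docker-registry-2-deploy -o jsonpath='{.spec.nodeSelector}'
>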
> I think there's something wrong with my Ansible host file - here it is,
> especially the last few lines:
>
> [OSEv3:children]
> masters
> nodes
>
> [OSEv3:vars]
> ansible_ssh_user=root
> # ansible_become=yes
>
> deployment_type=origin
> openshift_release=v1.4
> openshift_image_tag=v1.4.1
> containerized=true
> openshift_install_examples=true
> # openshift_hosted_metrics_deploy=true
>
> # use htpasswd authentication with demo/demo
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
> openshift_master_htpasswd_users={'demo': '$apr1$.MaA77kd$Rlnn6RXq9kCjnEfh5I3w/.'}
>
> # put the router on dedicated infra node
> openshift_hosted_router_selector='region=infra'
> openshift_master_default_subdomain=apps.hetzlabs.pro
>
> # put the image registry on dedicated infra node
> openshift_hosted_registry_selector='region=infra'
>
> # project pods should be placed on primary nodes
> osm_default_node_selector='region=primary'
>
> [masters]
> master-home.hetzlabs.pro openshift_public_hostname="master-home.hetzlabs.pro"
>
> [nodes]
> # master needs to be included in the node to be configured in the SDN
> master-home.hetzlabs.pro openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
> node1-home.hetzlabs.pro openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
> node2-home.hetzlabs.pro openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
>
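> One thing I'm not sure about: master-home is the only host labeled
> region=infra, and if the installer leaves a master that is also a node as
> unschedulable (which I understand is the default), that would explain the
> MatchNodeSelector failures above. Would marking it schedulable in the
> inventory be the right fix, something like this? (I'm guessing at the
> variable name here.)
>
> master-home.hetzlabs.pro openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
>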
> Basically I'm looking to run OpenShift Origin (1.4 or 1.5) with 1 master
> and 2 nodes (3 VMs total). Am I doing it right?
>
> Thanks
>
>
>
> On Tue, May 23, 2017 at 4:24 PM, Rodrigo Bersa <rbe...@redhat.com> wrote:
>
>> Hi Hetz,
>>
>> It seems that your registry and router pods are not running. Probably
>> there's a problem preventing them from deploying.
>>
>> Can you send the output of the commands below?
>>
>> # oc describe pod docker-registry-1-deploy
>> # oc describe pod router-1-deploy
>>
>>
>>
>> Rodrigo Bersa
>>
>> Cloud Consultant, RHCSA, RHCVA
>>
>> Red Hat Brasil <https://www.redhat.com>
>>
>> rbe...@redhat.com    M: +55 11 99557-5841
>>
>> On Tue, May 23, 2017 at 8:28 AM, Hetz Ben Hamo <h...@hetz.biz> wrote:
>>
>>> [root@master-home ~]# oc get pods -n default
>>> NAME                        READY     STATUS    RESTARTS   AGE
>>> docker-registry-1-deploy    0/1       Pending   0          16m
>>> registry-console-1-deploy   0/1       Error     0          15m
>>> router-1-deploy             0/1       Pending   0          17m
>>> [root@master-home ~]# oc logs registry-console-1-deploy
>>> --> Scaling registry-console-1 to 1
>>> --> Waiting up to 10m0s for pods in rc registry-console-1 to become ready
>>> error: update acceptor rejected registry-console-1: pods for rc
>>> "registry-console-1" took longer than 600 seconds to become ready
>>> [root@master-home ~]# oc logs router-1-deploy
>>> [root@master-home ~]# oc logs docker-registry-1-deploy
>>> [root@master-home ~]# oc logs docker-registry-1-deploy -n default
>>> [root@master-home ~]# oc get pods
>>>
>>>
>>> Thanks,
>>> *Hetz Ben Hamo*
>>> You're welcome to visit my consulting blog <http://linvirtstor.net/> or my
>>> personal blog <http://benhamo.org>
>>>
>>> On Tue, May 23, 2017 at 1:49 AM, Ben Parees <bpar...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Mon, May 22, 2017 at 6:18 PM, Hetz Ben Hamo <h...@hetz.biz> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I've built a 3-node OpenShift Origin cluster using the host file included
>>>>> below, but a few things seem to be broken. I haven't modified anything in
>>>>> OpenShift yet, just used openshift-ansible as checked out today.
>>>>>
>>>>> Problem one: after building an image from the examples (I chose the Java
>>>>> WildFly example) I get:
>>>>>
>>>>> [INFO] ------------------------------------------------------------------------
>>>>> [INFO] BUILD SUCCESS
>>>>> [INFO] ------------------------------------------------------------------------
>>>>> [INFO] Total time: 12.182 s
>>>>> [INFO] Finished at: 2017-05-22T22:08:21+00:00
>>>>> [INFO] Final Memory: 14M/134M
>>>>> [INFO] ------------------------------------------------------------------------
>>>>> Moving built war files into /wildfly/standalone/deployments for later
>>>>> deployment...
>>>>> Moving all war artifacts from /opt/app-root/src/target directory into
>>>>> /wildfly/standalone/deployments for later deployment...
>>>>> '/opt/app-root/src/target/ROOT.war' -> '/wildfly/standalone/deployments/ROOT.war'
>>>>> Moving all ear artifacts from /opt/app-root/src/target directory into
>>>>> /wildfly/standalone/deployments for later deployment...
>>>>> Moving all rar artifacts from /opt/app-root/src/target directory into
>>>>> /wildfly/standalone/deployments for later deployment...
>>>>> Moving all jar artifacts from /opt/app-root/src/target directory into
>>>>> /wildfly/standalone/deployments for later deployment...
>>>>> ...done
>>>>> Pushing image 172.30.172.85:5000/test1/wf:latest ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Warning: Push failed, retrying in 5s ...
>>>>> Registry server Address:
>>>>> Registry server User Name: serviceaccount
>>>>> Registry server Email: serviceacco...@example.org
>>>>> Registry server Password: <<non-empty>>
>>>>> error: build error: Failed to push image: Get https://172.30.172.85:5000/v1/_ping: dial tcp 172.30.172.85:5000: getsockopt: connection refused
>>>>>
>>>>>
>>>> Can you confirm your registry pod is running in the default namespace
>>>> (oc get pods -n default)? Can you get logs from it?
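>>>>
>>>> For example, something along these lines would help (a sketch;
>>>> <registry-pod-name> is whatever shows up in the pod listing):
>>>>
>>>> # oc get pods -n default -o wide
>>>> # oc logs <registry-pod-name> -n default
>>>> # oc get svc docker-registry -n default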
>>>>
>>>>
>>>>
>>>>>
>>>>> Another problem: I added the metrics option, so it installed Hawkular,
>>>>> but it complains that it needs SSL approval (it shows a message about a
>>>>> problem with Hawkular and gives a link to open), and when I click the
>>>>> link I get: connection refused.
>>>>> link: connection refused.
>>>>>
>>>>> I've tested the host configuration on two sets of VMs (one on
>>>>> DigitalOcean, another set here at home with VMware). I've set up DNS with a
>>>>> subdomain wildcard and I can ping the app names, but trying to connect
>>>>> through a browser or curl gives connection refused.
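>>>>>
>>>>> For what it's worth, this is roughly how I'm testing it (the app hostname
>>>>> is just an example under the wildcard):
>>>>>
>>>>> $ oc get pods -n default -o wide
>>>>> $ curl -v http://<some-app>.apps.test.com/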
>>>>>
>>>>> Have I missed something?
>>>>>
>>>>> here is my byo host file:
>>>>>
>>>>> [OSEv3:children]
>>>>> masters
>>>>> nodes
>>>>>
>>>>> [OSEv3:vars]
>>>>> ansible_ssh_user=root
>>>>>
>>>>> deployment_type=origin
>>>>> openshift_release=v1.5.0
>>>>> containerized=true
>>>>> openshift_install_examples=true
>>>>> openshift_hosted_metrics_deploy=true
>>>>>
>>>>> # use htpasswd authentication with demo/demo
>>>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
>>>>> openshift_master_htpasswd_users={'demo': '$XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.'}
>>>>>
>>>>> # put the router on dedicated infra node
>>>>> openshift_hosted_router_selector='region=infra'
>>>>> openshift_master_default_subdomain=apps.test.com
>>>>>
>>>>> # put the image registry on dedicated infra node
>>>>> openshift_hosted_registry_selector='region=infra'
>>>>>
>>>>> # project pods should be placed on primary nodes
>>>>> osm_default_node_selector='region=primary'
>>>>>
>>>>> [masters]
>>>>> master-home.test.com openshift_public_hostname="master-home.test.com"
>>>>>
>>>>> [nodes]
>>>>> # master needs to be included in the node to be configured in the SDN
>>>>> # master-home.test.com
>>>>> master-home.test.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
>>>>> node1-home.test.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
>>>>> node2-home.test.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Ben Parees | OpenShift
>>>>
>>>>
>>>
>>>
>>>
>>
>


-- 
Ben Parees | OpenShift
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
