Hello Skarbek, I redeployed the router as you suggested, but the router pod is still stuck in Pending.
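For what it's worth, here is how I read the fit failures below, as a rough sketch. I'm assuming the router dc carries a region=infra node selector (that's a hypothetical on my part; the truncated "oc describe dc router" output in this thread doesn't show the selector). The scheduler only admits nodes whose labels contain every key=value pair of the pod's nodeSelector:

```shell
#!/bin/sh
# Rough sketch of the MatchNodeSelector failures, assuming the router dc
# carries a region=infra node selector (hypothetical; not visible in the
# truncated "oc describe dc router" output in this thread).
dc_selector="region=infra"

# Report whether a node's label set satisfies the dc's selector.
check_node() {
  name=$1
  labels=$2
  case ",$labels," in
    *",$dc_selector,"*) echo "$name: matches $dc_selector" ;;
    *) echo "$name: fails MatchNodeSelector ($dc_selector not in labels)" ;;
  esac
}

# Node labels copied from the "oc get nodes --show-labels" output below.
check_node master.dev.local "kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default"
check_node node1.dev.local "kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc"
check_node node2.dev.local "kubernetes.io/hostname=node2.dev.local,region=primary,zone=dhc"
```

If that reading is right, only the master satisfies the selector, and the master is in turn rejected by PodFitsPorts, i.e. something on master.dev.local already holds host ports 80, 443, or 1936. In that case, either labeling a compute node with region=infra (e.g. oc label node node1.dev.local region=infra --overwrite) or freeing those ports on the master might let the pod schedule.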
*1. oc get pods*

NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-2-pbvcf   1/1       Running   0          2d
router-2-8uodm            0/1       Pending   0          20s
router-2-deploy           1/1       Running   0          25s

*2. oc describe pod router-2-8uodm*

Name:           router-2-8uodm
Namespace:      openshift
Image(s):       openshift/origin-haproxy-router:v1.1.4
Node:           /
Labels:         deployment=router-2,deploymentconfig=router,router=router
Status:         Pending
Reason:
Message:
IP:
Controllers:    ReplicationController/router-2
Containers:
  router:
    Container ID:
    Image:          openshift/origin-haproxy-router:v1.1.4
    Image ID:
    Ports:          80/TCP, 443/TCP, 1936/TCP
    QoS Tier:
      cpu:          BestEffort
      memory:       BestEffort
    State:          Waiting
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://localhost:1936/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://localhost:1936/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
      *[Truncated Certificate Data for extra clarity]*
      OPENSHIFT_MASTER:                     https://master.dev.local:8443
      ROUTER_EXTERNAL_HOST_HOSTNAME:
      ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
      ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
      ROUTER_EXTERNAL_HOST_INSECURE:        false
      ROUTER_EXTERNAL_HOST_PARTITION_PATH:
      ROUTER_EXTERNAL_HOST_PASSWORD:
      ROUTER_EXTERNAL_HOST_PRIVKEY:         /etc/secret-volume/router.pem
      ROUTER_EXTERNAL_HOST_USERNAME:
      ROUTER_SERVICE_HTTPS_PORT:            443
      ROUTER_SERVICE_HTTP_PORT:             80
      ROUTER_SERVICE_NAME:                  router
      ROUTER_SERVICE_NAMESPACE:             openshift
      ROUTER_SUBDOMAIN:
      STATS_PASSWORD:                       j4RksqDAD6
      STATS_PORT:                           1936
      STATS_USERNAME:                       admin
Volumes:
  router-token-qk5ot:
    Type:        Secret (a secret that should populate this volume)
    SecretName:  router-token-qk5ot
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
  ---------  --------  -----  ----                  -------------  -------  ------            -------
  29s        29s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
    fit failure on node (master.dev.local): PodFitsPorts
    fit failure on node (node1.dev.local): Region
    fit failure on node (node2.dev.local): MatchNodeSelector
  28s        28s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
    fit failure on node (master.dev.local): PodFitsPorts
    fit failure on node (node1.dev.local): MatchNodeSelector
    fit failure on node (node2.dev.local): MatchNodeSelector
  22s        22s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
    fit failure on node (master.dev.local): PodFitsPorts
    fit failure on node (node1.dev.local): Region
    fit failure on node (node2.dev.local): Region
  26s        14s       2      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
    fit failure on node (master.dev.local): PodFitsPorts
    fit failure on node (node1.dev.local): MatchNodeSelector
    fit failure on node (node2.dev.local): Region

Kind Regards!

On Fri, Apr 8, 2016 at 11:27 PM, Skarbek, John <john.skar...@ca.com> wrote:

> I have a feeling that now that you’ve enabled scheduling this ought to
> work. I bet if you ran a deploy, it’ll work now. You’ll need to cancel the
> currently running one, so the following commands *might* help out:
>
>     oc deploy --cancel dc/router -n default
>     oc deploy --latest dc/router -n default
>
> --
> John Skarbek
>
> On April 8, 2016 at 14:01:06, Mfawa Alfred Onen (muffycomp...@gmail.com)
> wrote:
>
> Hello Tobias, below is the output of the commands you mentioned:
>
> *1.
oc get nodes --show-labels*
>
> NAME              STATUS  AGE  LABELS
> master.dev.local  Ready   10d  kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
> node1.dev.local   Ready   10d  kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc
> node2.dev.local   Ready   10d  kubernetes.io/hostname=node2.dev.local,region=primary,zone=dhc
>
> *2.
oc describe dc router*
>
> Name:            router
> Created:         4 minutes ago
> Labels:          router=router
> Annotations:     <none>
> Latest Version:  1
> Triggers:        Config
> Strategy:        Rolling
> Template:
>   Selector:  router=router
>   Replicas:  1
>   Containers:
>     router:
>       Image:  openshift/origin-haproxy-router:v1.1.4
>       QoS Tier:
>         cpu:     BestEffort
>         memory:  BestEffort
>       Liveness:   http-get http://localhost:1936/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
>       Readiness:  http-get http://localhost:1936/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
>       Environment Variables:
>         *[Truncated Certificate Data for extra clarity]*
>         OPENSHIFT_MASTER:                     https://master.dev.local:8443
>         ROUTER_EXTERNAL_HOST_HOSTNAME:
>         ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
>         ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
>         ROUTER_EXTERNAL_HOST_INSECURE:        false
>         ROUTER_EXTERNAL_HOST_PARTITION_PATH:
>         ROUTER_EXTERNAL_HOST_PASSWORD:
>         ROUTER_EXTERNAL_HOST_PRIVKEY:         /etc/secret-volume/router.pem
>         ROUTER_EXTERNAL_HOST_USERNAME:
>         ROUTER_SERVICE_HTTPS_PORT:            443
>         ROUTER_SERVICE_HTTP_PORT:             80
>         ROUTER_SERVICE_NAME:                  router
>         ROUTER_SERVICE_NAMESPACE:             openshift
>         ROUTER_SUBDOMAIN:
>         STATS_PASSWORD:                       Lt1ZhBJc8n
>         STATS_PORT:                           1936
>         STATS_USERNAME:                       admin
>
> Deployment #1 (latest):
>   Name:         router-1
>   Created:      4 minutes ago
>   Status:       Running
>   Replicas:     1 current / 1 desired
>   Selector:     deployment=router-1,deploymentconfig=router,router=router
>   Labels:       openshift.io/deployment-config.name=router,router=router
>   Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
> Events:
>   FirstSeen  LastSeen  Count  From                              SubobjectPath  Type     Reason             Message
>   ---------  --------  -----  ----                              -------------  -------  ------             -------
>   4m         4m        1      {deploymentconfig-controller }                   Normal   DeploymentCreated  Created new deployment "router-1" for version 1
>   4m         4m        1      {deployer }                                      Warning  FailedUpdate       Error updating deployment openshift/router-1 status to Pending
>
> On Thu, Apr 7, 2016 at 12:50 PM, Skarbek, John <john.skar...@ca.com> wrote:
>
>> Hello,
>>
>> I ponder if there’s an issue with the labels being utilized by the nodes
>> and the pods.
Can you run the following command:
>>
>>     oc get nodes --show-labels
>>
>> And then an:
>>
>>     oc describe dc router
>>
>> --
>> John Skarbek
>>
>> On April 7, 2016 at 04:26:37, Mfawa Alfred Onen (muffycomp...@gmail.com)
>> wrote:
>>
>> So I enabled scheduling as you pointed out but still no luck:
>>
>> *oc get nodes*
>>
>> NAME              STATUS  AGE
>> master.dev.local  Ready   8d
>> node1.dev.local   Ready   8d
>> node2.dev.local   Ready   8d
>>
>> *oc get pods*
>>
>> docker-registry-2-pbvcf   1/1   Running   0   10h
>> router-1-bk55a            0/1   Pending   0   1s
>> router-1-deploy           1/1   Running   0   4s
>>
>> *oc describe pod router-1-bk55a*
>>
>> <Showing only Event logs>
>> Events:
>>   FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
>>   ---------  --------  -----  ----                  -------------  -------  ------            -------
>>   1m         1m        1      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>>     fit failure on node (master.dev.local): PodFitsPorts
>>     fit failure on node (node1.dev.local): Region
>>     fit failure on node (node2.dev.local): MatchNodeSelector
>>   1m         1m        1      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>>     fit failure on node (node2.dev.local): MatchNodeSelector
>>     fit failure on node (master.dev.local): PodFitsPorts
>>     fit failure on node (node1.dev.local): MatchNodeSelector
>>   1m         1m        2      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>>     fit failure on node (master.dev.local): PodFitsPorts
>>     fit failure on node (node1.dev.local): Region
>>     fit failure on node (node2.dev.local): Region
>>   47s        47s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>>     fit failure on node (node1.dev.local): Region
>>     fit failure on node (node2.dev.local): Region
>>     fit failure on node (master.dhcpaas.com
): PodFitsPorts
>>   1m         15s       2      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>>     fit failure on node (master.dev.local): PodFitsPorts
>>     fit failure on node (node1.dev.local): MatchNodeSelector
>>     fit failure on node (node2.dev.local): Region
>>
>> Regards!
>>
>> On Thu, Apr 7, 2016 at 8:01 AM, Tobias Florek <opensh...@ibotty.net> wrote:
>>
>>> Hi.
>>>
>>> I assume your router does not get scheduled on master.dev.local, because
>>> scheduling is disabled there:
>>>
>>> > *1. oc get nodes*
>>> >
>>> > NAME              STATUS                    AGE
>>> > master.dev.local  Ready,SchedulingDisabled  8d
>>>
>>> Run
>>>
>>>     oadm manage-node master.dev.local --schedulable=true
>>>
>>> to enable pods to run on your master.
>>>
>>> Cheers,
>>> Tobias Florek

--
*Mfawa Alfred Onen*
System Administrator / GDG Lead, Bingham University
Department of Computer Science,
Bingham University.

E-Mail: muffycomp...@gmail.com
Phone1: +234 805 944 3154
Phone2: +234 803 079 6088
Twitter: @muffycompo <https://twitter.com/muffycompo>
Google+: https://plus.google.com/+MfawaAlfredOnen
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users