Please accept my invitation to join GDG Bingham University

2017-05-04 Thread Mfawa Alfred Onen

GDG Bingham University


Join Mfawa Alfred Onen in Nassarawa. Be the first to hear about upcoming 
Meetups.

Google Developer Group (GDG) Bingham University is a group of like-minded 
developers using Google's products to solve problems in their communities and 
help create an outreach platform for other devel...

--

Accept invitation


--

---
This message was sent by Meetup on behalf of Mfawa Alfred Onen 
(https://www.meetup.com/GDG-Bingham-University/members/223351670/) from GDG 
Bingham University.


Questions? You can email Meetup Support at supp...@meetup.com

Unsubscribe from this type of email.

Meetup Inc. (https://www.meetup.com/), POB 4668 #37895 New York NY USA 10163
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Router Pod stuck at pending

2016-04-08 Thread Mfawa Alfred Onen
Hello Skarbek, I redeployed the router like you mentioned but still got a
pending router pod.

*1. oc get pods*

NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-2-pbvcf   1/1       Running   0          2d
router-2-8uodm            0/1       Pending   0          20s
router-2-deploy           1/1       Running   0          25s

*2. oc describe pod router-2-8uodm*

Name:   router-2-8uodm
Namespace:  openshift
Image(s):   openshift/origin-haproxy-router:v1.1.4
Node:   /
Labels: deployment=router-2,deploymentconfig=router,router=router
Status: Pending
Reason:
Message:
IP:
Controllers:ReplicationController/router-2
Containers:
  router:
Container ID:
Image:  openshift/origin-haproxy-router:v1.1.4
Image ID:
Ports:  80/TCP, 443/TCP, 1936/TCP
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
State:  Waiting
Ready:  False
Restart Count:  0
Liveness:   http-get http://localhost:1936/healthz delay=10s
timeout=1s period=10s #success=1 #failure=3
Readiness:  http-get http://localhost:1936/healthz delay=10s
timeout=1s period=10s #success=1 #failure=3
Environment Variables:

*[Certificate data truncated for clarity]*

  OPENSHIFT_MASTER:
https://master.dev.local:8443
  ROUTER_EXTERNAL_HOST_HOSTNAME:
  ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
  ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
  ROUTER_EXTERNAL_HOST_INSECURE:false
  ROUTER_EXTERNAL_HOST_PARTITION_PATH:
  ROUTER_EXTERNAL_HOST_PASSWORD:
  ROUTER_EXTERNAL_HOST_PRIVKEY:
/etc/secret-volume/router.pem
  ROUTER_EXTERNAL_HOST_USERNAME:
  ROUTER_SERVICE_HTTPS_PORT:443
  ROUTER_SERVICE_HTTP_PORT: 80
  ROUTER_SERVICE_NAME:  router
  ROUTER_SERVICE_NAMESPACE: openshift
  ROUTER_SUBDOMAIN:
  STATS_PASSWORD:   j4RksqDAD6
  STATS_PORT:   1936
  STATS_USERNAME:   admin
Volumes:
  router-token-qk5ot:
Type:   Secret (a secret that should populate this volume)
SecretName: router-token-qk5ot
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
  ---------  --------  -----  ----                  -------------  ----     ------            -------
  29s        29s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
           fit failure on node (master.dev.local): PodFitsPorts
           fit failure on node (node1.dev.local): Region
           fit failure on node (node2.dev.local): MatchNodeSelector

  28s        28s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
           fit failure on node (master.dev.local): PodFitsPorts
           fit failure on node (node1.dev.local): MatchNodeSelector
           fit failure on node (node2.dev.local): MatchNodeSelector

  22s        22s       1      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
           fit failure on node (master.dev.local): PodFitsPorts
           fit failure on node (node1.dev.local): Region
           fit failure on node (node2.dev.local): Region

  26s        14s       2      {default-scheduler }                 Warning  FailedScheduling  pod (router-2-8uodm) failed to fit in any node
           fit failure on node (master.dev.local): PodFitsPorts
           fit failure on node (node1.dev.local): MatchNodeSelector
           fit failure on node (node2.dev.local): Region
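For anyone hitting the same wall: the predicate names in those events explain the Pending state. PodFitsPorts means the node already has the router's host ports (80/443/1936) bound, typically by the still-running previous router pod; Region and MatchNodeSelector mean the node's labels do not satisfy the pod's node selector. A rough local sketch of that evaluation (node data hard-coded from this thread, and the Region predicate collapsed into the same label check, so this is an illustration, not the real scheduler):

```shell
# Mimic the scheduler's per-node verdicts from the events above.
# Inputs are assumptions taken from this thread, not live cluster data.
check() {
  node=$1; region=$2; ports_free=$3
  if [ "$ports_free" = "no" ]; then
    echo "$node: PodFitsPorts"        # host ports 80/443/1936 already bound
  elif [ "$region" != "infra" ]; then
    echo "$node: MatchNodeSelector"   # selector region=infra not satisfied
  else
    echo "$node: fits"
  fi
}
check master.dev.local infra no     # old router pod still holds the ports
check node1.dev.local primary yes
check node2.dev.local primary yes
```

On a live cluster the usual fixes are to free the ports (cancel or scale down the old router deployment) or to label a schedulable node with region=infra.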


Kind Regards!

On Fri, Apr 8, 2016 at 11:27 PM, Skarbek, John <john.skar...@ca.com> wrote:

> I have a feeling that now that you’ve enabled scheduling this ought to
> work. I bet if you ran a deploy, it’ll work now. You’ll need to cancel the
> current running one. So the following commands *might* help out.
>
> oc deploy --cancel dc/router -n default
> oc deploy --latest dc/router -n default
>
>
>
> --
> John Skarbek
>
> On April 8, 2016 at 14:01:06, Mfawa Alfred Onen (muffycomp...@gmail.com)
> wrote:
>
> Hello Tobias, below is the output of the commands you mentioned:
>
> *1. oc get nodes --show-labels*
>
> master.dev.local   Ready 10d
> kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
> node1.dev.local    Ready     10d
> kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc

Re: Router Pod stuck at pending

2016-04-08 Thread Mfawa Alfred Onen
Hello Tobias, below is the output of the commands you mentioned:

*1. oc get nodes --show-labels*

master.dev.local   Ready     10d   kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
node1.dev.local    Ready     10d   kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc
node2.dev.local    Ready     10d   kubernetes.io/hostname=node2.dev.local,region=primary,zone=dhc
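The label listing already shows the mismatch: only the master carries region=infra, which is what the router's selector targets in this setup. A quick, self-contained way to see that from the pasted output (labels hard-coded here; on a live cluster you would pipe `oc get nodes --show-labels` instead):

```shell
# Which nodes carry region=infra? Label data copied from the output above.
labels='master.dev.local kubernetes.io/hostname=master.dev.local,region=infra,router=router,zone=default
node1.dev.local kubernetes.io/hostname=node1.dev.local,region=primary,zone=dhc
node2.dev.local kubernetes.io/hostname=node2.dev.local,region=primary,zone=dhc'
printf '%s\n' "$labels" | awk '/region=infra/ {print $1}'
# -> master.dev.local (the only infra node, and it already holds the router ports)
```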

*2. oc describe dc router*

Name:   router
Created:4 minutes ago
Labels: router=router
Annotations:
Latest Version: 1
Triggers:   Config
Strategy:   Rolling
Template:
  Selector: router=router
  Replicas: 1
  Containers:
  router:
Image:  openshift/origin-haproxy-router:v1.1.4
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
Liveness:   http-get http://localhost:1936/healthz delay=10s timeout=1s
period=10s #success=1 #failure=3
Readiness:  http-get http://localhost:1936/healthz delay=10s timeout=1s
period=10s #success=1 #failure=3
Environment Variables:

 *[Certificate data truncated for clarity]*

  OPENSHIFT_MASTER:
https://master.dev.local:8443
  ROUTER_EXTERNAL_HOST_HOSTNAME:
  ROUTER_EXTERNAL_HOST_HTTPS_VSERVER:
  ROUTER_EXTERNAL_HOST_HTTP_VSERVER:
  ROUTER_EXTERNAL_HOST_INSECURE:false
  ROUTER_EXTERNAL_HOST_PARTITION_PATH:
  ROUTER_EXTERNAL_HOST_PASSWORD:
  ROUTER_EXTERNAL_HOST_PRIVKEY:
/etc/secret-volume/router.pem
  ROUTER_EXTERNAL_HOST_USERNAME:
  ROUTER_SERVICE_HTTPS_PORT:443
  ROUTER_SERVICE_HTTP_PORT: 80
  ROUTER_SERVICE_NAME:  router
  ROUTER_SERVICE_NAMESPACE: openshift
  ROUTER_SUBDOMAIN:
  STATS_PASSWORD:   Lt1ZhBJc8n
  STATS_PORT:   1936
  STATS_USERNAME:   admin
Deployment #1 (latest):
Name:   router-1
Created:4 minutes ago
Status: Running
Replicas:   1 current / 1 desired
Selector:
deployment=router-1,deploymentconfig=router,router=router
Labels:
openshift.io/deployment-config.name=router,router=router
Pods Status:0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen  LastSeen  Count  From                            SubobjectPath  Type     Reason             Message
  ---------  --------  -----  ----                            -------------  ----     ------             -------
  4m         4m        1      {deploymentconfig-controller }                 Normal   DeploymentCreated  Created new deployment "router-1" for version 1
  4m         4m        1      {deployer }                                    Warning  FailedUpdate       Error updating deployment openshift/router-1 status to Pending


On Thu, Apr 7, 2016 at 12:50 PM, Skarbek, John <john.skar...@ca.com> wrote:

> Hello,
>
> I ponder if there’s an issue with the labels being utilized by the nodes
> and the pods. Can you run the following command: oc get nodes
> --show-labels
>
> And then an: oc describe dc router
>
>
>
> --
> John Skarbek
>
> On April 7, 2016 at 04:26:37, Mfawa Alfred Onen (muffycomp...@gmail.com)
> wrote:
>
> So I enabled scheduling as you pointed out but still no luck:
>
> *oc get nodes*
>
> NAME               STATUS    AGE
> master.dev.local   Ready     8d
> node1.dev.local    Ready     8d
> node2.dev.local    Ready     8d
>
> *oc get pods*
>
> docker-registry-2-pbvcf   1/1       Running   0          10h
> router-1-bk55a            0/1       Pending   0          1s
> router-1-deploy           1/1       Running   0          4s
>
> *oc describe pod router-1-bk55a*
>
> 
> Events:
>   FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
>   ---------  --------  -----  ----                  -------------  ----     ------            -------
>   1m         1m        1      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>            fit failure on node (master.dev.local): PodFitsPorts
>            fit failure on node (node1.dev.local): Region
>            fit failure on node (node2.dev.local): MatchNodeSelector
>
>   1m         1m        1      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>            fit failure on node (node2.dev.local): MatchNodeSelector
>            fit failure on node (master.dev.local): PodFitsPorts
>            fit failure on node (node1.dev.local): MatchNodeSelector
>
>   1m         1m        2      {default-scheduler }                 Warning  FailedScheduling  pod (router-1-bk55a) failed to fit in any node
>            fit failure on node (master.dev.loc

Re: Error Starting Origin Node

2016-04-01 Thread Mfawa Alfred Onen
Hello Jason, thanks for the swift response:

*What does your inventory file look like?*

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
nfs

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
debug_level=2
deployment_type=origin


# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
'filename': '/etc/origin/master/htpasswd'}]

# default subdomain to use for exposed routes
osm_default_subdomain=app.maomuffy.lab

# default project node selector
osm_default_node_selector='region=primary'

# default selectors for router and registry services
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'

## Registry Storage Options
##
## Storage Kind
openshift_hosted_registry_storage_kind=nfs
##
## Storage Host
openshift_hosted_registry_storage_host=registry.maomuffy.lab
##
## NFS Export Options
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
##
## NFS Export Directory
openshift_hosted_registry_storage_nfs_directory=/exports
##
## Registry Volume Name
openshift_hosted_registry_storage_volume_name=registry
##
## Persistent Volume Access Mode
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']

openshift_override_hostname_check=true

# host group for masters
[masters]
master.maomuffy.lab openshift_ip=1.1.1.171

[nfs]
registry.maomuffy.lab

[nodes]
master.maomuffy.lab openshift_node_labels="{'region': 'infra', 'zone':
'default'}" openshift_ip=1.1.1.171 openshift_schedulable=False
node1.maomuffy.lab openshift_node_labels="{'region': 'primary', 'zone':
'north'}" openshift_ip=1.1.1.172


After running the installer again, changing only *debug_level=2* to
*debug_level=5*, it worked. I had battled with this error for two days, and
for some odd reason it now works; I am not sure what actually fixed it.

Thanks again for your help!


On Fri, Apr 1, 2016 at 1:53 PM, Jason DeTiberus <jdeti...@redhat.com> wrote:

> What does your inventory file look like?
>
> How about the output of the journal logs for origin-master?
>
> Is this a cloud deployment (AWS, GCE, OpenStack)? If so, are you
> configuring the cloud provider integration?
> On Apr 1, 2016 8:18 AM, "Mfawa Alfred Onen" <muffycomp...@gmail.com>
> wrote:
>
>> I wanted to setup a small lab consisting of 1 Master, 1 Node, 1 NFS
>> storage Node for the Registry but got the following error during the
>> ansible playbook run. I am using the openshift-ansible installer (for
>> advanced installation) from
>> https://github.com/openshift/openshift-ansible
>>
>> *1. Ansible Playbook Error*
>>
>> TASK: [openshift_node | Start and enable node]
>> 
>> failed: [master.maomuffy.lab] => {"failed": true}
>> msg: Job for origin-node.service failed because the control process
>> exited with error code. See "systemctl status origin-node.service" and
>> "journalctl -xe" for details.
>>
>> failed: [node1.maomuffy.lab] => {"failed": true}
>> msg: Job for origin-node.service failed because the control process
>> exited with error code. See "systemctl status origin-node.service" and
>> "journalctl -xe" for details.
>>
>>
>> FATAL: all hosts have already failed -- aborting
>>
>> PLAY RECAP
>> 
>>to retry, use: --limit @/root/config.retry
>>
>> localhost  : ok=22   changed=0unreachable=0
>>  failed=0
>> master.maomuffy.lab: ok=295  changed=2unreachable=0
>>  failed=1
>> node1.maomuffy.lab : ok=72   changed=1unreachable=0
>>  failed=1
>> registry.maomuffy.lab  : ok=35   changed=0unreachable=0
>>  failed=0
>>
>>
>> *2. Result of "systemctl status origin-node.service -l"*
>>
>> origin-node.service - Origin Node
>>Loaded: loaded (/usr/lib/systemd/system/origin-node.service; enabled;
>> vendor preset: disabled)
>>   Drop-In: /usr/lib/systemd/system/origin-node.service.d
>>            └─openshift-sdn-ovs.conf
>>Active: activating (start) since Fri 2016-04-01 15:08:50 WAT; 28s ago
>>  Docs: https://github.com/openshift/origin
>>  Main PID: 22983 (openshift)
>>CGroup: /system.slice/origin-node.service
>>            └─22983 /usr/bin/openshift start node
>> --config=/etc/origin/node/node-config.yaml --loglevel=2
>>
>> Apr 01 15:09:14 master.maomuffy.lab origin-node[22983]: W0401
>> 15:09:14.509989   22983 subnets.go:150] Co

Error Starting Origin Node

2016-04-01 Thread Mfawa Alfred Onen
 startup failed: F
Apr 01 15:10:25 master.maomuffy.lab systemd[1]: origin-node.service: main
process exited, code=exited, status=255/n/a
Apr 01 15:10:25 master.maomuffy.lab systemd[1]: Failed to start Origin Node.
-- Subject: Unit origin-node.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit origin-node.service has failed.
--
-- The result is failed.
Apr 01 15:10:25 master.maomuffy.lab systemd[1]: Unit origin-node.service
entered failed state.
Apr 01 15:10:25 master.maomuffy.lab systemd[1]: origin-node.service failed.
Apr 01 15:10:26 master.maomuffy.lab systemd[1]: origin-node.service holdoff
time over, scheduling restart.
Apr 01 15:10:26 master.maomuffy.lab systemd[1]: Starting Origin Node...
-- Subject: Unit origin-node.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit origin-node.service has begun starting up.

*4. Checking /var/log/messages*

Apr  1 15:11:30 master origin-node: I0401 15:11:30.804637   23092
manager.go:172] Version: {KernelVersion:3.10.0-327.10.1.el7.x86_64
ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.9.1
CadvisorVersion: CadvisorRevision:}
Apr  1 15:11:30 master origin-node: I0401 15:11:30.805420   23092
server.go:320] Using root directory: /var/lib/origin/openshift.local.volumes
Apr  1 15:11:30 master origin-node: I0401 15:11:30.805646   23092
server.go:654] Watching apiserver
Apr  1 15:11:30 master origin-node: I0401 15:11:30.823906   23092
plugins.go:122] Loaded network plugin "redhat/openshift-ovs-subnet"
Apr  1 15:11:30 master origin-node: I0401 15:11:30.823960   23092
kubelet.go:370] Hairpin mode set to true
Apr  1 15:11:31 master origin-node: W0401 15:11:31.527611   23092
subnets.go:150] Could not find an allocated subnet for node:
master.maomuffy.lab, Waiting...
Apr  1 15:11:31 master origin-node: I0401 15:11:31.658479   23092
manager.go:196] Setting dockerRoot to /var/lib/docker
Apr  1 15:11:31 master origin-node: I0401 15:11:31.658561   23092
plugins.go:56] Registering credential provider: .dockercfg
Apr  1 15:11:32 master origin-node: W0401 15:11:32.050089   23092
subnets.go:150] Could not find an allocated subnet for node:
master.maomuffy.lab, Waiting...
Apr  1 15:11:32 master origin-node: W0401 15:11:32.555947   23092
subnets.go:150] Could not find an allocated subnet for node:
master.maomuffy.lab, Waiting...
Apr  1 15:11:33 master origin-node: W0401 15:11:33.066090   23092
subnets.go:150] Could not find an allocated subnet for node:
master.maomuffy.lab, Waiting...
Apr  1 15:11:33 master origin-node: W0401 15:11:33.572013   23092
subnets.go:150] Could not find an allocated subnet for node:
master.maomuffy.lab, Waiting...
Apr  1 15:11:33 master origin-node: F0401 15:11:33.148075   23072
node.go:258] error: SDN node startup failed: Failed to start plugin: Failed
to get subnet for this host: master.maomuffy.lab, error: hostsubnets
"master.maomuffy.lab" not found

What could I have done wrong considering I used the documentation here:
https://docs.openshift.org/latest/install_config/install/advanced_install.html#single-master

Kind Regards!

-- 
*Mfawa Alfred Onen*
System Administrator / GDG Lead, Bingham University
Department of Computer Science,
Bingham University.

E-Mail: muffycomp...@gmail.com
Phone1: +234 805 944 3154
Phone2: +234 803 079 6088
Twitter: @muffycompo <https://twitter.com/muffycompo>
Google+: https://plus.google.com/+MfawaAlfredOnen