Jason,
   Hi, I meant to thank you at the end of June for helping get the OOM Kubernetes seed running with the rest of us – including adding the service account in create_namespace() – much appreciated!

   I agree with all the comments below as well – we should definitely implement them all. However, since the code is still in seed mode, the goal is to get ONAP running on containers as easily as possible so we can run/develop locally – for the overly enthusiastic ONAP engineers among us. When we get the final POC fully functional, we should refactor.

   Ideally we would address all the issues and make the system as decoupled/generic/pluggable as possible. The current reality is that, to reach the goal of closed-loop running in the vFirewall, certain workarounds are temporarily required so we can work with what we have (some are relics of reverse-engineering the config and should go away).

   I agree Rancher is a temporary solution – getting it configured took a while, and it is much simpler than bare k8s for now, but it has several bugs causing issues.
   I agree that the docker secret config should not be required – but at the time I did not have access to modify the nexus3 security config. That is likely the way to go, so we should raise a DevOps ticket. When OOM-3<https://jira.onap.org/browse/OOM-3> is merged we won’t need a workaround; the current config is the only one that works for all adopters.

    Everyone is welcome to contribute to the ongoing config page, using their own live system to verify, at
https://wiki.onap.org/display/DW/ONAP+on+Kubernetes

Setup is currently 2 parts and will likely change:

- Overcloud (Rancher), k8s client/host, clone oom

- Run the config pod, then run the all-onap script (the config pod create should be added to ./createAll.bash so we have a single script)
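A hypothetical sketch of the single-script flow the note above envisions, with the config pod create folded in front of the one-click create (paths match the commands shown later in this thread; the wrapper itself is illustrative and the cluster commands are commented since they need a live k8s host):

```shell
# Illustrative single-script flow (hypothetical; needs a live cluster):
# cd oom/kubernetes/config && kubectl create -f pod-config-init.yaml
# cd ../oneclick && ./createAll.bash -n onap
```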





/michael


From: [email protected] 
[mailto:[email protected]] On Behalf Of Borislav Glozman
Sent: Thursday, August 3, 2017 06:50
To: Jason Hunt <[email protected]>; Mandeep Khinda <[email protected]>
Cc: [email protected]; [email protected]
Subject: Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over 
Rancher

Hi,

The implementation of secrets under review allows passwords to be provided as environment variables.
Default passwords are used in the secrets when the env vars are not set.
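As a sketch of that pattern (the variable name ONAP_NEXUS_PASS and the fallback value are illustrative, not taken from the review):

```shell
#!/bin/sh
# Take the password from an environment variable, falling back to a default
# when it is unset -- the behaviour described for the secrets change.
# ONAP_NEXUS_PASS is a hypothetical variable name.
NEXUS_PASS="${ONAP_NEXUS_PASS:-docker}"

# The value would then feed the secret creation (needs a live cluster):
# kubectl create secret docker-registry regsecret \
#   --docker-server=nexus3.onap.org:10001 \
#   --docker-username=docker --docker-password="$NEXUS_PASS"
echo "password in use: $NEXUS_PASS"
```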

I don’t think the ONAP deployment should be coupled to Rancher.
With Rancher we can do without secrets, but on other k8s deployments we need them.

Thanks,
Borislav Glozman
O:+972.9.776.1988
M:+972.52.2835726

From: [email protected] [mailto:[email protected]] On Behalf Of Jason Hunt
Sent: Thursday, August 3, 2017 1:22 AM
To: Mandeep Khinda <[email protected]>
Cc: [email protected]; [email protected]
Subject: Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over 
Rancher

A couple of thoughts on this, as I've had some of the same issues outlined 
below.

I think it's very important that OOM provide as simple of an installation 
process as possible -- ideally just running one script (as was initially 
envisioned).  Every additional step increases friction and reduces adoption by 
the developer community.

Of course, this is not to say that we should embed passwords into scripts in 
the git repository (see the other thread in the mailing list).  Perhaps we 
should ask why we have a docker registry with a password that is widely 
published.  But if we do require a password, perhaps the installation script 
can accept the uid/password as a parameter to the script.
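A minimal sketch of that idea (the -u/-p flag names, the parse_creds helper, and the defaults are hypothetical, not from createAll.bash):

```shell
#!/bin/sh
# Hypothetical: let the install script accept registry credentials as
# parameters instead of hard-coding them in the repository.
parse_creds() {
  REG_USER="docker"   # fall back to the published defaults
  REG_PASS="docker"
  OPTIND=1
  while getopts "u:p:" opt; do
    case "$opt" in
      u) REG_USER="$OPTARG" ;;
      p) REG_PASS="$OPTARG" ;;
    esac
  done
}
parse_creds "$@"
# create_namespace() would then pass "$REG_USER"/"$REG_PASS" to the
# "kubectl create secret docker-registry" call instead of literal values.
```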


Regards,
Jason Hunt
Executive Software Architect, IBM

Phone: 314-749-7422
Email: [email protected]



From:        Mandeep Khinda <[email protected]>
To:        Michael O'Brien <[email protected]>, Viswa KSP <[email protected]>
Cc:        "[email protected]" <[email protected]>
Date:        08/02/2017 10:42 AM
Subject:        Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over Rancher
Sent by:        [email protected]
________________________________



We overcame docker image pull problems by configuring our docker daemons to use the “--insecure-registry” flag. You then have to docker login to each registry that requires user/pass and copy the generated credential file into a location that kubelet uses.
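A rough reconstruction of those steps (the paths are common Ubuntu defaults and are assumptions; the commands need a live docker/kubelet host, so they are shown commented):

```shell
# 1. Allow the registry without TLS verification:
#      dockerd --insecure-registry nexus3.onap.org:10001
#    (or list it under "insecure-registries" in /etc/docker/daemon.json)
# 2. Log in once so docker writes credentials to ~/.docker/config.json:
#      docker login -u docker -p docker nexus3.onap.org:10001
# 3. Copy the generated credential file into a path kubelet checks, e.g.:
#      cp ~/.docker/config.json /var/lib/kubelet/config.json
```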

This is why the current seed yamls did not need secrets or pre-pulled docker images, and why I am reluctant to +2 the docker secret changes currently under review.
I’ll try to dig up the steps I used and update the wiki. Hopefully that will unblock you, Viswa.

Mandeep

From: [email protected] [mailto:[email protected]] On Behalf Of Michael O'Brien
Sent: Wednesday, August 02, 2017 11:24 AM
To: Viswa KSP <[email protected]>
Cc: [email protected]
Subject: Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over 
Rancher

Viswa,
    Missed that you have config-init in your mail – so that is covered OK.
    I stood up robot and aai on a clean VM to verify the docs (this is inside an OpenStack lab behind a firewall that has its own proxy).
    Let me know if you have docker proxy issues – I would also like to reference/document this for others like yourselves.
    Also, just verify that you added the security workaround step (or make your docker daemon trust all registries):

vi oom/kubernetes/oneclick/createAll.bash
create_namespace() {
  kubectl create namespace $1-$2
+  kubectl --namespace $1-$2 create secret docker-registry regsecret \
+    --docker-server=nexus3.onap.org:10001 --docker-username=docker \
+    --docker-password=docker [email protected]
+  kubectl --namespace $1-$2 patch serviceaccount default -p \
+    '{"imagePullSecrets": [{"name": "regsecret"}]}'
}

    I just happen to be standing up a clean deployment on OpenStack – let's try just robot; it pulls the testsuite image:

root@obrienk-1:/home/ubuntu/oom/kubernetes/config# kubectl create -f pod-config-init.yaml
pod "config-init" created
root@obrienk-1:/home/ubuntu/oom/kubernetes/config# cd ../oneclick/
root@obrienk-1:/home/ubuntu/oom/kubernetes/oneclick# ./createAll.bash -n onap -a robot
********** Creating up ONAP: robot
Creating namespaces **********
namespace "onap-robot" created
secret "regsecret" created
serviceaccount "default" patched
Creating services **********
service "robot" created
********** Creating deployments for  robot **********
Robot....
deployment "robot" created
**** Done ****
root@obrienk-1:/home/ubuntu/oom/kubernetes/oneclick# kubectl get pods --all-namespaces | grep onap
onap-robot    robot-4262359493-k84b6                 1/1       Running   0          4m

One nexus3 ONAP-specific image is downloaded for robot:

root@obrienk-1:/home/ubuntu/oom/kubernetes/oneclick# docker images
REPOSITORY                                             TAG                  IMAGE ID            CREATED             SIZE
nexus3.onap.org:10001/openecomp/testsuite              1.0-STAGING-latest   3a476b4fe0d8        2 hours ago         1.16 GB

Next, try one with 3 ONAP images, like aai, which uses:
nexus3.onap.org:10001/openecomp/ajsc-aai               1.0-STAGING-latest   c45b3a0ca00f        2 hours ago         1.352 GB
aaidocker/aai-hbase-1.2.3                              latest               aba535a6f8b5        7 months ago        1.562 GB

hbase is first:
root@obrienk-1:/home/ubuntu/oom/kubernetes/oneclick# ./createAll.bash -n onap -a aai
root@obrienk-1:/home/ubuntu/oom/kubernetes/oneclick# kubectl get pods --all-namespaces | grep onap
onap-aai      aai-service-3351257372-tlq93            0/1       PodInitializing   0          2m
onap-aai      hbase-1381435241-ld56l                  1/1       Running           0          2m
onap-aai      model-loader-service-2816942467-kh1n0   0/1       Init:0/2          0          2m
onap-robot    robot-4262359493-k84b6                  1/1       Running           0          8m

aai-service is next:
root@obrienk-1:/home/ubuntu/oom/kubernetes/oneclick# kubectl get pods --all-namespaces | grep onap
onap-aai      aai-service-3351257372-tlq93            1/1       Running    0          5m
onap-aai      hbase-1381435241-ld56l                  1/1       Running    0          5m
onap-aai      model-loader-service-2816942467-kh1n0   0/1       Init:1/2   0          5m
onap-robot    robot-4262359493-k84b6                  1/1       Running    0          11m

model-loader-service takes the longest


/michael
From: [email protected] [mailto:[email protected]] On Behalf Of Michael O'Brien
Sent: Wednesday, August 2, 2017 10:52
To: Viswa KSP <[email protected]>
Cc: [email protected]
Subject: Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over 
Rancher

Viswa,
   Hi, the search line limits turn out to be a red herring – this is a non-fatal bug in Rancher you can ignore; they put too many entries in the search domain list (I'll update the wiki).
   Verify you have a stopped config-init pod – it won't show in the get pods command; go to the Rancher or Kubernetes GUI.
   The fact that robot is not working for you suggests you may be having issues pulling images from docker – verify your image list and that docker can pull from nexus3.

  >docker images
   should show some nexus3.onap.org images if the pulls were OK.

  If not, try an image pull to verify this:

ubuntu@obrienk-1:~/oom/kubernetes/config$ docker login -u docker -p docker nexus3.onap.org:10001
Login Succeeded
ubuntu@obrienk-1:~/oom/kubernetes/config$ docker pull nexus3.onap.org:10001/openecomp/mso:1.0-STAGING-latest
1.0-STAGING-latest: Pulling from openecomp/mso

23a6960fe4a9: Extracting [===================================>               ] 32.93 MB/45.89 MB
e9e104b0e69d: Download complete
e9e104b0e69d: Download complete

If so, then you need to set the docker proxy – or run outside the firewall like I do.

Also, to be able to run the 34 pods (without DCAE) you will need 37 GB+ of RAM (33 for ONAP + 4 for rancher/k8s + some for the OS) – also plan for over 100 GB of disk space.

   /michael


From: Viswa KSP [mailto:[email protected]]
Sent: Wednesday, August 2, 2017 09:57
To: Michael O'Brien <[email protected]>
Cc: [email protected]
Subject: Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over 
Rancher

Hi Michael,

Thanks for your response. I tried again, creating the init-container pod as suggested, and also on a 32 GB machine as Victor suggested.
But the environment is still not up for me. Only 5 pods are running successfully.

kspviswa@onap-oom:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces | grep onap
onap-aai              aai-service-3351257372-8htfv            0/1       ImagePullBackOff   0          36m
onap-aai              hbase-1381435241-xqnrq                  1/1       Running            0          36m
onap-aai              model-loader-service-2816942467-x8581   0/1       Init:0/2           1          36m
onap-appc             appc-3536695022-tf21s                   0/1       ImagePullBackOff   0          36m
onap-appc             appc-dbhost-2406136630-5ph55            1/1       Running            0          36m
onap-appc             appc-dgbuilder-4185623067-142kt         0/1       Init:Error         0          36m
onap-message-router   dmaap-1601351982-638d7                  0/1       Init:0/1           1          36m
onap-message-router   global-kafka-1879814407-kpxgk           0/1       CrashLoopBackOff   9          36m
onap-message-router   zookeeper-1324948063-sdn0n              1/1       Running            0          36m
onap-mso              mariadb-756296431-890d0                 0/1       ImagePullBackOff   0          36m
onap-mso              mso-1648770403-3ntv6                    0/1       Init:0/1           1          36m
onap-policy           brmsgw-2416784739-499ht                 0/1       Init:Error         0          36m
onap-policy           drools-3970961907-mnpwk                 0/1       Init:Error         0          36m
onap-policy           mariadb-1486010673-pzjlq                0/1       ErrImagePull       0          36m
onap-policy           nexus-1796544898-84c4c                  0/1       Init:Error         0          36m
onap-policy           pap-2037709173-9f2n2                    0/1       Init:0/2           1          36m
onap-policy           pdp-2518504778-cvjt4                    0/1       Init:Error         0          36m
onap-policy           pypdp-2081973022-h4qsf                  0/1       Init:Error         0          36m
onap-portal           portalapps-1161448524-hgrlq             0/1       Init:Error         0          36m
onap-portal           portaldb-567766617-nd4sr                0/1       ErrImagePull       0          36m
onap-portal           vnc-portal-647462255-4d2l5              0/1       Init:0/5           1          36m
onap-robot            robot-4262359493-d8dgl                  0/1       ErrImagePull       0          36m
onap-sdc              sdc-be-4008639732-k29hg                 0/1       Init:0/2           1          36m
onap-sdc              sdc-cs-2600616558-j9b2f                 0/1       Init:Error         1          36m
onap-sdc              sdc-es-2988521946-1wznh                 0/1       ImagePullBackOff   0          36m
onap-sdc              sdc-fe-2781606410-tvjbz                 0/1       Init:0/1           1          36m
onap-sdc              sdc-kb-220921189-dntw4                  0/1       Init:0/1           1          36m
onap-sdnc             sdnc-2444767113-v2nd5                   0/1       ErrImagePull       0          36m
onap-sdnc             sdnc-dbhost-3342709633-2k0sr            1/1       Running            0          36m
onap-sdnc             sdnc-dgbuilder-2456201986-b5f65         0/1       Init:0/1           1          36m
onap-sdnc             sdnc-portal-3781432694-9l0f6            0/1       Init:0/1           1          36m
onap-vid              vid-mariadb-344946511-dj3b6             1/1       Running            0          36m
onap-vid              vid-server-3905085274-g5bh9             0/1       ErrImagePull       0          36m


I also see the warning below on every pod.

 Search Line limits were exceeded, some dns names have been omitted, the 
applied search line is: onap-portal.svc.cluster.local svc.cluster.local 
cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal 
rancher.internal
Error syncing pod


BTW, I had edited /etc/resolv.conf as per the suggestion in the wiki, but I am still facing the issues.
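For reference, one shape the resolv.conf workaround can take (the trim_search helper and the keep-3 choice are illustrative; the point is to leave headroom under the resolver's 6-domain search limit once kubelet prepends the cluster suffixes):

```shell
#!/bin/sh
# Hypothetical helper: keep only the first few "search" domains so that, once
# kubelet prepends the cluster suffixes seen in the warning above, the
# resolver's search-domain limit is not exceeded.
trim_search() {
  # print the keyword plus at most 3 domains from a resolv.conf search line
  echo "$1" | awk '{ printf "%s", $1
                     for (i = 2; i <= 4 && i <= NF; i++) printf " %s", $i
                     print "" }'
}
# Usage against the live file (not run here):
# trim_search "$(grep '^search' /etc/resolv.conf)"
```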

Please let me know if I had missed anything.

BR,
Viswa


On Mon, Jul 31, 2017 at 10:25 PM, Michael O'Brien <[email protected]> wrote:
Viswa,
   Sorry for the late reply – a fellow ONAP teammate alerted me to your message 
we missed – however we would not have had an answer on these NFS failures until 
12 days later anyway.

   I would expect you did not run the init-config container first – this is ok 
I had this issue as well until the 16th (The failure pods – match exactly the 
ones I previously saw before 16 July) – I only figured out I was missing that 
step then and added it to the wiki on 16 July.
root@obriensystemsucont0:~/onap/oom/kubernetes/config# kubectl create -f pod-config-init.yaml

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-QuickstartInstallation
https://jira.onap.org/browse/OOM-1

    Try bringing up the config-init container first – then all your other pods will be able to see the mount you created on your system at /dockerdata-nfs/onapdemo.
    This will fix 33 of 34 containers (there are still 11 more containers pending for DCAE in the queue). Only the drools container under onap-policy still has timeout issues pulling artifacts.

     Thank you
     /michael

From: [email protected] [mailto:[email protected]] On Behalf Of Viswa KSP
Sent: Tuesday, July 4, 2017 11:12
To: [email protected]
Subject: [onap-discuss] [OOM] Using OOM kubernetes based seed code over Rancher

Dear OOM Team,

I tried to bring up the OOM seed code (https://gerrit.onap.org/r/gitweb?p=oom.git) using Rancher as per the wiki (https://wiki.onap.org/display/DW/ONAP+on+Kubernetes).
However, certain pods in several ONAP namespaces didn't come up well. All failed pods show a similar error pattern.

Following are the current statuses of all pods:

Namespace             POD                    Status
onap-aai              aai-service            Up
                      hbase                  Up
                      model-loader-service   Down
onap-appc             appc                   Up
                      appc-dbhost            Up
                      appc-dgbuilder         Up
onap-message-router   dmaap                  Up
                      global-kafka           Up
                      zookeeper              Up
onap-mso              mso                    Up
                      mariadb                Up
onap-policy           brmsgw                 Down
                      drools                 Down
                      mariadb                Up
                      nexus                  Up
                      Pap                    Up
                      Pdp                    Down
                      Pypdp                  Down
onap-portal           portalapps             Down
                      portaldb               Up
                      vnc-portal             Down
onap-robot            robot                  Up
onap-sdc              sdc-be                 Down
                      sdc-cs                 Up
                      sdc-es                 Up
                      sdc-fe                 Down
                      sdc-kb                 Down
onap-sdnc             sdnc                   Up
                      sdnc-dbhost            Up
                      sdnc-dgbuilder         Up
                      sdnc-portal            Up
onap-vid              vid-mariadb            Up
                      vid-server             Down



Pods in the failed state show the error below:

Error syncing pod, skipping: failed to "InitContainer" for 
"vid-server-readiness" with RunInitContainerError: "init container 
\"vid-server-readiness\" exited with 1"

Any help would be highly appreciated.

PS: I had earlier asked a subset of this question as a follow-up in another thread, then realized this discussion would help future developers, so I am starting this new thread with an appropriate email subject for easy search & retrieval.

BR,
Viswa
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at https://www.amdocs.com/about/email-disclaimer

_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss

