Visha,
   Hi, the "Search Line limits" warning turns out to be a red herring – it is a 
non-fatal bug in Rancher you can ignore – Rancher puts too many entries in the 
search domain list (I'll update the wiki).
   Verify you have a stopped config-init pod – it won't show in the get pods 
output – go to the Rancher or Kubernetes GUI instead.
   The fact that your robot pod is not working suggests you are having issues 
pulling images from Docker – verify your image list and that Docker can pull 
from nexus3:

  >docker images
   This should show some nexus3.onap.org images if the pulls were OK.

  If not, try an explicit image pull to verify connectivity:

ubuntu@obrienk-1:~/oom/kubernetes/config$ docker login -u docker -p docker nexus3.onap.org:10001
Login Succeeded
ubuntu@obrienk-1:~/oom/kubernetes/config$ docker pull nexus3.onap.org:10001/openecomp/mso:1.0-STAGING-latest
1.0-STAGING-latest: Pulling from openecomp/mso

23a6960fe4a9: Extracting [===================================>               ] 32.93 MB/45.89 MB
e9e104b0e69d: Download complete

If the pull fails, you need to set the Docker proxy – or run outside the 
firewall like I do.
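For the proxy case, a common approach is a systemd drop-in for the Docker daemon. This is a sketch only, assuming a standard systemd-managed Docker install – the drop-in path is conventional, but the proxy host and port below are placeholders you must replace with your own:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# proxy.example.com:8080 is a hypothetical proxy - substitute your own
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
```

Then reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker.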

Also, to be able to run the 34 pods (without DCAE) you will need 37 GB+ of RAM 
(33 for the pods + 4 for Rancher/k8s + some for the OS) – also plan for over 
100 GB of disk space.

   /michael


From: Viswa KSP [mailto:[email protected]]
Sent: Wednesday, August 2, 2017 09:57
To: Michael O'Brien <[email protected]>
Cc: [email protected]
Subject: Re: [onap-discuss] [OOM] Using OOM kubernetes based seed code over 
Rancher

Hi Michael,

Thanks for your response. I tried again by creating the InitContainer pod as 
suggested, this time on a 32 GB machine as suggested by Victor.
But the environment is still not up for me. Only 5 pods are running 
successfully.

kspviswa@onap-oom:~/oom/kubernetes/oneclick$ kubectl get pods --all-namespaces | grep onap
onap-aai              aai-service-3351257372-8htfv            0/1       ImagePullBackOff   0          36m
onap-aai              hbase-1381435241-xqnrq                  1/1       Running            0          36m
onap-aai              model-loader-service-2816942467-x8581   0/1       Init:0/2           1          36m
onap-appc             appc-3536695022-tf21s                   0/1       ImagePullBackOff   0          36m
onap-appc             appc-dbhost-2406136630-5ph55            1/1       Running            0          36m
onap-appc             appc-dgbuilder-4185623067-142kt         0/1       Init:Error         0          36m
onap-message-router   dmaap-1601351982-638d7                  0/1       Init:0/1           1          36m
onap-message-router   global-kafka-1879814407-kpxgk           0/1       CrashLoopBackOff   9          36m
onap-message-router   zookeeper-1324948063-sdn0n              1/1       Running            0          36m
onap-mso              mariadb-756296431-890d0                 0/1       ImagePullBackOff   0          36m
onap-mso              mso-1648770403-3ntv6                    0/1       Init:0/1           1          36m
onap-policy           brmsgw-2416784739-499ht                 0/1       Init:Error         0          36m
onap-policy           drools-3970961907-mnpwk                 0/1       Init:Error         0          36m
onap-policy           mariadb-1486010673-pzjlq                0/1       ErrImagePull       0          36m
onap-policy           nexus-1796544898-84c4c                  0/1       Init:Error         0          36m
onap-policy           pap-2037709173-9f2n2                    0/1       Init:0/2           1          36m
onap-policy           pdp-2518504778-cvjt4                    0/1       Init:Error         0          36m
onap-policy           pypdp-2081973022-h4qsf                  0/1       Init:Error         0          36m
onap-portal           portalapps-1161448524-hgrlq             0/1       Init:Error         0          36m
onap-portal           portaldb-567766617-nd4sr                0/1       ErrImagePull       0          36m
onap-portal           vnc-portal-647462255-4d2l5              0/1       Init:0/5           1          36m
onap-robot            robot-4262359493-d8dgl                  0/1       ErrImagePull       0          36m
onap-sdc              sdc-be-4008639732-k29hg                 0/1       Init:0/2           1          36m
onap-sdc              sdc-cs-2600616558-j9b2f                 0/1       Init:Error         1          36m
onap-sdc              sdc-es-2988521946-1wznh                 0/1       ImagePullBackOff   0          36m
onap-sdc              sdc-fe-2781606410-tvjbz                 0/1       Init:0/1           1          36m
onap-sdc              sdc-kb-220921189-dntw4                  0/1       Init:0/1           1          36m
onap-sdnc             sdnc-2444767113-v2nd5                   0/1       ErrImagePull       0          36m
onap-sdnc             sdnc-dbhost-3342709633-2k0sr            1/1       Running            0          36m
onap-sdnc             sdnc-dgbuilder-2456201986-b5f65         0/1       Init:0/1           1          36m
onap-sdnc             sdnc-portal-3781432694-9l0f6            0/1       Init:0/1           1          36m
onap-vid              vid-mariadb-344946511-dj3b6             1/1       Running            0          36m
onap-vid              vid-server-3905085274-g5bh9             0/1       ErrImagePull       0          36m
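With this many pods, a quick tally by status makes the failure pattern easier to see. A minimal sketch using awk over a saved listing – the here-doc below embeds three sample rows in the same column layout as the kubectl output above, so the sketch runs standalone; on a live system you would pipe kubectl get pods --all-namespaces into it instead:

```shell
# Count pods per STATUS (4th column of `kubectl get pods --all-namespaces`).
# The here-doc holds sample rows so this sketch needs no cluster access.
awk '{count[$4]++} END {for (s in count) print s, count[s]}' <<'EOF'
onap-aai    aai-service-3351257372-8htfv   0/1   ImagePullBackOff   0   36m
onap-aai    hbase-1381435241-xqnrq         1/1   Running            0   36m
onap-robot  robot-4262359493-d8dgl         0/1   ErrImagePull       0   36m
EOF
```

This prints one line per status, e.g. "Running 1" (ordering is awk-implementation dependent).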


I also see the below warning on every pod:

 Search Line limits were exceeded, some dns names have been omitted, the 
applied search line is: onap-portal.svc.cluster.local svc.cluster.local 
cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal 
rancher.internal
Error syncing pod


BTW, I had edited /etc/resolv.conf as per the suggestion in the wiki, but I am 
still facing the issues.

Please let me know if I have missed anything.

BR,
Viswa


On Mon, Jul 31, 2017 at 10:25 PM, Michael O'Brien <[email protected]> wrote:
Viswa,
   Sorry for the late reply – a fellow ONAP teammate alerted me to your message, 
which we missed – however, we would not have had an answer on these NFS 
failures until 12 days later anyway.

   I expect you did not run the config-init container first – this is OK, I had 
the same issue until the 16th (your failing pods match exactly the ones I saw 
before 16 July) – I only figured out I was missing that step then, and added it 
to the wiki on 16 July.
root@obriensystemsucont0:~/onap/oom/kubernetes/config# kubectl create -f pod-config-init.yaml

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-QuickstartInstallation
https://jira.onap.org/browse/OOM-1

    Try bringing up the config-init container first – then all your other pods 
will be able to see the mount you created on your system at 
/dockerdata-nfs/onapdemo.
    This will fix 33 of the 34 containers (there are still 11 more containers 
pending for DCAE in the queue). Only the drools container under onap-policy is 
still having timeout issues pulling artifacts.
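Before relaunching the other pods, it can help to confirm that the shared directory actually got populated. A small pre-flight sketch, assuming the /dockerdata-nfs/onapdemo path mentioned above (on a machine where config-init has not run, it simply reports the directory as missing):

```shell
# Check that the config-init container populated the shared host directory
# before the other pods try to mount it.
dir=/dockerdata-nfs/onapdemo
if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
  echo "config mount populated: $dir"
else
  echo "config mount missing or empty: $dir - run pod-config-init.yaml first"
fi
```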

     Thank you
     /michael

From: [email protected] [mailto:[email protected]] On Behalf Of Viswa KSP
Sent: Tuesday, July 4, 2017 11:12
To: [email protected]
Subject: [onap-discuss] [OOM] Using OOM kubernetes based seed code over Rancher

Dear OOM Team,

I tried to bring up OOM seed code ( https://gerrit.onap.org/r/gitweb?p=oom.git 
) using Rancher as per wiki ( 
https://wiki.onap.org/display/DW/ONAP+on+Kubernetes ).
However, certain pods in several ONAP namespaces didn't come up. All failed 
pods show a similar error pattern.

Following is the status of all pods currently:

Namespace             POD                    Status
onap-aai              aai-service            Up
                      hbase                  Up
                      model-loader-service   Down
onap-appc             appc                   Up
                      appc-dbhost            Up
                      appc-dgbuilder         Up
onap-message-router   dmaap                  Up
                      global-kafka           Up
                      zookeeper              Up
onap-mso              mso                    Up
                      mariadb                Up
onap-policy           brmsgw                 Down
                      drools                 Down
                      mariadb                Up
                      nexus                  Up
                      Pap                    Up
                      Pdp                    Down
                      Pypdp                  Down
onap-portal           portalapps             Down
                      portaldb               Up
                      vnc-portal             Down
onap-robot            robot                  Up
onap-sdc              sdc-be                 Down
                      sdc-cs                 Up
                      sdc-es                 Up
                      sdc-fe                 Down
                      sdc-kb                 Down
onap-sdnc             sdnc                   Up
                      sdnc-dbhost            Up
                      sdnc-dgbuilder         Up
                      sdnc-portal            Up
onap-vid              vid-mariadb            Up
                      vid-server             Down


PODs which are in failed state show below errors :

Error syncing pod, skipping: failed to "InitContainer" for 
"vid-server-readiness" with RunInitContainerError: "init container 
\"vid-server-readiness\" exited with 1"

Any help would be highly appreciated.

PS: I had earlier asked a subset of this question as a follow-up in another 
thread, then realized this discussion would help future developers, so I am 
starting this new thread with an appropriate email subject for easy search & 
retrieval.

BR,
Viswa
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at https://www.amdocs.com/about/email-disclaimer

_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss
