Hi Alexis,

 I reverted OOM from HEAD back to this commit:


git checkout cb02aa241edd97acb6c5ca744de84313f53e8a5a

Author: yuryn <yury.novit...@amdocs.com>
Date:   Thu Dec 21 14:31:21 2017 +0200

    Fix firefox tab crashes in VNC

    Change-Id: Ie295257d98ddf32693309535e15c6ad9529f10fc
    Issue-ID: OOM-531
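
(A quick git log -1 --oneline afterwards confirms the working tree is at
cb02aa2.)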


Everything works: service creation, VNF and VF creates all succeed!
Please note that I am running with DCAE disabled.
Something is broken in the latest OOM when DCAE is disabled.
The failure is 100% reproducible: the service distribution step, run as
operator, hits the policy exception I mailed earlier.
Have a nice weekend.

Regards,
-Karthick




________________________________
From: Ramanarayanan, Karthick
Sent: Friday, January 19, 2018 8:48:23 AM
To: Alexis de Talhouët
Cc: onap-discuss@lists.onap.org
Subject: Re: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on 
latest ONAP/OOM


Hi Alexis,

 I did check the policy pod logs before sending the mail and didn't see
anything suspicious.
 I initially suspected the aai-service DNS entry was not resolving, but you
seem to have fixed that, and it was accessible from the policy pod.
 Nothing suspicious in any log anywhere.
 I did see that the health check on the SDC pods returned all UP except for
the DE component, whose health check was DOWN.
 Not sure if it's in any way related; it could be benign.


curl http://127.0.0.1:30206/sdc1/rest/healthCheck
{
 "sdcVersion": "1.1.0",
 "siteMode": "unknown",
 "componentsInfo": [
   {
     "healthCheckComponent": "BE",
     "healthCheckStatus": "UP",
     "version": "1.1.0",
     "description": "OK"
   },
   {
     "healthCheckComponent": "TITAN",
     "healthCheckStatus": "UP",
     "description": "OK"
   },
   {
     "healthCheckComponent": "DE",
     "healthCheckStatus": "DOWN",
     "description": "U-EB cluster is not available"
   },
   {
     "healthCheckComponent": "CASSANDRA",
     "healthCheckStatus": "UP",
     "description": "OK"
   },
   {
     "healthCheckComponent": "ON_BOARDING",
     "healthCheckStatus": "UP",
     "version": "1.1.0",
     "description": "OK",
     "componentsInfo": [
       {
         "healthCheckComponent": "ZU",
         "healthCheckStatus": "UP",
         "version": "0.2.0",
         "description": "OK"
       },
       {
         "healthCheckComponent": "BE",
         "healthCheckStatus": "UP",
         "version": "1.1.0",
         "description": "OK"
       },
       {
         "healthCheckComponent": "CAS",
         "healthCheckStatus": "UP",
         "version": "2.1.17",
         "description": "OK"
       },
       {
         "healthCheckComponent": "FE",
         "healthCheckStatus": "UP",
         "version": "1.1.0",
         "description": "OK"
       }
     ]
   },
   {
     "healthCheckComponent": "FE",
     "healthCheckStatus": "UP",
     "version": "1.1.0",
     "description": "OK"
   }
 ]
}
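
(To pull out just the components reporting DOWN, a jq one-liner like the
following works, assuming jq is installed on the host; note it only scans
the top-level componentsInfo list:)

curl -s http://127.0.0.1:30206/sdc1/rest/healthCheck | jq '.componentsInfo[] | select(.healthCheckStatus == "DOWN")'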



On some occasions the backend doesn't come up even though the pods are
running.

(Seen on other nodes running ONAP, and it was there even without your
changes; the logs indicated nothing.

But if I restart the SDC pods for Cassandra, Elasticsearch and Kibana before
restarting the backend, the backend starts responding and ends up creating
the user profile entries for the various ONAP user roles, as seen in the
logs. This is unrelated to the service distribution error, though, since the
backend is up.)
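
(For reference, the restart is roughly the following; the namespace and pod
names are placeholders, so substitute whatever kubectl get pods shows on
your deployment:

kubectl -n <sdc-namespace> delete pod <sdc-cassandra-pod> <sdc-elasticsearch-pod> <sdc-kibana-pod>
# once those are Running again, bounce the backend
kubectl -n <sdc-namespace> delete pod <sdc-backend-pod>

Deleting the pods lets their deployments recreate them in order.)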



Regards,

-Karthick


________________________________
From: Alexis de Talhouët <adetalhoue...@gmail.com>
Sent: Friday, January 19, 2018 4:54 AM
To: Ramanarayanan, Karthick
Cc: onap-discuss@lists.onap.org
Subject: [**EXTERNAL**] Re: [onap-discuss] Service distribution error on latest 
ONAP/OOM

Hi,

Could you look at the Policy logs for errors? For that you need to go into
the pods themselves, under /var/log/onap.
You could do the same for the SDC backend container.
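
(Something along these lines, with the namespace and pod name taken from
kubectl get pods --all-namespaces:

kubectl -n <policy-namespace> exec -it <policy-pod> -- ls /var/log/onap
kubectl -n <policy-namespace> exec -it <policy-pod> -- grep -ri error /var/log/onap

The same pattern works against the SDC backend pod.)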
The one thing that could have affected Policy is that we removed the
persisted mariadb data, because it was bogus
(https://gerrit.onap.org/r/#/c/27521/). But I doubt it explains your issue.
Besides that, nothing with a potentially disruptive effect has happened to
Policy.
The DCAE work was well tested before it got merged. I'll re-test sometime
today or early next week to make sure nothing has slipped through the cracks.

Thanks,
Alexis

On Jan 18, 2018, at 11:44 PM, Ramanarayanan, Karthick
<krama...@ciena.com> wrote:


Hi,
 While trying to distribute a demo firewall service instance on a Kubernetes
host running ONAP, I am seeing a new policy exception error on the latest
OOM on the amsterdam branch.
(dcae deploy is false and disableDcae is true)

Error code: POL5000
Status code: 500
Internal Server Error. Please try again later.


All pods are up, and the health checks pass on all of them.
The k8s pod logs don't seem to reveal anything, and the error happens
consistently whenever I try to distribute the service as an operator.
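
(For the record, I was scanning them with plain kubectl logs, along the
lines of:

kubectl -n <namespace> logs <policy-or-sdc-pod> --tail=200

with the real namespace and pod names taken from
kubectl get pods --all-namespaces.)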

It was working fine last week.
Even yesterday I didn't get this error; I did hit a different one, a
createVnfInfra notify exception in the SO VNF create workflow step, but that
was a separate failure.

After the DCAE config changes got merged, this service distribution error
seems to have popped up. (DCAE is disabled in my setup.)

What am I missing?

Thanks,
-Karthick
_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss
