Re: [onap-discuss] [ONAP] [OOM] shared cassandra DB for OOM

2019-04-08 Thread James MacNider
One small clarification.  The changes to move some of the ONAP components
over to the shared Cassandra and MariaDB clusters have not yet been merged
from an OOM perspective.  Before this can be done, the CI/CD infrastructure
needs a small update to enable the shared Cassandra and MariaDB components.

Sylvain, are you in a position to help make that happen?
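For anyone who wants to try this ahead of the CI/CD change, the switch is essentially a deployment override that enables the shared database charts. Below is a minimal, hedged sketch of such an override; the chart keys (cassandra, mariadb-galera) follow the current OOM layout, but the exact flag names in the merged change may differ.

# shared-db-override.yaml -- hedged sketch; key names are assumptions
cassandra:
  enabled: true          # shared Cassandra cluster
  replicaCount: 3
mariadb-galera:
  enabled: true          # shared MariaDB Galera cluster
  replicaCount: 3

It would then be passed to the usual deployment, e.g. helm deploy dev local/onap -f shared-db-override.yaml --namespace onap.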



From: onap-discuss@lists.onap.org  On Behalf Of 
Mike Elliott
Sent: Saturday, April 6, 2019 9:21 AM
To: morgan.richo...@orange.com; Michael O'Brien ; 
gary.i...@huawei.com; bf1...@att.com; yang...@huawei.com; 
plata...@research.att.com; Mahendra Raghuwanshi 
; Amit Sinha 
Cc: ALLAMAND Sebastien DTSI/DSI ; DESBUREAUX 
Sylvain TGI/OLN ; onap-discuss@lists.onap.org; 
LUCE Jean Armel DTSI/DSI ; BLAISONNEAU David TGI/OLN 
; tephan.acquate...@orange.com; DEBEAU Eric 
TGI/OLN 
Subject: Re: [onap-discuss] [ONAP] [OOM] shared cassandra DB for OOM

Hi Morgan,

As of recently, a shared Cassandra cluster has been created in Dublin and A&AI
has been migrated onto it. Migrating Portal to also use the shared instance
(before code freeze) was being investigated, but we need to find out the current
status of that effort. MUSIC was not on our radar for this release. As for why
Portal is using Cassandra, that is a question for the Portal team.

OOM is looking to use k8s Operators for a number of cases, and this will be a
priority for El Alto. If you or your team have expertise in using Operators with
Cassandra, we would very much like to speak with you. Since M4 will be the focus
for next week, can we schedule something for the week after? Perhaps you could
provide an overview and/or demonstration of what you’ve been working on
regarding operators and HA?

Thanks,
Mike.

From: "morgan.richo...@orange.com" 
mailto:morgan.richo...@orange.com>>
Date: Tuesday, April 2, 2019 at 1:57 AM
To: Michael O'Brien mailto:frank.obr...@amdocs.com>>, 
"gary.i...@huawei.com" 
mailto:gary.i...@huawei.com>>, Mike Elliott 
mailto:mike.elli...@amdocs.com>>, 
"bf1...@att.com" 
mailto:bf1...@att.com>>, 
"yang...@huawei.com" 
mailto:yang...@huawei.com>>, 
"plata...@research.att.com" 
mailto:plata...@research.att.com>>
Cc: ALLAMAND Sebastien DTSI/DSI 
mailto:sebastien.allam...@orange.com>>, 
DESBUREAUX Sylvain TGI/OLN 
mailto:sylvain.desbure...@orange.com>>, 
"onap-discuss@lists.onap.org" 
mailto:onap-discuss@lists.onap.org>>, LUCE Jean 
Armel DTSI/DSI mailto:jeanarmel.l...@orange.com>>, 
BLAISONNEAU David TGI/OLN 
mailto:david.blaisonn...@orange.com>>, 
"tephan.acquate...@orange.com" 
mailto:tephan.acquate...@orange.com>>, DEBEAU 
Eric TGI/OLN mailto:eric.deb...@orange.com>>
Subject: [ONAP] [OOM] shared cassandra DB for OOM

Hi,

as an action point from last integration meeting 
(http://ircbot.wl.linuxfoundation.org/meetings/onap-int/2019/onap-int.2019-03-27-13.00.html),
 I took the point to initiate a thread on cassandra usage in ONAP.

Please note that I am not an expert, I added some experts in copy of the mail.

My understanding is that today we have several Cassandra databases (aai, music
and portal): 3 nodes each for music and aai, and a single node for the portal.

onap-aai-aai-cassandra-0                        1/1   Running   0   24d
onap-aai-aai-cassandra-1                        1/1   Running   0   44d
onap-aai-aai-cassandra-2                        1/1   Running   0   18d
onap-oof-music-cassandra-0                      1/1   Running   0   44d
onap-oof-music-cassandra-1                      1/1   Running   0   44d
onap-oof-music-cassandra-2                      1/1   Running   0   44d
onap-portal-portal-cassandra-5c66f77c5b-xf2c7   1/1   Running   0   20d

I am a bit puzzled by the choice of Cassandra as a single-node database for a
simple portal.

As far as I understood, there is an ongoing task in the ONAP installer project
to put in place a single Cassandra database that could be used by the different
projects, so that we eventually converge on one Cassandra database.

For the moment, my understanding is that the installation of the Cassandra
cluster is done without really considering the specificities of Cassandra.
Typically, the 3 nodes for aai are deployed according to k8s scheduler rules,
independently of any hardware constraints or placement requirements.
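As an illustration of that placement concern, the Cassandra statefulset could carry a pod anti-affinity rule so that no two replicas land on the same worker node. The snippet below is only a generic, hedged sketch; the pod label and how it would plug into the OOM chart are assumptions, not the current configuration.

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: aai-cassandra          # assumed pod label
        topologyKey: kubernetes.io/hostname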
My colleagues are currently finalizing the development of a Cassandra operator
for k8s that could be useful to ensure that the HA nature of Cassandra is
preserved even in a virtual environment.
It would make sense to consider such an operator for t

Re: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on healthcheck with 404

2018-12-12 Thread James MacNider
I didn’t have occasion yet to look deeper into this, so it wasn’t me.  It’s an
interesting data point regardless.  I wouldn’t have expected a restart to have
an effect on connectivity.



From: Yang Xu (Yang, Fixed Network) 
Sent: Wednesday, December 12, 2018 5:14 PM
To: onap-discuss@lists.onap.org; Yang Xu (Yang, Fixed Network) 
; James MacNider ; TIMONEY, DAN 

Subject: RE: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on 
healthcheck with 404

Hi James and Dan,

I restarted the SDNC pod to enable the karaf log, and after the restart the SDNC
healthcheck passed. I don’t know whether you or someone else happened to make a
change, or whether the restart helped.  Similarly, after I restarted the APPC pod,
its healthcheck passed as well. This is the Casablanca released version; we will
make a note of it to see if it happens again.

Thanks for your help,

-Yang



From: onap-discuss@lists.onap.org [mailto:onap-discuss@lists.onap.org] On Behalf Of Yang Xu
Sent: Wednesday, December 12, 2018 2:43 PM
To: onap-discuss@lists.onap.org; james.macni...@amdocs.com; TIMONEY, DAN <dt5...@att.com>
Subject: Re: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on 
healthcheck with 404

Hi James,

Thanks for the reply. I don’t know much about the SDNC cluster settings. On SB05,
from the Robot container I can ping sdnc-cluster.onap, and I can also telnet to
sdnc.onap on port 8282. I am guessing that port 8181 is only used internally by the
SDNC domain to check the heartbeat of each replica, so port 8181 doesn’t need to be
exposed as a service port. Please let me know if you need more info about the SB05
testbed.

Regards,
-Yang

From: onap-discuss@lists.onap.org [mailto:onap-discuss@lists.onap.org] On Behalf Of James MacNider
Sent: Wednesday, December 12, 2018 1:30 PM
To: Yang Xu (Yang, Fixed Network) <yang@huawei.com>; TIMONEY, DAN <dt5...@att.com>; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on 
healthcheck with 404

It’s possible there’s a misconfiguration here, but I’ve not had the chance to 
investigate yet.

One thing to keep in mind is that when we’re discussing “domains” in the 
context of a Kubernetes cluster, what you’re actually referring to is the name 
of a Kubernetes service that is used to load-balance across one or more pods.  
These names are tied to specific ports that the pods expose.  In the case of 
SDN-C, these are the service names that are being exposed, along with their 
associated ports:

$ kubectl get services -n onap
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                       AGE
…
controller-blueprints      ClusterIP   10.43.14.62     <none>        8080/TCP                                                      16m
controller-blueprints-db   ClusterIP   None            <none>        3306/TCP                                                      16m
…
sdnc                       NodePort    10.43.161.171   <none>        8282:30202/TCP,8202:30208/TCP,8280:30246/TCP,8443:30267/TCP   16m
sdnc-ansible-server        ClusterIP   10.43.83.83     <none>        8000/TCP                                                      16m
sdnc-cluster               ClusterIP   None            <none>        2550/TCP                                                      16m
sdnc-dbhost                ClusterIP   None            <none>        3306/TCP                                                      16m
sdnc-dbhost-read           ClusterIP   10.43.51.29     <none>        3306/TCP                                                      16m
sdnc-dgbuilder             NodePort    10.43.50.6      <none>        3000:30203/TCP                                                16m
sdnc-dmaap-listener        ClusterIP   None            <none>        <none>                                                        16m
sdnc-portal                NodePort    10.43.185.129   <none>        8843:30201/TCP                                                16m
sdnc-sdnctldb01            ClusterIP   None            <none>        3306/TCP                                                      16m
sdnc-sdnctldb02            ClusterIP   None            <none>        3306/TCP                                                      16m
sdnc-ueb-listener          ClusterIP   None            <none>        <none>                                                        16m

So if the cluster health check is using port 8181, this could be a problem, as I
don’t see any services exposing that port…
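A quick, hedged way to confirm would be to check whether any service publishes 8181 and, failing that, to probe the port on a pod directly; the pod name below is taken from later in this thread and the commands are only a sketch (curl availability inside the container is an assumption):

$ kubectl get svc -n onap -o wide | grep 8181
$ kubectl -n onap exec dev-sdnc-sdnc-0 -- curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8181/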




From: Yang Xu (Yang, Fixed Network) <yang@huawei.com>
Sent: Wednesday, December 12, 2018 12:43 PM
To: TIMONEY, DAN <dt5...@att.com>; onap-discuss@lists.

Re: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on healthcheck with 404

2018-12-12 Thread James MacNider
It’s possible there’s a misconfiguration here, but I’ve not had the chance to 
investigate yet.

One thing to keep in mind is that when we’re discussing “domains” in the 
context of a Kubernetes cluster, what you’re actually referring to is the name 
of a Kubernetes service that is used to load-balance across one or more pods.  
These names are tied to specific ports that the pods expose.  In the case of 
SDN-C, these are the service names that are being exposed, along with their 
associated ports:

$ kubectl get services -n onap
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                       AGE
…
controller-blueprints      ClusterIP   10.43.14.62     <none>        8080/TCP                                                      16m
controller-blueprints-db   ClusterIP   None            <none>        3306/TCP                                                      16m
…
sdnc                       NodePort    10.43.161.171   <none>        8282:30202/TCP,8202:30208/TCP,8280:30246/TCP,8443:30267/TCP   16m
sdnc-ansible-server        ClusterIP   10.43.83.83     <none>        8000/TCP                                                      16m
sdnc-cluster               ClusterIP   None            <none>        2550/TCP                                                      16m
sdnc-dbhost                ClusterIP   None            <none>        3306/TCP                                                      16m
sdnc-dbhost-read           ClusterIP   10.43.51.29     <none>        3306/TCP                                                      16m
sdnc-dgbuilder             NodePort    10.43.50.6      <none>        3000:30203/TCP                                                16m
sdnc-dmaap-listener        ClusterIP   None            <none>        <none>                                                        16m
sdnc-portal                NodePort    10.43.185.129   <none>        8843:30201/TCP                                                16m
sdnc-sdnctldb01            ClusterIP   None            <none>        3306/TCP                                                      16m
sdnc-sdnctldb02            ClusterIP   None            <none>        3306/TCP                                                      16m
sdnc-ueb-listener          ClusterIP   None            <none>        <none>                                                        16m

So if the cluster health check is using port 8181, this could be a problem, as I
don’t see any services exposing that port…




From: Yang Xu (Yang, Fixed Network) 
Sent: Wednesday, December 12, 2018 12:43 PM
To: TIMONEY, DAN ; onap-discuss@lists.onap.org; James MacNider 

Subject: RE: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on 
healthcheck with 404

Dan,

I see your point. From the SDNC OOM script, I see that sdnc-cluster.onap is used for
the SDNC cluster health check and it uses port 8181. The issue I reported is with the
SDNC component health check, where the port used is 8282. Maybe both
sdnc-cluster.onap and sdnc.onap are valid.
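For reference, the component-level check can be reproduced by hand against port 8282. The endpoint below is the usual ODL/SLI-API healthcheck call used by the Robot suite, but treat the exact path, payload and credentials as assumptions and substitute the admin password configured in your deployment:

$ curl -u admin:<odl-admin-password> -X POST -H "Content-Type: application/json" \
    -d '{"input": {}}' http://sdnc.onap:8282/restconf/operations/SLI-API:healthcheck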

Thanks,
-Yang

From: TIMONEY, DAN [mailto:dt5...@att.com]
Sent: Wednesday, December 12, 2018 9:59 AM
To: Yang Xu (Yang, Fixed Network) <yang@huawei.com>; onap-discuss@lists.onap.org; MACNIDER, JAMES <james.macni...@amdocs.com>
Subject: Re: [onap-discuss] [sdnc][appc][integration] SDNC and APPC fail on 
healthcheck with 404

Yang,

The problem is that the scripts that set up clustering rely on the domain
sdnc-cluster.onap being set up.  I just rebuilt my WindRiver test ONAP
environment last week, following the instructions in the OOM User Guide for
installing with Rancher, and using the casablanca branch of OOM.  In my
environment, the pod names and domain name work as they should.

If we're saying the domain name should be sdnc.onap instead of
sdnc-cluster.onap, I'd be okay with making that change (which is really in
OOM).  However, the sdnc.onap domain also isn't working between pods (i.e. if
I exec into dev-sdnc-sdnc-0, I can't get to dev-sdnc-sdnc-1.sdnc.onap either).
Also, I notice in my local environment that I can ping
dev-sdnc-1.sdnc-cluster.onap from dev-sdnc-0, but not dev-sdnc-1.sdnc.onap ...
so I think the domain 'sdnc-cluster.onap' is correct, but for some reason not
working.
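To narrow down whether this is a DNS problem with the headless sdnc-cluster service, it may help to resolve the per-pod names from inside one of the replicas. The commands below are a hedged sketch using the pod and service names quoted above; if nslookup is not present in the image, getent hosts is a common fallback:

$ kubectl -n onap exec -it dev-sdnc-sdnc-0 -- nslookup sdnc-cluster.onap
$ kubectl -n onap exec -it dev-sdnc-sdnc-0 -- nslookup dev-sdnc-sdnc-1.sdnc-cluster.onap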


James – can you perhaps take a look at SB05?  I’m not really familiar enough
with the details of how that sdnc-cluster.onap domain is set up within K8S to
be of much more help here, other than to point out that it’s not working.

Dan

Dan Timoney
Principal Technical Staff Member
AT&T
Email : dtimo...@att.com
Office : +1 (732) 420-3226
Mobile : +1 (201) 960-1211
200 S Laurel Ave, Rm E2-2A03
Middletown, NJ 08873

From: "Yang Xu (Yang, Fixed Network)" 
mailto:yang@huawei.com>>
Date: Tuesday, December 11,

Re: [SUSPICIOUS MESSAGE] Re: [onap-discuss] [OOM] Fatty is not pretty

2018-11-29 Thread James MacNider
We also need to keep in mind that the resources needed to successfully deploy a
component do not necessarily reflect what it needs to operate with real
workloads.  In most cases the project teams themselves put forward the CPU
and RAM request sizing based on their experience.


From: onap-discuss@lists.onap.org  On Behalf Of 
Pavel Paroulek
Sent: Thursday, November 29, 2018 9:25 AM
To: onap-discuss 
Subject: [SUSPICIOUS MESSAGE] Re: [onap-discuss] [OOM] Fatty is not pretty

ROFL :D

Anyway, maybe some ONAP projects deploy services that are not needed during
normal runtime (like data viewers, services that have not yet been integrated,
etc.).

What if all OOM projects supported an “essential deployment” of only the
services needed for normal ONAP runtime, and an “all” deployment where all the
services, jobs, unfinished components etc. would be deployed?
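Something in that spirit is already possible today with the per-component enabled flags in the top-level override. The sketch below is purely illustrative and hedged; which components count as “essential” is an assumption, not a proposal from the projects themselves.

# essential-override.yaml -- hedged sketch of an "essential" profile
aai:
  enabled: true
sdnc:
  enabled: true
so:
  enabled: true
portal:
  enabled: false        # example of a component left out of the essential profile
pomba:
  enabled: false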

Maybe this could be a topic for the VF2F?

Br,
Pavel
 On Thu, 29 Nov 2018 14:33:04 +0100 Sylvain Desbureaux <sylvain.desbure...@orange.com> wrote 

Hello,
When I see that my 12 times 16G of RAM deployment (so 192G of RAM) is not 
sufficient for a “small” deployment of ONAP, I don’t even want to see how much 
I need for a “large” deployment (a complete DC?):

Containers:
  portal-cassandra:
Image:  nexus3.onap.org:10001/onap/music/cassandra_music:3.0.0
Ports:  9160/TCP, 7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
Limits:
  cpu: 2
  memory:  8Gi
Requests:
  cpu:  1
  memory:   4Gi
Liveness:   exec [/bin/bash -c nodetool status | grep $POD_IP | awk 
'$1!="UN" { exit 1; }'] delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness:  exec [/bin/bash -c nodetool status | grep $POD_IP | awk 
'$1!="UN" { exit 1; }'] delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
  CASSUSER:  root
  CASSPASS:  Aa123456
  JVM_OPTS:  -Xmx2536m -Xms2536m
  POD_IP: (v1:status.podIP)
Mounts:
  /docker-entrypoint-initdb.d/zzz_portal.cql from 
cassandra-docker-entrypoint-initdb (rw)
  /docker-entrypoint-initdb.d/zzz_portalsdk.cql from 
cassandra-docker-entrypoint-initdb (rw)
  /etc/localtime from localtime (ro)
  /var/lib/cassandra/data from onap-portal-portal-cassandra-data (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-zcsfj 
(ro)

Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  44s (x8 over 1m)  default-scheduler  No nodes are available that match all of the predicates: Insufficient cpu (1), Insufficient memory (12).


PS: with “unlimited” flavor, the _real_ RAM usage is less than 90G of RAM…
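When pods are stuck in FailedScheduling like this, it can help to compare what the scheduler considers reserved (the sum of requests) with what is actually used, which explains the gap with the “unlimited” flavor. A hedged sketch (kubectl top requires a metrics add-on such as metrics-server or heapster):

$ kubectl describe nodes | grep -A 8 "Allocated resources"
$ kubectl top nodes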

--

Sylvain Desbureaux
Senior Automation Architect
ORANGE/IMT/OLN/CNC/NCA/SINA

Fixe : +33 2 96 07 13 80 

Mobile : +33 6 71 17 25 57 

sylvain.desbure...@orange.com



Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time

2018-11-23 Thread James MacNider
Kibana uses a lot of resources when it is initializing and creating indices.  
When idle, the utilization does drop, but our experience has shown that using 
Kibana when there’s actual data in ElasticSearch can be resource intensive.

From: sylvain.desbure...@orange.com 
Sent: Friday, November 23, 2018 10:53 AM
To: James MacNider ; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled 
all the time

James,
I moved the limits to 1.6G and it works now:
onap   onap-pomba-pomba-kibana-dbf869f9b-9cgs2   1 (12%)   2 (25%)   600Mi (3%)   1600Mi (10%)

The weird thing is that we don’t use that much, as you can see…
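For anyone hitting the same OOMKilled loop, the workaround until LOG-855 lands is to raise the pomba-kibana limits in the deployment override. The snippet below is a hedged sketch; the exact key path under the pomba chart is an assumption, so check the chart's values.yaml for the real structure.

pomba:
  pomba-kibana:
    resources:
      limits:
        cpu: 2
        memory: 1600Mi
      requests:
        cpu: 1
        memory: 600Mi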

---
Sylvain Desbureaux

From: James MacNider <james.macni...@amdocs.com>
Date: Friday, November 23, 2018 at 16:23
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>, DESBUREAUX Sylvain TGI/OLN <sylvain.desbure...@orange.com>
Subject: RE: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time

Hi Sylvain,

If it’s not too much trouble could you try with 1.6Gi?  Should that fail I’d 
step up to 1.8Gi.

Thanks,
James

From: onap-discuss@lists.onap.org <onap-discuss@lists.onap.org> On Behalf Of Sylvain Desbureaux
Sent: Friday, November 23, 2018 8:45 AM
To: James MacNider <james.macni...@amdocs.com>; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled 
all the time

Hello James,

That’s why I said the other Kibana instances were working: one has higher
resource limits (logging) and the other is not using resource limits at all (see
https://jira.onap.org/browse/CLAMP-250).

I can test with higher values if you want.

Regards,

---
Sylvain Desbureaux

From: James MacNider <james.macni...@amdocs.com>
Date: Friday, November 23, 2018 at 14:18
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>, DESBUREAUX Sylvain TGI/OLN <sylvain.desbure...@orange.com>
Subject: RE: [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time

Hi Sylvain,

There have been occasional reports of this problem with this pod but it’s not 
been readily repeatable.  Believe it or not, it is probably due to the resource 
limits being too conservative.  The other ONAP kibana instances you mention 
both have higher resource limits (2Gi for clamp, 4Gi for logging).  I’ve raised 
https://jira.onap.org/browse/LOG-855 to track this issue.

Thanks,
James

From: onap-discuss@lists.onap.org <onap-discuss@lists.onap.org> On Behalf Of Sylvain Desbureaux
Sent: Friday, November 23, 2018 5:27 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all 
the time

Hello,

I’m doing a daily deployment of a full ONAP (using
https://gitlab.com/Orange-OpenSource/lfn/onap/onap_oom_automatic_installation).
Most of the deployment is working, but I keep getting some errors.

In particular, I’ve got pomba-kibana pods which get “OOMKilled” at every
deployment over the last week.
I’m deploying a “small” flavor with requests at 1 CPU (which is HUGE by the way)
and 600M RAM, and limits at 2 CPU and 1.2G RAM.

The two other Kibana instances (from log and clamp) are working fine.

Am I the only one having this issue?

--
Sylvain Desbureaux
Senior Automation Architect
ORANGE/IMT/OLN/CNC/NCA/SINA

Fixe : +33 2 96 07 13 80
Mobile : +33 6 71 17 25 57
sylvain.desbure...@orange.com



Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time

2018-11-23 Thread James MacNider
Hi Sylvain,

If it’s not too much trouble could you try with 1.6Gi?  Should that fail I’d 
step up to 1.8Gi.

Thanks,
James

From: onap-discuss@lists.onap.org  On Behalf Of 
Sylvain Desbureaux
Sent: Friday, November 23, 2018 8:45 AM
To: James MacNider ; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled 
all the time

Hello James,

That’s why I said the other Kibana instances were working: one has higher
resource limits (logging) and the other is not using resource limits at all (see
https://jira.onap.org/browse/CLAMP-250).

I can test with higher values if you want.

Regards,

---
Sylvain Desbureaux

From: James MacNider <james.macni...@amdocs.com>
Date: Friday, November 23, 2018 at 14:18
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>, DESBUREAUX Sylvain TGI/OLN <sylvain.desbure...@orange.com>
Subject: RE: [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time

Hi Sylvain,

There have been occasional reports of this problem with this pod but it’s not 
been readily repeatable.  Believe it or not, it is probably due to the resource 
limits being too conservative.  The other ONAP kibana instances you mention 
both have higher resource limits (2Gi for clamp, 4Gi for logging).  I’ve raised 
https://jira.onap.org/browse/LOG-855 to track this issue.

Thanks,
James

From: onap-discuss@lists.onap.org <onap-discuss@lists.onap.org> On Behalf Of Sylvain Desbureaux
Sent: Friday, November 23, 2018 8:45 AM
To: James MacNider <james.macni...@amdocs.com>; onap-discuss@lists.onap.org
Subject: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all 
the time

Hello,

I’m doing a daily deployment of a full ONAP (using
https://gitlab.com/Orange-OpenSource/lfn/onap/onap_oom_automatic_installation).
Most of the deployment is working, but I keep getting some errors.

In particular, I’ve got pomba-kibana pods which get “OOMKilled” at every
deployment over the last week.
I’m deploying a “small” flavor with requests at 1 CPU (which is HUGE by the way)
and 600M RAM, and limits at 2 CPU and 1.2G RAM.

The two other Kibana instances (from log and clamp) are working fine.

Am I the only one having this issue?

--
Sylvain Desbureaux
Senior Automation Architect
ORANGE/IMT/OLN/CNC/NCA/SINA

Fixe : +33 2 96 07 13 80
Mobile : +33 6 71 17 25 57
sylvain.desbure...@orange.com



Re: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all the time

2018-11-23 Thread James MacNider
Hi Sylvain,

There have been occasional reports of this problem with this pod but it’s not 
been readily repeatable.  Believe it or not, it is probably due to the resource 
limits being too conservative.  The other ONAP kibana instances you mention 
both have higher resource limits (2Gi for clamp, 4Gi for logging).  I’ve raised 
https://jira.onap.org/browse/LOG-855 to track this issue.

Thanks,
James

From: onap-discuss@lists.onap.org  On Behalf Of 
Sylvain Desbureaux
Sent: Friday, November 23, 2018 5:27 AM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] [OOM][Casablanca][POMBA] pomba kibana is OOMKilled all 
the time

Hello,

I’m doing a daily deployment of a full ONAP (using
https://gitlab.com/Orange-OpenSource/lfn/onap/onap_oom_automatic_installation).
Most of the deployment is working, but I keep getting some errors.

In particular, I’ve got pomba-kibana pods which get “OOMKilled” at every
deployment over the last week.
I’m deploying a “small” flavor with requests at 1 CPU (which is HUGE by the way)
and 600M RAM, and limits at 2 CPU and 1.2G RAM.

The two other Kibana instances (from log and clamp) are working fine.

Am I the only one having this issue?

--

Sylvain Desbureaux
Senior Automation Architect
ORANGE/IMT/OLN/CNC/NCA/SINA

Fixe : +33 2 96 07 13 80 

Mobile : +33 6 71 17 25 57 

sylvain.desbure...@orange.com





Re: [onap-discuss] Images Missing in Release Manifest

2018-11-21 Thread James MacNider
Spike is an AAI service.  It looks quite overdue to be up-revved.


From: onap-discuss@lists.onap.org  On Behalf Of 
Gary Wu
Sent: Tuesday, November 20, 2018 6:11 PM
To: onap-discuss@lists.onap.org Group ; 
onap-rele...@lists.onap.org
Cc: Gildas Lanilis 
Subject: Re: [onap-discuss] Images Missing in Release Manifest

Hi all,

As of now the following images are still NOT accounted for, i.e. neither 
released nor declared to be unused for Casablanca:

onap/dcae-tools,1.3-STAGING-latest
onap/network-discovery,latest
onap/service-decomposition,latest
onap/spike,1.0-STAGING-latest

Would the owners please respond with your dispositions on these images above?

Thanks,
Gary

_
From: Gary Wu
Sent: Wednesday, November 14, 2018 2:59 PM
To: onap-discuss@lists.onap.org Group <onap-discuss@lists.onap.org>; onap-rele...@lists.onap.org
Subject: Images Missing in Release Manifest


Hi all,

The following images are used in the OOM helm charts but do not have a 
corresponding entry in the release manifest.  Can you please add the 
appropriate release images to the release manifest?

onap/aaf/aaf_cass,2.1.7
onap/ccsdk-controllerblueprints,latest
onap/dcae-be,1.3-STAGING-latest
onap/dcae-dt,1.2-STAGING-latest
onap/dcae-fe,1.3-STAGING-latest
onap/dcae-tools,1.3-STAGING-latest
onap/dcae-tosca-app,1.3-STAGING-latest
onap/fproxy,2.1-STAGING-latest
onap/music/cassandra_music,3.0.0
onap/music/prom,1.0.5-latest
onap/network-discovery,latest
onap/org.onap.dcaegen2.deployments.pnda-bootstrap-container,5.0.0
onap/org.onap.dcaegen2.deployments.pnda-mirror-container,5.0.0
onap/rproxy,2.1-STAGING-latest
onap/service-decomposition,latest
onap/spike,1.0-STAGING-latest
onap/tproxy-config,2.1-STAGING-latest

Thanks,
Gary







Re: [onap-discuss] Images Missing in Release Manifest

2018-11-21 Thread James MacNider
Hi Gary,

These images are part of the POMBA project.  They should both be set to version 
1.4.2.  I'll submit a review to address this shortly in the manifest files.

onap/network-discovery,latest
onap/service-decomposition,latest


Thanks,
James

From: onap-discuss@lists.onap.org  On Behalf Of 
Gary Wu
Sent: Tuesday, November 20, 2018 6:11 PM
To: onap-discuss@lists.onap.org Group ; 
onap-rele...@lists.onap.org
Cc: Gildas Lanilis 
Subject: Re: [onap-discuss] Images Missing in Release Manifest

Hi all,

As of now the following images are still NOT accounted for, i.e. neither 
released nor declared to be unused for Casablanca:

onap/dcae-tools,1.3-STAGING-latest
onap/network-discovery,latest
onap/service-decomposition,latest
onap/spike,1.0-STAGING-latest

Would the owners please respond with your dispositions on these images above?

Thanks,
Gary

_
From: Gary Wu
Sent: Wednesday, November 14, 2018 2:59 PM
To: onap-discuss@lists.onap.org Group <onap-discuss@lists.onap.org>; onap-rele...@lists.onap.org
Subject: Images Missing in Release Manifest


Hi all,

The following images are used in the OOM helm charts but do not have a 
corresponding entry in the release manifest.  Can you please add the 
appropriate release images to the release manifest?

onap/aaf/aaf_cass,2.1.7
onap/ccsdk-controllerblueprints,latest
onap/dcae-be,1.3-STAGING-latest
onap/dcae-dt,1.2-STAGING-latest
onap/dcae-fe,1.3-STAGING-latest
onap/dcae-tools,1.3-STAGING-latest
onap/dcae-tosca-app,1.3-STAGING-latest
onap/fproxy,2.1-STAGING-latest
onap/music/cassandra_music,3.0.0
onap/music/prom,1.0.5-latest
onap/network-discovery,latest
onap/org.onap.dcaegen2.deployments.pnda-bootstrap-container,5.0.0
onap/org.onap.dcaegen2.deployments.pnda-mirror-container,5.0.0
onap/rproxy,2.1-STAGING-latest
onap/service-decomposition,latest
onap/spike,1.0-STAGING-latest
onap/tproxy-config,2.1-STAGING-latest

Thanks,
Gary







Re: [E] Re: [onap-discuss] [OOM] SDNC is not installing when installing a subset of ONAP in Rancher/Kubernetes

2018-10-25 Thread James MacNider
A fix for these deployment issues was merged earlier today.  Pulling down (and
installing) new versions of the deploy plugin is recommended.


From: onap-discuss@lists.onap.org  On Behalf Of 
George Clapp
Sent: Tuesday, October 23, 2018 2:05 PM
To: onap-discuss@lists.onap.org
Subject: Re: [E] Re: [onap-discuss] [OOM] SDNC is not installing when 
installing a subset of ONAP in Rancher/Kubernetes

I learned that sdnc will be installed if the sniro component is also installed.
I tried again today after updating the master branch and reinstalling the
helm deploy plugin, but sdnc wasn't installed if sniro was disabled.  It was
installed, though, when sniro was enabled.




Re: [E] Re: [onap-discuss] [OOM] SDNC is not installing when installing a subset of ONAP in Rancher/Kubernetes

2018-10-23 Thread James MacNider
I've seen this same issue when using the deploy plugin in my environment.  I'm 
going through the process of debugging it, so I'll report back if I find 
anything helpful.


From: onap-discuss@lists.onap.org  On Behalf Of 
Brian
Sent: Tuesday, October 23, 2018 6:10 AM
To: Gopigiri, Sirisha ; 
onap-discuss@lists.onap.org
Cc: georgehcl...@gmail.com
Subject: Re: [E] Re: [onap-discuss] [OOM] SDNC is not installing when 
installing a subset of ONAP in Rancher/Kubernetes

I don't edit onap/values.yaml, but rather the override file passed in as a -f
option to helm deploy.
You can look in your helm plugins directory for the log file and see. The deploy
plugin did just get updated, so you can try re-cloning OOM and copying the plugin
into the plugins directory.
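A hedged sketch of what “re-cloning and copying” the plugin typically looks like; the paths follow the usual OOM repo layout and default helm home, so adjust them to your environment:

$ git clone https://gerrit.onap.org/r/oom
$ cp -R oom/kubernetes/helm/plugins/ ~/.helm/
$ helm plugin list        # should now list deploy and undeploy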

Brian



Sent via the Samsung Galaxy S8, an AT&T 4G LTE smartphone


 Original message 
From: "Gopigiri, Sirisha" <sirisha.gopig...@verizon.com>
Date: 10/23/18 2:11 AM (GMT-05:00)
To: onap-discuss@lists.onap.org, "FREEMAN, BRIAN D" <bf1...@att.com>
Cc: georgehcl...@gmail.com
Subject: Re: [E] Re: [onap-discuss] [OOM] SDNC is not installing when installing a subset of ONAP in Rancher/Kubernetes

Hi

I am trying to install SDNC from the ONAP master branch and, as George mentioned,
I am facing the same problem: SDNC pods are not getting launched.

When I enable only SDNC in onap/values.yaml and run "helm deploy demo
local/onap --namespace onap", the SDNC components do not come up, but when I
run "helm install demo local/onap -n onap" I am able to see the pods. Am I
missing something?

My values.yaml has the below configuration for sdnc
sdnc:
  enabled: true
  replicaCount: 1

  mysql:
replicaCount: 1

I have edited the values.yaml as suggested(please refer below) but still no 
luck.

sdnc:
  replicaCount: 3
  config:
enableClustering: true

Thank you in advance!

Best Regards
Sirisha Gopigiri


On Tue, Oct 2, 2018 at 2:39 AM Brian <bf1...@att.com> wrote:
For SDNC you need something like this (from integration-override.yaml).
Remember the default is enabled: true, so I think the issue is that it's not
liking the mysql section vs. the enableClustering: true/false:

Brian

sdnc:
  replicaCount: 3
  config:
enableClustering: true

or

sdnc:
  replicaCount: 1
  config:
enableClustering: false

From: onap-discuss@lists.onap.org <onap-discuss@lists.onap.org> On Behalf Of George Clapp
Sent: Monday, October 01, 2018 4:49 PM
To: onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] [OOM] SDNC is not installing when installing a 
subset of ONAP in Rancher/Kubernetes

Thanks much!  Here it is.

# Copyright (c) 2017 Amdocs, Bell Canada
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302
  nodePortPrefixExt: 304

  # ONAP Repository
  # Uncomment the following to enable the use of a single docker
  # repository but ONLY if your repository mirrors all ONAP
  # docker images. This includes all images from dockerhub and
  # any other repository that hosts images for ONAP components.
  #repository: nexus3.onap.org:10001
  repositoryCred:
user: docker
password: docker

  # readiness check - temporary repo until images migrated to nexus3
  readinessRepository: oomk8s
  # logging agent - temporary repo until images migrated to nexus3
  loggingRepository: docker.elastic.co

Re: [onap-discuss] [OOM] helm install local/onap fails

2018-09-04 Thread James MacNider
Hi George,

After a bit of digging, it looks like a recent merge in the Clamp helm chart
(https://gerrit.onap.org/r/#/c/63235) has created a nodePort conflict with
Pomba.  Until this is resolved, the workaround is to disable one of them if you
don't require both components.

As a reminder to all, when adding new nodePorts to charts, ensure that they are 
properly reserved and unique:  
https://wiki.onap.org/display/DW/OOM+NodePort+List
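A quick, hedged way to spot such collisions is to grep the charts for nodePort values and look for duplicates, or to check what is already allocated in a running cluster; the paths assume a local OOM checkout and the grep pattern is only illustrative:

$ grep -R --include='*.yaml' -h nodePort oom/kubernetes/ | sort | uniq -d
$ kubectl get svc -n onap -o yaml | grep nodePort | sort | uniq -d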

Thanks,
James

From: onap-discuss@lists.onap.org  On Behalf Of 
George Clapp
Sent: Monday, September 3, 2018 7:31 PM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] [OOM] helm install local/onap fails

I am attempting for the first time to install ONAP using the instructions at 
"ONAP on Kubernetes with Rancher" and "OOM User Guide" in an OpenStack 
environment.  I got to the point of the command:

helm install local/onap --name development

And got these error messages:

% helm install local/onap --name development
Error: release development failed: Service "pomba-kibana" is invalid: 
spec.ports[0].nodePort: Invalid value: 30234: provided port is already allocated
% helm install local/onap --name development
Error: release development failed: secrets "development-aaf-cs" already exists

I searched but nothing came up about this error.  I would greatly appreciate 
any suggestions.

Thanks,
George




Re: [onap-discuss] [SDNC] sdnc/oam: removing and purging files from git history

2018-08-31 Thread James MacNider
Hi Alexis,

Dan and I looked into this a while back as well; see
https://jira.onap.org/browse/SDNC-29.  It reached a dead end because the LF
would not grant the permissions required to clean up the history of the repo.
The suggested alternative is to do shallow clones (use "--depth=1") when
cloning this repository.
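For example, a shallow clone of sdnc/oam along these lines avoids pulling down the deleted ODL distributions (the repository URL follows the standard ONAP Gerrit layout and is an assumption here):

$ git clone --depth=1 https://gerrit.onap.org/r/sdnc/oam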

Not ideal, I agree.

Thanks,
James


From: onap-discuss@lists.onap.org  On Behalf Of 
Alexis de Talhouet
Sent: Thursday, August 30, 2018 9:37 AM
To: onap-s...@lists.onap.org; onap-discuss ; 
helpd...@onap.org
Subject: [onap-discuss] [SDNC] sdnc/oam: removing and purging files from git 
history

Greetings,

The sdnc/oam project has some large deleted files remaining in its git
repository, making the git clone pretty long, as almost 1GB has to be transferred.

I found through this website a way to get rid of that: 
https://blog.ostermiller.org/git-remove-from-history

So locally I’ve done the following; the main culprits are the two ODL distributions:

$ git rev-list --objects --all | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' | sed -n 's/^blob //p' | sort --numeric-sort --key=2 | cut -c 1-12,41-

SHA          SIZE      FILE NAME
e3c6abcc979f 295660449 installation/sdnc/src/main/resources/distribution-karaf-0.4.2-Beryllium-SR2.tar.gz
b6c13a8f784e 424102999 installation/sdnc/src/main/resources/distribution-karaf-0.5.1-Boron-SR1.tar.gz

$ git filter-branch --tag-name-filter cat --index-filter 'git rm -r --cached --ignore-unmatch installation/sdnc/src/main/resources/distribution-karaf-0.4.2-Beryllium-SR2.tar.gz' --prune-empty -f -- --all
$ git filter-branch --tag-name-filter cat --index-filter 'git rm -r --cached --ignore-unmatch installation/sdnc/src/main/resources/distribution-karaf-0.5.1-Boron-SR1.tar.gz' --prune-empty -f -- --all

$ rm -rf .git/refs/original/
$ git reflog expire --expire=now --all
$ git gc --aggressive --prune=now


I have submitted https://gerrit.onap.org/r/#/c/63805/ to get rid of the first 
one.

But the second one was already removed, yet it still remains somewhere as an
object in the git repository.

Is it possible for the gerrit administrator to run some clean-up command in the 
git repo to get rid of objects that were deleted?

Regards,
Alexis




[onap-discuss] A&AI service naming in OOM

2018-05-04 Thread James MacNider
Hi Harish,

I've got a question about the naming of the A&AI service in OOM that listens on 
port 8443.  Currently, this service is called simply 'aai', and is referred to 
as such by all clients.

$ kubectl get svc -n onap
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                             AGE
aai                 NodePort    10.43.220.69    <none>        8080:30232/TCP,8443:30233/TCP                                       33s
aai-cassandra       ClusterIP   None            <none>        9042/TCP,9160/TCP                                                   34s
aai-elasticsearch   ClusterIP   None            <none>        9200/TCP                                                            34s
aai-hbase           ClusterIP   None            <none>        2181/TCP,8080/TCP,8085/TCP,9090/TCP,16000/TCP,16010/TCP,16201/TCP   34s
aai-modelloader     NodePort    10.43.124.107   <none>        8080:30210/TCP,8443:30229/TCP                                       34s
aai-resources       ClusterIP   None            <none>        8447/TCP,5005/TCP                                                   34s
aai-search-data     ClusterIP   None            <none>        9509/TCP                                                            34s
aai-sparky-be       ClusterIP   None            <none>        9517/TCP                                                            33s
aai-traversal       ClusterIP   None            <none>        8446/TCP,5005/TCP                                                   33s

I noticed though that "aai-aai" is configured as the service name in 
oom/kubernetes/aai/values.yaml, and also in the values.yaml for aai-sparky-be:

$ grep -R aai-aai *
aai/values.yaml:serviceName: aai-aai
aai/charts/aai-sparky-be/values.yaml:serviceName: aai-aai


Is the "aai-aai" naming a relic that should be removed?

Thanks,

James MacNider
Software Architect

Open Network Division
Amdocs Technology
(office) (613)-595-5213

Amdocs is a Platinum Member of ONAP <https://onap.org/>



Re: [onap-discuss] [onap-sdnc] Query Regarding ONAP Installation on Kubernettes Rancher Multi VM Platform

2018-03-21 Thread James MacNider
I had a similar issue, but I wasn't able to figure out the reason behind it.  I 
did try an experimental install with Rancher 2.0, which happened to work for 
me, but it isn't really supported yet.

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes+on+Rancher#ONAPonKubernetesonRancher-Rancher2.0


From: onap-sdnc-boun...@lists.onap.org 
[mailto:onap-sdnc-boun...@lists.onap.org] On Behalf Of Saiguru Mahesh Thota
Sent: Wednesday, March 21, 2018 11:20 AM
To: onap-discuss@lists.onap.org; onap-s...@lists.onap.org; 
onap-...@lists.onap.org
Subject: [onap-sdnc] Query Regarding ONAP Installation on Kubernettes Rancher 
Multi VM Platform

Hi Team,

I am trying to set up an ONAP environment using Kubernetes Rancher as per the wiki
documentation.

I installed Docker 17.03.02 and Rancher 1.6.14, but I am unable to add hosts,
whereas I can with the Amsterdam configuration (Docker 1.12 and Rancher 1.6.10).

Has anybody faced this issue before?

Please advise.

Regards,
Mahesh
Senior Consultant
Infosys Limited
+91-4030844799
saiguru_th...@infosys.com
