Re: [onap-discuss] [oom] helm install local/onap fails - fix for #clamp #aaf #pomba

2018-09-06 Thread Roger Maitland
Hi George,

Unfortunately you got caught up in a nodeport conflict.  Nodeports provide the 
mechanism for external access to ports within an ONAP deployment, and because 
nodeport allocation is managed manually, conflicts like this one can occur. The 
OOM team is looking at alternative solutions for exposing all of these ports; 
in the short term Michael has generously re-allocated some ports to try to 
resolve the issue. Note that unless one deploys all of ONAP, a nodeport 
conflict may not show up after what seems like a simple fix, so extra caution 
is advised when changing node ports.
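Because allocation is manual, a practical guard before claiming a new port is to list every nodePort already taken in the cluster. A minimal sketch, using canned PORT(S) column values in place of real `kubectl get services --all-namespaces` output (no cluster is assumed here):

```shell
# Canned PORT(S) values in the format kubectl prints them;
# a stand-in for real cluster output.
ports='8080:30232/TCP,8443:30233/TCP
9516:30279/TCP
443/TCP'

# Extract just the allocated nodePorts (numbers in the 30000-32767 range)
echo "$ports" | grep -o '3[0-2][0-9]\{3\}' | sort -u
# prints 30232, 30233 and 30279, one per line
```

Against a live cluster the same filter can be fed from the real `kubectl get services --all-namespaces` output.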

Thanks for raising the issue.

Cheers,
Roger


Re: [onap-discuss] [oom] helm install local/onap fails - fix for #clamp #aaf #pomba

2018-09-05 Thread George Clapp
I tried to install again after another “helm delete --purge,” but there were a 
lot of residual components left after the failed earlier installation and then 
the delete.  I went into an etcd container and deleted all of the related 
entries and tried again, but got that same error.

Error: release onap failed: Service "dmaap-dr-prov" is invalid: 
spec.ports[0].nodePort: Invalid value: 30259: provided port is already allocated

I’ll revert to Beijing for the time being.

George
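Residual objects like these can be confirmed before retrying; a sketch filtering a canned deployment listing for DMaaP leftovers (the names are hypothetical stand-ins for `kubectl get deployments -n onap` output):

```shell
# Canned listing standing in for: kubectl get deployments -n onap
# (hypothetical deployment names; no cluster assumed)
listing='onap-dmaap-dr-prov
onap-dmaap-dr-node
onap-aai-resources'

# Show only the DMaaP leftovers that would need manual deletion
echo "$listing" | grep dmaap
# prints the two dmaap deployments
```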

 


Re: [onap-discuss] [OOM] helm install local/onap fails - fix for #clamp #aaf #pomba

2018-09-05 Thread George Clapp
I tried again to do a complete installation.  I checked out the master branch 
and did a pull to update and then entered:

sudo helm delete --purge development (the name I had used)

There were some residual deployments that survived the delete, most of them 
related to DMaaP.  I deleted them manually and then upgraded helm from 2.8.2 to 
2.9.1, removed and rebuilt the helm repo local, and did a ‘make all’ in 
oom/kubernetes.  I then entered:

sudo helm install local/onap -n onap --namespace onap

But got this error: Error: release onap failed: Service "dmaap-dr-prov" is 
invalid: spec.ports[0].nodePort: Invalid value: 30259: provided port is already 
allocated

I’m pretty sure that I had deleted all of the residual DMaaP deployments.  Any 
suggestions?

Thanks,
George

 


Re: [onap-discuss] [OOM] helm install local/onap fails - fix for #clamp #aaf #pomba

2018-09-04 Thread Michael O'Brien
Team,
AAF, CLAMP, POMBA, Integration, OOM,
As James has iterated – when you add or change a nodeport, do a full helm 
install/upgrade of your ONAP deployment with “all” pods on, not just the vFW 
subset where several pods are disabled via --set <component>.enabled=false; use 
the default/inferred deployment with no -f override.  Running a subset of ONAP 
for development, to fit in a 64G VM or to reduce the pod count, is OK, but when 
submitting, run the full system if possible – or at least your change with 
onap-minimal (the traditional vFW pods) and then a 2nd deploy with the rest 
of ONAP for vCPE/vVOLTE (the parts that are normally disabled for vFW).
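The point about subset deploys masking conflicts can be seen with two hypothetical charts that claim the same nodePort: a deploy that disables one of them never triggers the clash. A sketch over illustrative fixture files (paths and ports are made up for the demo):

```shell
# Two fixture charts that both claim nodePort 30258
mkdir -p /tmp/full-check/log /tmp/full-check/clamp
printf 'nodePort: 30258\n' > /tmp/full-check/log/values.yaml
printf 'nodePort: 30258\n' > /tmp/full-check/clamp/values.yaml

# "Subset" view with the log chart disabled: no duplicate is reported
grep -rh --include=values.yaml --exclude-dir=log 'nodePort:' /tmp/full-check \
  | awk '{print $2}' | sort | uniq -d

# "Full" view with every chart on: the clash appears
grep -rh --include=values.yaml 'nodePort:' /tmp/full-check \
  | awk '{print $2}' | sort | uniq -d
# prints: 30258
```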

I put a fix in for clamp where they now have nodeport 30258 (previously 
reserved for log/logdemonode) in 
https://wiki.onap.org/display/DW/OOM+NodePort+List instead of the port 
conflicted with pomba – 30234.
I’ll fix my ELK/log RI container to use 304xx later – it is not part of the 
requirements.yaml (it only shares the namespace) – so there is no conflict 
there until I bring up the 2nd deployment.
https://gerrit.onap.org/r/#/c/64545/
Tested and ready for merge.

Some background on 63235 - as we need a 304xx nodeport - this jira is 
really 2 parts (logging and exposing a new port) - we failed to test the 
conflict with pomba because my AWS CD system was temporarily not running - 
after the AAF/CLAMP merge for OOM-1174 in https://gerrit.onap.org/r/#/c/63235/
https://gerrit.onap.org/r/#/c/63235/1/kubernetes/clamp/values.yaml
https://git.onap.org/oom/tree/kubernetes/clamp/values.yaml#n98

reproduction:
amdocs@ubuntu:~/_dev/oom/kubernetes$ sudo helm upgrade -i onap local/onap 
--namespace onap -f dev.yaml
Error: UPGRADE FAILED: failed to create resource: Service "pomba-kibana" is 
invalid: spec.ports[0].nodePort: Invalid value: 30234: provided port is already 
allocated
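When the apiserver rejects a nodePort as already allocated, the quickest diagnosis is to find which service currently holds it. A sketch using a canned two-line listing in place of `kubectl get services --all-namespaces` (the listing itself is illustrative; the service names are from this thread):

```shell
# Canned stand-in for: kubectl get services --all-namespaces
services='onap  clamp         NodePort  10.43.0.5  <none>  443:30258/TCP
onap  pomba-kibana  NodePort  10.43.0.6  <none>  5601:30234/TCP'

# Which service already owns nodePort 30234?
echo "$services" | grep ':30234' | awk '{print $2}'
# prints: pomba-kibana
```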

See AAF enablement in CLAMP
https://jira.onap.org/browse/OOM-1174
https://jira.onap.org/browse/OOM-1364
duped to
https://jira.onap.org/browse/OOM-1366

To speed this up - and not have to go to 304xx/306xx - I am giving aaf/clamp my 
last 302xx port, 30258
https://git.onap.org/logging-analytics/tree/reference/logging-kubernetes/logdemonode/charts/logdemonode/values.yaml#n76

That I was using for the logging RI container (reserved on July 26) - I will 
use a 304xx port for logging instead
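One constraint behind all of this port juggling: Kubernetes only allocates NodePorts from the apiserver's `--service-node-port-range`, which defaults to 30000-32767, so the 302xx/304xx/306xx prefixes above all have to sit inside it. A trivial check, assuming the default range:

```shell
# Verify a candidate port falls inside the default NodePort range
port=30258
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  echo "ok: $port is in the default NodePort range"
else
  echo "out of range: $port"
fi
# prints: ok: 30258 is in the default NodePort range
```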

https://wiki.onap.org/pages/viewpage.action?pageId=38112900
adjusting page

https://wiki.onap.org/display/DW/OOM+NodePort+List

Also note that for any hanging PV or secret when using Helm 2.9.1 under 
Kubernetes 1.10.x (via Rancher 1.6.18) use the following
https://wiki.onap.org/display/DW/Logging+Developer+Guide#LoggingDeveloperGuide-Deployingdemopod
normal bouncing of a pod
/oom/kubernetes$ sudo helm install local/onap -n onap --namespace onap -f 
onap/resources/environments/disable-allcharts.yaml --set log.enabled=false
/oom/kubernetes$ sudo helm upgrade -i onap local/onap --namespace onap -f 
onap/resources/environments/disable-allcharts.yaml --set log.enabled=true

Mitigation commands to completely clean the system
sudo helm delete --purge onap
kubectl delete namespace onap


tested
https://wiki.onap.org/display/DW/OOM+NodePort+List#OOMNodePortList-VerifyNoPortConflictsduringfullONAPHelmDeploy
sudo helm delete --purge onap
kubectl delete namespace onap
sudo make all
sudo helm install local/onap -n onap --namespace onap -f dev.yaml
amdocs@ubuntu:~/_dev/oom/kubernetes$ kubectl get services --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
default       kubernetes             ClusterIP   10.43.0.1       <none>        443/TCP                         29d
kube-system   heapster               ClusterIP   10.43.42.68     <none>        80/TCP                          2d
kube-system   kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP                   2d
kube-system   kubernetes-dashboard   ClusterIP   10.43.251.134   <none>        80/TCP                          2d
kube-system   monitoring-grafana     ClusterIP   10.43.179.125   <none>        80/TCP                          2d
kube-system   monitoring-influxdb    ClusterIP   10.43.12.136    <none>        8086/TCP                        2d
kube-system   tiller-deploy          ClusterIP   10.43.130.135   <none>        44134/TCP                       3d
onap          aai                    NodePort    10.43.211.200   <none>        8080:30232/TCP,8443:30233/TCP   27s
onap          aai-babel              NodePort    10.43.140.119   <none>        9516:30279/TCP                  27s
onap          aai-cassandra          ClusterIP   None            <none>        9042/TCP,9160/TCP,61621/TCP     27s
onap          aai-champ              NodePort    10.43.212.151   <none>        9522:30278/TCP                  27s
onap          aai-crud-service       NodePort    10.43