Thank you very much Michael, very useful link indeed

To be honest, we have already given up on all-in-one deployments and are mostly 
doing 3-VM deployments for Beijing ONAP.

One of the reasons was this 110-pod limit, but we also had another challenge 
with the all-in-one (single-VM) ONAP deployment.

 

Rancher networking was collapsing in our case after some time (usually after 
24-36 hrs). The symptom was that when running "kubectl get cs" there was a 
failed etcd check. The problem disappeared when we moved the Rancher server to 
another node, so in practice we concluded that collocating the Rancher server 
with the k8s master node is a bad idea. I am wondering whether there are really 
any working all-in-one ONAP deployments which can run longer …. did you face 
the same problem?
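For reference, the failing check above can be watched for in a script; a minimal sketch, assuming the tabular output format of "kubectl get cs" from that Kubernetes version (the check_cs helper name is ours, not a standard tool):

```shell
#!/bin/sh
# Report any component from "kubectl get cs" that is not Healthy.
# check_cs reads the listing on stdin so it can be exercised with
# canned text as well as live cluster output.
check_cs() {
  awk 'NR > 1 && $2 != "Healthy" { print $1; bad = 1 }
       END { exit bad }'
}

# Live usage (uncomment on a real cluster):
# kubectl get cs | check_cs || echo "unhealthy components found"
```

Running this from cron every few minutes would have caught the etcd failure well before the 24-36 hr collapse.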

 

thanks again,

Michal

 

From: Michael O'Brien [mailto:frank.obr...@amdocs.com] 
Sent: Wednesday, September 5, 2018 10:26 PM
To: m.pta...@partner.samsung.com; onap-discuss@lists.onap.org
Subject: RE: [onap-discuss] OOM Beijing HW requirements

 

Michal,

   Hi, there is a procedure to increase the 110 limit here

https://lists.onap.org/g/onap-discuss/topic/oom_110_kubernetes_pod/25213556?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,25213556
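For anyone skimming the thread: the procedure behind that link comes down to raising the kubelet's per-node pod cap, which defaults to 110. A hedged sketch of the relevant setting (the --max-pods flag is the standard kubelet option; where exactly it is set in a Rancher 1.6 environment is described in the linked post, not here):

```shell
# The kubelet caps pods per node at 110 by default. It can be raised
# by passing --max-pods to the kubelet, e.g. as part of its argument
# list (on newer versions, maxPods in the KubeletConfiguration file):
#
#   kubelet ... --max-pods=200
#
# After the kubelet restarts, verify the new capacity:
#
#   kubectl describe nodes | grep -A4 'Allocatable:'
```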

   /michael

 

From: onap-discuss-boun...@lists.onap.org <onap-discuss-boun...@lists.onap.org> On Behalf Of Roger Maitland
Sent: Monday, May 7, 2018 11:38 AM
To: m.pta...@partner.samsung.com; onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] OOM Beijing HW requirements

 

Hi Michal,

 

The limit of 110 pods applies per node, so if you add another node (or two) to 
your cluster you’ll avoid this problem. We’ve had problems in the past with 
multi-node configurations, but that seems to be behind us. I’ll put a note in 
the documentation – thanks for reporting the problem.
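To see how much headroom each node has against the 110-pod cap before adding another one, the per-node pod count can be tallied; a small sketch (the node name is taken from the last field, which holds NODE in the kubectl 1.8-era "-o wide" listing assumed here; newer versions append extra columns, so adjust accordingly):

```shell
#!/bin/sh
# Count pods per node from "kubectl get pods --all-namespaces -o wide"
# output, read on stdin so the parsing can be tested with canned text.
pods_per_node() {
  awk 'NR > 1 { count[$NF]++ }
       END { for (n in count) print n, count[n] }'
}

# Live usage (uncomment on a real cluster):
# kubectl get pods --all-namespaces -o wide | pods_per_node
```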

 

Cheers,
Roger

 

From: <onap-discuss-boun...@lists.onap.org> on behalf of Michal Ptacek <m.pta...@partner.samsung.com>
Reply-To: "m.pta...@partner.samsung.com" <m.pta...@partner.samsung.com>
Date: Friday, May 4, 2018 at 11:08 AM
To: "onap-discuss@lists.onap.org" <onap-discuss@lists.onap.org>
Subject: [onap-discuss] OOM Beijing HW requirements
Subject: [onap-discuss] OOM Beijing HW requirements

 

 

Hi all,

 

I am trying to deploy the latest ONAP (Beijing) using OOM (master branch) and I 
ran into some challenges with the suggested HW resources for an all-in-one 
(Rancher server + k8s host on a single node) deployment of ONAP:

 

HW resources:

* 24 vCPU
* 128 GB RAM
* 500 GB disk space
* RHEL 7.4 OS
* Rancher 1.6.14
* Kubernetes 1.8.10
* Helm 2.8.2
* kubectl 1.8.12
* Docker 17.03-ce

 

Problem:

Some random pods are unable to spawn and are stuck in "Pending" state with the 
error "No nodes are available that match all of the predicates: Insufficient 
pods (1)."

 

Analysis:

It seems that my server can allocate a maximum of 110 pods, which I found in 
the "kubectl describe nodes" output:

...
Allocatable:
 cpu:     24
 memory:  102861656Ki
 pods:    110
...
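That value can also be pulled out non-interactively; a minimal sketch that parses the plain-text "describe" output shown above (on clusters where it works, `kubectl get nodes -o jsonpath='{.items[0].status.allocatable.pods}'` is a less fragile alternative):

```shell
#!/bin/sh
# Extract the "pods:" value from the Allocatable section of
# "kubectl describe nodes" output, read on stdin so the parsing
# can be exercised with canned text.
allocatable_pods() {
  awk '/^Allocatable:/ { in_alloc = 1; next }
       in_alloc && /^[^ ]/ { in_alloc = 0 }
       in_alloc && $1 == "pods:" { print $2 }'
}

# Live usage (uncomment on a real cluster):
# kubectl describe nodes | allocatable_pods
```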

A full ONAP deployment with all components enabled might be 120+ pods, but I 
did not even get to 110 running; maybe there is some race condition for the 
last pods. If I disable 5+ components in onap/values.yaml, it fits on that 
server, but then it seems that the "Minimum Hardware Configuration" described 
in

http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html

is wrong (120 GB RAM, 160 GB disk, 16 vCPU).

 

Or is there any hint on how to increase that maximum number of allocatable pods?

 

thanks,

Michal
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement, which you may review 
at https://www.amdocs.com/about/email-disclaimer




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12262): https://lists.onap.org/g/onap-discuss/message/12262
Mute This Topic: https://lists.onap.org/mt/22460466/21656
Group Owner: onap-discuss+ow...@lists.onap.org
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
