FYI, I’ve just made the comparison with the HEAT requirements. The footprint 
for OOM is slightly smaller, but more than 80% of it is DCAE’s footprint.

HEAT
29 VM
148 vCPU
336 GB RAM
3 TB Storage
29 floating IP addresses
OOM
17 VM
123 vCPU
294 GB RAM
2300 GB Storage
15 floating IP addresses
DCAE itself
15 VM
113 vCPU
226 GB RAM
2260 GB Storage
15 floating IP addresses
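The "more than 80%" claim can be sanity-checked with a few lines of Python (a quick sketch; the dictionaries simply restate the OOM and DCAE figures above):

```python
# Figures from the comparison above (storage in GB).
oom = {"vm": 17, "vcpu": 123, "ram_gb": 294, "storage_gb": 2300, "floating_ip": 15}
dcae = {"vm": 15, "vcpu": 113, "ram_gb": 226, "storage_gb": 2260, "floating_ip": 15}

# DCAE's share of the OOM footprint per dimension.
for key in oom:
    share = dcae[key] / oom[key] * 100
    print(f"{key}: {share:.0f}% of OOM footprint is DCAE")
# vm: 88%, vcpu: 92%, ram_gb: 77%, storage_gb: 98%, floating_ip: 100%
```

So DCAE is above 80% on every dimension except RAM, where it sits just under (77%).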
Hope it helps,
Alexis

> On Jan 4, 2018, at 3:46 PM, Alexis de Talhouët <adetalhoue...@gmail.com> 
> wrote:
> 
> Guarav, here are the exact numbers for DCAE requirement.
> 
> 15 instances
> 113 vCPU
> 226 GB RAM
> 2260 GB disk
> 15 floating IP
> 
>> On Jan 4, 2018, at 12:22 PM, Alexis de Talhouët <adetalhoue...@gmail.com 
>> <mailto:adetalhoue...@gmail.com>> wrote:
>> 
>> Gaurav, happy new year to you too! See answers inline.
>> 
>>> On Jan 4, 2018, at 12:14 PM, Gaurav Gupta (c) <guptagau...@vmware.com 
>>> <mailto:guptagau...@vmware.com>> wrote:
>>> 
>>> Alexis , Michael 
>>> 
>>> Happy new year ,
>>> 
>>> I had a couple of questions:
>>> 
>>> a - What is the clean requirement for OOM in terms of memory/vCPU if the 
>>> closed-loop demo is to be attempted, implying DCAE is also part of the 
>>> deployment?
>> 
>> AdT:
>> - 1 VM for Rancher: 2 vCPUs - 4 GB RAM - 40 GB disk
>> - 1 VM for ONAP: 8 vCPUs - 64 GB RAM - 100 GB disk - 16 GB swap (I added 
>> some swap because in ONAP most of the apps are not always active; most of 
>> them are idle, so it's fine to let the host store dirty pages in swap.)
>> - 14 VMs for DCAE: 130 vCPUs - 300 GB RAM - 1.5 TB disk (note: this is 
>> from memory, I just tore down my setup, and I don’t recall exactly)
>> 
>> 
>> 
>>> b - How many VMs would we need if we were to use a Rancher-based OOM setup?
>> 
>> AdT: As specified above, only one VM for Rancher.
>> 
>>> c - What release should I be using if I were to install the Amsterdam 
>>> maintenance release?
>> 
>> AdT: OOM amsterdam branch. Note: DCAE isn’t yet merged.
>> 
>>> 
>>> 
>>> thanks 
>>> Gaurav 
>> 
> 

_______________________________________________
onap-discuss mailing list
onap-discuss@lists.onap.org
https://lists.onap.org/mailman/listinfo/onap-discuss