Yeah, but does that mean the instance is alive and billable :-)?  I guess that
counts!  I thought those messages were only sent in response to external
API/admin requests.
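
If these periodic RPC messages are good enough as a liveness signal, a rough
sketch of tracking them might look like this (Python; the broker URL, the
extra 'billing-monitor' queue, and tapping the 'nova' topic exchange with an
additional binding are all my assumptions, not something nova provides):

    import json
    import socket
    from datetime import datetime

    from kombu import Connection, Exchange, Queue

    # Assumption: a stock RabbitMQ on localhost and a topic exchange named
    # 'nova'.  The extra bound queue only receives copies of the messages,
    # it does not steal them from the compute workers.
    nova_exchange = Exchange('nova', type='topic', durable=False)
    monitor_queue = Queue('billing-monitor', exchange=nova_exchange,
                          routing_key='#')

    last_seen = {}  # instance_uuid -> time we last saw it mentioned

    def on_message(body, message):
        msg = json.loads(body) if isinstance(body, str) else body
        uuid = msg.get('args', {}).get('instance_uuid')
        if uuid:
            last_seen[uuid] = datetime.utcnow()
        message.ack()

    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        with conn.Consumer(monitor_queue, callbacks=[on_message]):
            while True:
                try:
                    conn.drain_events(timeout=70)  # ~60s period plus slack
                except socket.timeout:
                    pass  # nothing heard; instances may need checking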

-------------------------------------------------
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060



On Apr 24, 2012, at 3:42 PM, Luis Gervaso wrote:

> These kinds of messages come from the nova exchange approximately every 60
> seconds.
> 
> Could this be considered a heartbeat for you?
> 
>  [x] Received '{"_context_roles": ["admin"], "_msg_id": 
> "a2d13735baad4613b89c6132e0fa8302", "_context_read_deleted": "no", 
> "_context_request_id": "req-d7ffbe78-7a9c-4d20-9ac5-3e56951526fe", "args": 
> {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7", 
> "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2", 
> "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin": true, 
> "_context_project_id": null, "_context_timestamp": 
> "2012-03-24T01:36:48.774891", "_context_user_id": null, "method": 
> "get_instance_nw_info", "_context_remote_address": null}'
> 
> 
> 
> On Tue, Apr 24, 2012 at 9:31 PM, Brian Schott 
> <brian.sch...@nimbisservices.com> wrote:
> I take it that the instance manager doesn't generate any kind of heartbeat,
> so whatever monitoring/archiving service we build should poll the status
> over MQ internally?
> 
> 
> -------------------------------------------------
> Brian Schott, CTO
> Nimbis Services, Inc.
> brian.sch...@nimbisservices.com
> ph: 443-274-6064  fx: 443-274-6060
> 
> 
> 
> On Apr 24, 2012, at 2:10 PM, Luis Gervaso wrote:
> 
>> Probably an extra audit system is required. I'm searching for solutions in 
>> the IT market.
>> 
>> Regards
>> 
>> On Tue, Apr 24, 2012 at 6:00 PM, Loic Dachary <l...@enovance.com> wrote:
>> On 04/24/2012 04:45 PM, Monsyne Dragon wrote:
>>> 
>>> 
>>> On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:
>>> 
>>>> On 04/24/2012 03:06 PM, Monsyne Dragon wrote:
>>>>> 
>>>>> Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each
>>>>> instance. The event has the somewhat generic name of
>>>>> 'compute.instance.exists' and is emitted on a periodic basis, currently
>>>>> by a cronjob.
>>>>> Currently, we only populate bandwidth data from XenServer, but if the
>>>>> hook is implemented for the kvm, etc. drivers, it will be picked up
>>>>> automatically for them as well.
>>>>> 
>>>>> Note that we could report other metrics similarly. 
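>>>>> 
>>>>> For kvm the hook would presumably end up asking libvirt for the counters.
>>>>> A rough sketch of the idea (the function name, return format, and the
>>>>> 'vnet0' device are illustrative only, not existing nova code):
>>>>> 
>>>>>     import libvirt
>>>>> 
>>>>>     def get_vif_bandwidth(instance_name, devices=('vnet0',)):
>>>>>         # In a real hook the VIF device names would come from the
>>>>>         # domain XML rather than being passed in.
>>>>>         conn = libvirt.open('qemu:///system')
>>>>>         try:
>>>>>             dom = conn.lookupByName(instance_name)
>>>>>             stats = {}
>>>>>             for dev in devices:
>>>>>                 rx, _, _, _, tx, _, _, _ = dom.interfaceStats(dev)
>>>>>                 stats[dev] = {'bw_in': rx, 'bw_out': tx}
>>>>>             return stats
>>>>>         finally:
>>>>>             conn.close()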
>>>> Hi,
>>>> 
>>>> Thanks for clarifying this. So you're suggesting that the metering agent
>>>> should collect this data from the nova queue instead of extracting it from
>>>> the system (interface, disk stats, etc.)? And for other OpenStack
>>>> components (as Nick Barcet suggests below) the metering agent will have
>>>> to find another way. Or do you have something else in mind?
>>> 
>>> If it's something we have access to, we should emit it in those usage
>>> events.  As for the other components, glance is already using the same
>>> notification system (there was a thread a while back about putting it into
>>> openstack.common).  It would be nice to have all of the components using it.
>>>  
>>> 
>> Hi,
>> 
>> I don't see a section in http://wiki.openstack.org/SystemUsageData about
>> making sure all messages related to a billable event are accounted for. I
>> mean, for instance, what if the event that says an instance is deleted is
>> lost? How is the billing software supposed to cope with that? If it checks
>> the status of all VMs on a regular basis to deal with this, how can it
>> figure out when the missed event occurred?
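>> 
>> (To make that last point concrete: with only periodic status checks, the
>> best the billing code can do for a lost delete event is to bound the
>> deletion time, roughly along these lines; this is a hypothetical sketch,
>> not existing code:)
>> 
>>     def missing_deletes(exists_events, delete_events, still_running):
>>         """exists_events: {uuid: [audit_period_ending, ...]};
>>         delete_events, still_running: sets of instance uuids."""
>>         synthesized = []
>>         for uuid, periods in exists_events.items():
>>             if uuid not in delete_events and uuid not in still_running:
>>                 # The exact deletion time is unknown; all we can say is
>>                 # that it happened some time after the last audit period
>>                 # in which the instance was reported.
>>                 synthesized.append((uuid, max(periods)))
>>         return synthesized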
>> 
>> It would be worth adding a short section about this in
>> http://wiki.openstack.org/SystemUsageData. Or I can do it if you give me a
>> hint.
>> 
>> Cheers
>> 
>>>> Cheers
>>>> 
>>>> On 04/24/2012 12:17 PM, Nick Barcet wrote:
>>>>> 
>>>>> On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>>>>>> > 
>>>>>> > 
>>>>>> > On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
>>>>>> > <brian.sch...@nimbisservices.com> wrote:
>>>>>> > 
>>>>>> >     Doug,
>>>>>> > 
>>>>>> >     Do we mirror the table structure of nova, etc. and add
>>>>>> >     created/modified columns? 
>>>>>> > 
>>>>>> > 
>>>>>> >     Or do we flatten into an instance event record with everything?  
>>>>>> > 
>>>>>> > 
>>>>>> > I lean towards flattening the data as it is recorded and making a 
>>>>>> > second
>>>>>> > pass during the bill calculation. You need to record instance
>>>>>> > modifications separately from the creation, especially if the
>>>>>> > modification changes the billing rate. So you might have records for:
>>>>>> > 
>>>>>> > created instance, with UUID, name, size, timestamp, ownership
>>>>>> > information, etc.
>>>>>> > resized instance, with UUID, name, new size, timestamp, ownership
>>>>>> > information, etc.
>>>>>> > deleted instance, with UUID, name, size, timestamp, ownership
>>>>>> > information, etc.
>>>>>> > 
>>>>>> > Maybe some of those values don't need to be reported in some cases, but
>>>>>> > if you record a complete picture of the state of the instance then the
>>>>>> > code that aggregates the event records to produce billing information
>>>>>> > can use it to make decisions about how to record the charges.
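>>>>>> > 
>>>>>> > A flattened record for, say, the resize case might look something like
>>>>>> > this (field names are only illustrative, not a fixed schema; the uuid
>>>>>> > and tenant are borrowed from the message earlier in the thread and the
>>>>>> > rest is made up):
>>>>>> > 
>>>>>> >     event = {
>>>>>> >         'event_type': 'instance.resized',
>>>>>> >         'instance_uuid': 'e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7',
>>>>>> >         'tenant_id': 'c290118b14564257be26a2cb901721a2',
>>>>>> >         'user_id': 'some-user',        # hypothetical
>>>>>> >         'name': 'web-1',               # hypothetical
>>>>>> >         'old_flavor': 'm1.small',
>>>>>> >         'new_flavor': 'm1.large',
>>>>>> >         'timestamp': '2012-03-24T01:36:48Z',
>>>>>> >     }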
>>>>>> > 
>>>>>> > There is also the case where an instance is no longer running but
>>>>>> > nova thinks it is (or the reverse), so some sort of auditing sweep
>>>>>> > needs to be included (I think that's what Dough called the "farmer"
>>>>>> > but I don't have my notes in front of me).
>>>>> When I wrote [1], one of the things I deliberately did not assume was how
>>>>> agents would collect their information. I imagined that the system should
>>>>> allow for multiple implementations of agents that would collect the same
>>>>> counters, assuming that two implementations for the same counter should
>>>>> never be running at once.
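>>>>> 
>>>>> To illustrate the idea (purely a sketch, all of the names are made up):
>>>>> 
>>>>>     class CounterAgent(object):
>>>>>         counter = None  # e.g. 'n4.external_bytes_out'
>>>>>         def poll(self):
>>>>>             """Return (value, timestamp) for this counter."""
>>>>>             raise NotImplementedError
>>>>> 
>>>>>     class NovaEventAgent(CounterAgent):
>>>>>         counter = 'n4.external_bytes_out'
>>>>>         def poll(self):
>>>>>             # would derive the value from compute.instance.exists events
>>>>>             return 0, None
>>>>> 
>>>>>     class HypervisorAgent(CounterAgent):
>>>>>         counter = 'n4.external_bytes_out'
>>>>>         def poll(self):
>>>>>             # would read the value directly from the hypervisor
>>>>>             return 0, None
>>>>> 
>>>>>     # a deployment enables exactly one implementation per counter
>>>>>     enabled = {'n4.external_bytes_out': NovaEventAgent()}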
>>>>> 
>>>>> That said, I am not sure an event-based collection of what nova is
>>>>> notifying would satisfy the requirements I have heard from many cloud
>>>>> providers:
>>>>> - how do we ensure that events are not forged or lost in the current nova
>>>>> system?
>>>>> - how can I be sure that an instance has not simply crashed and never
>>>>> started?
>>>>> - how can I collect information which is not captured by nova events?
>>>>> 
>>>>> Hence the proposal to use a dedicated event queue for billing, allowing
>>>>> agents to collect and eventually validate data from different sources,
>>>>> including, but not necessarily limited to, collection from the nova
>>>>> events.
>>>>> 
>>>>> Moreover, as soon as you generalize the problem to components other than
>>>>> just Nova (swift, glance, quantum, daas, ...), just using the nova event
>>>>> queue is not an option anymore.
>>>>> 
>>>>> [1] http://wiki.openstack.org/EfficientMetering
>>>>> 
>>>>> Nick
>>>>> 
>>>>> 
>>>> 
>>>>> On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:
>>>>> 
>>>>>> I think we have support for this currently in some fashion, Dragon?
>>>>>> 
>>>>>> -S
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>>>>>>> Metering needs to account for the "volume of data sent to external
>>>>>>> network destinations" (i.e. n4 in
>>>>>>> http://wiki.openstack.org/EfficientMetering) or the disk I/O, etc. This
>>>>>>> kind of resource is billable.
>>>>>>> 
>>>>>>> The information described at http://wiki.openstack.org/SystemUsageData 
>>>>>>> will be used by metering but other data sources need to be harvested as 
>>>>>>> well.
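>>>>>>> 
>>>>>>> (For disk I/O on kvm hosts, for example, the harvesting side could be as
>>>>>>> small as this; the domain and device names are just examples, not what a
>>>>>>> real agent would hard-code:)
>>>>>>> 
>>>>>>>     import libvirt
>>>>>>> 
>>>>>>>     conn = libvirt.open('qemu:///system')
>>>>>>>     dom = conn.lookupByName('instance-00000006')
>>>>>>>     rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
>>>>>>>     print(rd_bytes, wr_bytes)
>>>>>>>     conn.close()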
>>>>> --
>>>>>   Monsyne M. Dragon
>>>>>   OpenStack/Nova 
>>>>>   cell 210-441-0965
>>>>>   work x 5014190
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Loïc Dachary         Chief Research Officer
>>>> // eNovance labs   http://labs.enovance.com
>>>> // ✉ l...@enovance.com  ☎ +33 1 49 70 99 82
>>> 
>>> --
>>> Monsyne M. Dragon
>>> OpenStack/Nova 
>>> cell 210-441-0965
>>> work x 5014190
>>> 
>> 
>> 
>> -- 
>> Loïc Dachary         Chief Research Officer
>> // eNovance labs   http://labs.enovance.com
>> // ✉ l...@enovance.com  ☎ +33 1 49 70 99 82
>> 
>> 
>> 
>> 
>> 
>> -- 
>> -------------------------------------------
>> Luis Alberto Gervaso Martin
>> Woorea Solutions, S.L
>> CEO & CTO
>> mobile: (+34) 627983344
>> l...@woorea.es
>> 
> 
> 
> 
> 
> -- 
> -------------------------------------------
> Luis Alberto Gervaso Martin
> Woorea Solutions, S.L
> CEO & CTO
> mobile: (+34) 627983344
> l...@woorea.es
> 

